In Pursuit of Laziness

Manish Goregaokar's blog

Integrating Rust and C++ in Firefox

This post was originally drafted in August 2018, but I never got around to finishing it. As such, parts of its framing (e.g. the focus on bindgen) are outdated, given the relative infancy of the interop space at the time. I was recently told that the post is still useful in this form so I decided to finish and publish it anyway, while attempting to mark outdated things as such when I notice them. Everything after the allocators section was written near the time of publication.

In 2017 I worked on the Stylo project, uplifting Servo’s CSS engine (“style system”) into Firefox’s browser engine (“Gecko”). This involved a lot of gnarly FFI between Servo’s Rust codebase and Firefox’s C++ codebase. There were a lot of challenges in doing this, and I feel like it’s worth sharing things from our experiences.

If you’re interested in Rust integrations, you may find this talk by Katharina on Rust - C++ FFI, and this blog post by Henri on integrating encoding-rs into Firefox useful as well.

Who is this post for?

So, right off the bat, I’ll mention that when integrating Rust into a C++ codebase, you want to avoid having integrations as tight as Stylo. Don’t do what we did; make your Rust component mostly self-contained so that you just have to maintain something like ten FFI functions for interacting with it. If this is possible to do, you should do it and your life will be much easier. Pick a clean API boundary, define a straightforward API, use cbindgen or bindgen if necessary without any tricks, and you should be good to go.

That said, sometimes you have to have gnarly integrations, and this blog post is for those use cases. These techniques mostly use bindgen in their examples; however, you can potentially use them with hand-rolled bindings or another tool as well. If you’re at this level of complexity, though, the potential for mistakes in hand-rolled bindings is probably not worth it.

Note from 2021: cxx is probably a better tool for many of the use cases here, though many of the techniques still transfer.

What was involved in Stylo’s FFI?

So, what made Stylo’s FFI so complicated?

It turns out that browsers are quite monolithic. You can split them into vaguely-defined components, but these components are still tightly integrated. If you intend to replace a component, the integration surface you end up with may be a jagged edge.

The style system is more self-contained than other parts, but it’s still quite tightly integrated.

The main job of a “style system” is to take the CSS rules and DOM tree, and run them through “the cascade”1 with an output of “computed styles” tagged on each node in the tree. So, for example, it will take a document like the following:

<style type="text/css">
    body {
        font-size: 12px;
    }
    div {
        height: 2em;
    }
</style>
<body>
    <div id="foo"></div>
</body>

and turn it into something like:

  • <body> has a font-size of 12px, everything else is the default
  • the div #foo has a computed height of 24px 2, everything else is the default. It “inherits” the font-size from <body> as 12px

From a code point of view, this means that Stylo takes in Gecko’s C++ DOM tree. It parses all the CSS, and then runs the cascade on the tree. It stores computed styles on each element in a way that Gecko can read very cheaply.

Style computation can involve some complex steps that require calling back into C++ code. Servo’s style system is multithreaded, but Gecko is mostly designed to work off of a single main thread per process, so we need to deal with this impedance mismatch.

Since the output of Stylo is C++-readable structs, Stylo needs to be able to read and write nontrivial C++ abstractions. Typical FFI involves passing values over a boundary, never to be seen again; here, however, we’re dealing with persistent state that is accessed by both sides.

To sum up, we have:

  • Lots and lots of back-and-forth FFI
  • Thread safety concerns
  • Rust code regularly dealing with nontrivial C++ abstractions
  • A need for nontrivial abstractions to be passed over FFI

All of this conspires to make for some really complicated FFI code.

The actual techniques

I’ll try to structure this so that the more broadly useful (and/or less gnarly) techniques come earlier in the post.

The basics of bindgen

Bindgen is a tool that generates Rust bindings for structs and functions from the provided C or C++ header files. It’s often used for writing Rust bindings to existing C/C++ libraries, however it’s useful for integrations as well.

To use it for an integration, write a header file containing the functions your Rust code needs (referencing structs from other header files if necessary), and run bindgen on it. For some codebases, doing this once and checking in the generated file suffices, but if your C++ code is going to change a lot, run it as a build dependency instead. Beware that this can adversely impact build times, since your Rust build now has a partial C++ compilation step.
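
A minimal build-time setup using the bindgen crate’s builder API might look something like this (the header path and crate layout here are made up for illustration):

// build.rs — a minimal sketch; assumes `bindgen` is listed under [build-dependencies]
use std::env;
use std::path::PathBuf;

fn main() {
    let bindings = bindgen::Builder::default()
        // a hand-written header declaring the functions/structs Rust needs
        .header("ffi/wrapper.h")
        .generate()
        .expect("failed to generate bindings");

    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_dir.join("bindings.rs"))
        .expect("failed to write bindings");
}

The generated file can then be pulled into a module with include!(concat!(env!("OUT_DIR"), "/bindings.rs")).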

For large C++ codebases, pulling in a single header will likely pull in a lot of stuff. You should allowlist, blocklist, and/or mark things as opaque to reduce the amount of bindings generated. It’s best to go the allowlisting route — give bindgen an allowlist of functions and structs to generate bindings for, and it will transitively generate bindings for any dependencies they may have. Even this can end up generating a lot, so it’s worth finding structs you’re not using and marking them as opaque so that their bindings aren’t necessary. Marking something as opaque replaces it with an array of the appropriate size and alignment, so from the Rust side it’s just some bits you don’t care about and can’t introspect further.
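
Continuing the build.rs sketch above, the filtering options look something like this in recent bindgen versions (older versions spell these whitelist_*/blacklist_*; the names below are illustrative):

let bindings = bindgen::Builder::default()
    .header("ffi/wrapper.h")
    // generate bindings for these, plus anything they transitively need
    .allowlist_function("Gecko_.*")
    .allowlist_type("RawGecko.*")
    // keep this type as opaque bytes of the right size and alignment
    .opaque_type("JS::Value")
    // don't generate this type at all (useful for type replacement, covered later)
    .blocklist_type("nsString")
    .generate()
    .expect("failed to generate bindings");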

Bindgen does support some C++ features (you may need to pass -x c++). This is pretty good for generating bindings to e.g. templated structs. However, it’s not possible to support all C++ features here, so you may need to blocklist, opaqueify, or use intermediate types if you have some complicated C++ abstractions in the deps. You’ll typically get an error when generating bindings or when compiling the generated bindings, so don’t worry about this unless that happens.

Bindgen is quite configurable. Stylo has a script that consumes a large toml file containing all of the configuration.

cbindgen

We don’t use cbindgen in Stylo, but it’s used for Webrender. It does the inverse of what bindgen does: given a Rust crate, it generates C headers for its public extern "C" API. It’s also quite configurable.
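
As a quick sketch of what that looks like (the function name here is made up): given a Rust function like this, cbindgen emits a matching C prototype, shown in the comment.

// Rust: part of the crate's public extern "C" API
#[no_mangle]
pub extern "C" fn mylib_area(width: u32, height: u32) -> u64 {
    (width as u64) * (height as u64)
}

// cbindgen would emit a C declaration along these lines:
// uint64_t mylib_area(uint32_t width, uint32_t height);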

cxx

cxx is the cool new hotness in 2021; it approaches the problem from both sides, enabling you to write Rust bindings for C++ and C++ bindings for Rust. It’s definitely worth checking out: a lot of the things that are hard to make work with bindgen are trivial in cxx. For example, it automatically figures out what types need to be opaque, it automatically converts between &T and T* across FFI, and it is overall more targeted at the use case of an FFI layer where Rust and C++ both call each other.
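
A small bridge module gives a flavor of it (the types, functions, and header path here are invented for illustration):

#[cxx::bridge]
mod ffi {
    // Rust items exposed to C++
    extern "Rust" {
        fn style_count(input: &str) -> usize;
    }

    // C++ items exposed to Rust; cxx checks these against the real header
    unsafe extern "C++" {
        include!("myproject/include/document.h");
        type Document;
        fn node_count(doc: &Document) -> usize;
    }
}

fn style_count(input: &str) -> usize {
    input.len() // stand-in implementation
}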

Bindgen-aided C++ calling Rust

So bindgen helps with creating things for Rust to call and manipulate, but not in the opposite direction. cbindgen can help here, but I’m not sure if it’s advisable to have both bindgen and cbindgen operating near each other on the same codebase.

In Stylo we use a bit of a hack for this. Firstly, all FFI functions defined in C++ that Rust calls are declared in one file, and are all named Gecko_*. Bindgen supports regexes for things like allowlisting, so this naming scheme makes it easy to deal with.

We also declare the FFI functions defined in Rust that C++ calls in another file, named Servo_*. They’re also all defined in one place.

However, there’s nothing ensuring that the signatures match! If we’re not careful, there may be mismatches, causing bad things to happen at link time or runtime. We use a small autogenerated unit test to ensure the validity of the signatures.
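
For a made-up example of what that pairing looks like:

// An opaque stand-in for a bindgen-generated type
#[repr(C)]
pub struct RawGeckoDocument {
    _private: [u8; 0],
}

// C++ side, declared by hand in a header:
//   uint32_t Servo_Element_Count(const RawGeckoDocument* doc);

// Rust side: this definition must match the declaration above exactly,
// or bad things happen at link time or runtime.
#[no_mangle]
pub unsafe extern "C" fn Servo_Element_Count(_doc: *const RawGeckoDocument) -> u32 {
    0 // stand-in implementation
}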

This is especially important as we do things like type replacement, and we need tests to ensure that the rug isn’t pulled out from underneath us.

Type replacing for fun and profit

Using blocklisting in conjunction with the --raw-line/raw_line() flag, one can effectively ask bindgen to “replace” types. Blocklisting asks bindgen not to generate bindings for a type, however bindgen will continue to generate bindings referring to that type if necessary. (Unlike opaque types where bindgen generates an opaque binding for the type and uses it everywhere). --raw-line lets you request bindgen to add a line of raw rust code to the file, and such a line can potentially define or import a new version of the type you blocklisted. Effectively, this lets you replace types.
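
For instance, replacing a C++ string type with a Rust-defined, layout-compatible one might look like this (the nsstring crate is real and used by Stylo; the builder calls here are a sketch):

let bindings = bindgen::Builder::default()
    .header("ffi/wrapper.h")
    // don't generate bindings for nsString...
    .blocklist_type("nsString")
    // ...and pull in our own layout-compatible definition instead
    .raw_line("pub use nsstring::nsString;")
    .generate()
    .expect("failed to generate bindings");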

Bindgen generates unit tests ensuring that the layout of your structs is correct (run them!), so if you accidentally replace a type with something incompatible, those tests will catch it at the struct level (function signatures may not be checked).

There are various ways this can be used:

Safe references across FFI

Note from 2021: cxx does this automatically

Calling into C++ (and accepting data from C++) is unsafe. However, there’s no reason we should have to worry about this more than necessary. For example, it would be nice if accessor FFI functions – functions which take a foreign object and return something from inside it – could use lifetimes. It would be even nicer if nullability were represented on the FFI boundary so that you don’t miss null checks, and can assume non-nullness when the C++ API is okay with it.

In Stylo, we have lots of functions like the following:

RawGeckoNodeBorrowedOrNull Gecko_GetLastChild(RawGeckoNodeBorrowed node);

which bindgen translates to:

extern "C" {
    fn Gecko_GetLastChild(x: &RawGeckoNode) -> Option<&RawGeckoNode>;   
}

Using the bindgen build script on a provided list of borrow-able types, we’ve told bindgen that:

  • FooBorrowedOrNull is actually Option<&Foo>
  • FooBorrowed is actually &Foo

Option<&Foo> is represented as a single nullable pointer in Rust, so this is a clean translation. We’re forced to null-check it, but once we do we can safely assume that the reference is valid. Furthermore, due to lifetime elision the actual signature of the FFI function is fn Gecko_GetLastChild<'a>(x: &'a RawGeckoNode) -> Option<&'a RawGeckoNode>, which ensures we won’t let the returned reference outlive the passed reference. Lifetime elision means that we can call C++ functions “safely” with the appropriate lifetime requirements, even though C++ has no such concept!

Note that this is shifting some of the safety invariants to the C++ side: We rely on the C++ to give us valid references, and we rely on it to not have nulls when the type is not marked as nullable. Most C++ codebases internally rely on such invariants for safety anyway, so this isn’t much of a stretch.

We do this on both sides, actually: Many of our Rust-defined extern "C" functions that C++ calls get to be internally-safe because the types let us assume the validity of the pointers obtained from C++.

Making C++ abstractions Rust-accessible

A very useful thing to do here is to replace various C++ abstractions with Rust versions of them that share semantics. In Gecko, most strings are stored in nsString/nsAString/etc.

We’ve written an nsstring crate that represents layout-compatible nsStrings in a more Rusty way, with Rusty APIs. We then ask bindgen to replace Gecko nsStrings with these.

Usually it’s easier to just write an impl for the bindgen-generated abstraction; however, sometimes you must replace it:

  • When the abstraction internally does a lot of template stuff not supported by bindgen
  • When you want the code for the abstraction to be in a separate crate

Potential pitfall: Passing C++ classes by-value over FFI

It’s quite tempting to do stuff like

RefPtr<Foo> Servo_Gimme(...);

where you pass complicated classes by-value over FFI (RefPtr is Gecko’s variant of Rc<T>/Arc<T>).

This works on some systems, but is broken on MSVC: The ABI for passing non-POD types through functions is different. The linker usually notices this and complains, but it’s worth avoiding this entirely.

In Stylo we handle this by using some macro-generated intermediate types which are basically the same thing as the original class but without any constructors/destructors/operators. We convert to/from these types immediately before/after the FFI call, and on the Rust side we do similar conversions to Rust-compatible abstractions.
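
A sketch of the idea, with hypothetical types (these are not Stylo’s actual macros):

// The real Rust-side abstraction; in practice it has Drop/Clone semantics
// and never crosses FFI by value.
pub struct Strong<T> {
    ptr: *mut T,
}

// A plain #[repr(C)] stand-in with the same layout: no constructors,
// destructors, or operators, so every ABI passes it the same way.
#[repr(C)]
pub struct StrongFFI<T> {
    ptr: *mut T,
}

impl<T> Strong<T> {
    // convert immediately before the FFI call...
    pub fn into_ffi(self) -> StrongFFI<T> {
        let ptr = self.ptr;
        std::mem::forget(self); // don't run Drop; ownership moves across FFI
        StrongFFI { ptr }
    }
}

impl<T> StrongFFI<T> {
    // ...and convert back immediately after, on the other side
    pub unsafe fn into_strong(self) -> Strong<T> {
        Strong { ptr: self.ptr }
    }
}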

Sharing abstractions with destructors

If you’re passing ownership of collections or other templated types across FFI, you probably want Rust code to be able to destroy C++ objects, and vice versa.

One way of doing this is to implement Drop on the generated struct. If you have class MyString, you can do:

class MyString {
    // ...
    ~MyString();
};

void MyString_Destroy(MyString* x) {
    x->~MyString();
}

impl Drop for bindings::MyString {
    fn drop(&mut self) {
        unsafe {
            // (bindgen only)
            bindings::MyString::destruct(self);
            // OR
            // bindings::MyString_Destroy(self);
        }
    }
}

The MyString_Destroy function isn’t necessary with bindgen – bindgen will generate a MyString::destruct() function for you – but be careful: this will make your generated bindings very platform-specific, so be sure to only use it if you’re regenerating the bindings at build time. In general, when bindgen generates C++ methods, your bindings become platform-specific and are best regenerated at build time3.

In Stylo we went down the route of manually defining _Destroy() functions since we started off with checked-in platform-agnostic bindings, however we could probably switch to using destruct() if we want to now.

When it comes to generic types, it’s a bit trickier, since Drop can’t be implemented piecewise on a generic type (you cannot impl Drop for MyVector<Foo>). You have to do something like:

template<typename T>
class MyVector {
    // ...
};

// Deallocate buffer, but do not call destructors on elements
void MyVector_Deallocate_Buffer(MyVector<void>* x);
// assume we have an implementation of Iterator for MyVector<T> somewhere

impl<T> Drop for bindings::MyVector<T> {
    fn drop(&mut self) {
        for v in self.iter_mut() {
            // calls the destructor for `v`, if any
            unsafe { std::ptr::drop_in_place(v) }
        }
        unsafe {
            bindings::MyVector_Deallocate_Buffer(
                self as *mut bindings::MyVector<T> as *mut bindings::MyVector<c_void>,
            )
        }
    }
}

Note that if the element type T needs cleanup but lacks a Drop implementation of its own, this will silently fail to clean up the contents of the vector. See the next section for some ways to handle this by creating a “safe” mirror type.

Mirror types

C++ libraries often have useful templated abstractions, and it’s nice to be able to manipulate them from Rust. Sometimes, it’s possible to just tack on semantics on the Rust side (either by adding an implementation or by doing type replacement), but in some cases this is tricky.

For example, Gecko has RefPtr<T>, which is similar to Rc<T>, except the actual refcounting logic is up to T to implement (it can choose between threadsafe, non-threadsafe, etc), which it does by writing AddRef() and Release() methods.

We mirror this in Rust by having a trait:

/// Trait for all objects that have AddRef() and Release()
/// methods and can be placed inside RefPtr<T>
pub unsafe trait RefCounted {
    /// Bump the reference count.
    fn addref(&self);
    /// Decrease the reference count.
    unsafe fn release(&self);
}

/// A custom RefPtr implementation to take into account Drop semantics and
/// a bit less-painful memory management.
pub struct RefPtr<T: RefCounted> {
    ptr: *mut T,
    _marker: PhantomData<T>,
}

We implement the RefCounted trait for C++ types that are wrapped in RefPtr which we wish to access through Rust. We have some macros that make this easier to do. We have to have such a trait, because otherwise Rust code wouldn’t know how to manage various C++ types.
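
The trait is what makes correct Clone and Drop implementations on the mirror RefPtr<T> possible — roughly something like this sketch (details elided):

impl<T: RefCounted> Clone for RefPtr<T> {
    fn clone(&self) -> Self {
        unsafe { (*self.ptr).addref() };
        RefPtr { ptr: self.ptr, _marker: PhantomData }
    }
}

impl<T: RefCounted> Drop for RefPtr<T> {
    fn drop(&mut self) {
        unsafe { (*self.ptr).release() }
    }
}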

However, RefPtr<T> here can’t be the type that ends up being used in bindgen. Rust doesn’t let us do things like impl<T: RefCounted> Drop for RefPtr<T> 4, so we can’t effectively make this work with the bindgen-generated type unless we write a RefCounted implementation for every refcounted type that shows up in the bindgen output at all – which would be a lot of work.

Instead, we let bindgen generate its own RefPtr<T>, called structs::RefPtr<T> (all the structs that bindgen generates for Gecko go in a structs:: module). structs::RefPtr<T> itself doesn’t have enough semantics to be something we can pass around willy-nilly in Rust code without causing leaks. However, it has some methods that allow for conversion into the “safe” mirror RefPtr<T> (but only if T: RefCounted). So if you need to manipulate a RefPtr<T> in a C++ struct somewhere, you immediately use one of the conversion methods to get a safe version of it first, and then do things to it. Refcounted types that don’t have the RefCounted implementation won’t have conversion methods: they may exist in the data you’re manipulating, however you won’t be able to work with them.

In general, whenever attaching extra semantics to generic bindgen types doesn’t work, create a mirror type that’s completely safe to use from Rust, with a trait that gates conversion to the mirror type.

Potential pitfall: Allocators

If you’re passing heap-managed abstractions across FFI, be careful about which code frees which objects. If your Rust and C++ code don’t share allocators, deallocating memory allocated on the other side can have disastrous consequences.

If you’re building a cdylib or staticlib with Rust (this is likely if you’re linking it with a C++ application), the compiler will by default pick the system allocator (malloc), so if your C++ application also uses the same you’re all set.

On some platforms when building rlibs and binaries, Rust may choose jemalloc instead. It’s also possible that your C++ code uses a different allocator (lots of applications use allocators like jemalloc or tcmalloc, some have their own custom allocators like tor_malloc in Tor).

In such cases you have three options:

  • Avoid transferring ownership of heap-allocated items, only share things as borrowed references
  • Call destructors over FFI, as detailed in the section on destructors above
  • Set Rust’s allocator to be the same, as documented in the std::alloc module. Basically, you can use the #[global_allocator] attribute to select which allocator you wish to use, and if necessary you can implement the GlobalAlloc trait on a custom allocator type that calls into whatever custom allocator C++ is using (see the sketch below).
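
A minimal sketch of that last option, assuming hypothetical my_cpp_malloc/my_cpp_free entry points exported by the C++ side:

use std::alloc::{GlobalAlloc, Layout};

extern "C" {
    // hypothetical exports of the allocator the C++ side uses
    fn my_cpp_malloc(size: usize) -> *mut u8;
    fn my_cpp_free(ptr: *mut u8);
}

struct CppAllocator;

unsafe impl GlobalAlloc for CppAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // a real implementation must also honor layout.align(),
        // e.g. by calling an aligned-allocation entry point
        my_cpp_malloc(layout.size())
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        my_cpp_free(ptr)
    }
}

#[global_allocator]
static GLOBAL: CppAllocator = CppAllocator;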

Note from 2021: Most stdlib collections (Vec, for example) now have an optional “custom allocator” type parameter (still unstable at the time of writing) that can be used to swap in a different allocator for a specific use site.

Arcs over FFI: Triomphe

This isn’t really a generalizable technique, but it’s pretty cool and generally instructive, so I’m including it here.

Stylo uses a lot of Arcs. A lot of them. The entire computation of styles makes heavy use of Arc::make_mut’s copy-on-write semantics so that we can build up the style tree in parallel but not have to make unnecessary copies of duplicated/defaulted styles for each element.

Many of these Arcs need to be readable from C++. Rust’s Arc, however, consists of a pointer to an allocation containing a refcount and the data, so if C++ needs to get access to the data it needs to know the layout of the Arc allocation, which we’d rather not do5.

We picked a different route: We created a crate duplicating Arc<T> which behaves almost exactly the same as Arc<T>, but it can be converted to OffsetArc<T> which has its pointer point to the middle of the allocation, where the T begins. To C++, this just looks like a *const T! We were then able to make it work with RefPtr<T> on the C++ side so that C++ can transparently read from the OffsetArc<T>, and only needs to call into Rust if it wishes to clone or drop it.
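
Conceptually, the trick looks something like this (a sketch, not the crate’s actual code):

use std::sync::atomic::AtomicUsize;

// The heap allocation behind Arc<T>: refcount followed by the data.
// repr(C) fixes the field order, and thus the offset of `data`.
#[repr(C)]
struct ArcInner<T> {
    count: AtomicUsize,
    data: T,
}

// Arc<T> points at the start of the ArcInner<T>...
struct Arc<T> {
    ptr: *const ArcInner<T>,
}

// ...while OffsetArc<T> points at the `data` field inside it. To C++ this
// is indistinguishable from a plain `const T*`, so C++ can read through it
// without knowing anything about Arc's layout, and only calls into Rust
// to clone or drop it.
struct OffsetArc<T> {
    ptr: *const T,
}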

The external version of this crate can be found in triomphe. It contains a bunch of other goodies that are additionally useful outside of the FFI world, like ArcBorrow which is essentially “&Arc<T> without double indirection”, UniqueArc<T>, a mutable Arc<T> known to be uniquely owned, and ArcUnion<T, U>, which is a space-efficient union of Arc<T> and Arc<U>.

Other pitfalls

Transparent

It’s very tempting to wrap C++ types in tuple structs and pass them over FFI. For example, one might imagine that the following is okay:

struct Wrapper(bindings::SomeCppType);

extern "C" {
    // C++ signature: `SomeCppType get_cpp_type();`
    fn get_cpp_type() -> Wrapper;
}

This kind of thing is quite useful to get around coherence, or for adding additional semantics to a type.

While there’s basically one obvious way Wrapper can be represented, ABI stuff can be tricky, and Rust’s default layout isn’t guaranteed. It is safer to use #[repr(transparent)], which guarantees that Wrapper will have the same representation as the type it contains.
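
The fixed version of the wrapper above:

#[repr(transparent)]
struct Wrapper(bindings::SomeCppType);

extern "C" {
    // C++ signature: `SomeCppType get_cpp_type();`
    fn get_cpp_type() -> Wrapper;
}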

C enums

Rust supports C-like enums, but there’s a crucial difference between the two languages’ enums. In C, it is not undefined behavior for an enum variable to hold an unlisted value. In fact, the following pattern is not uncommon:

enum Flags {
    Flag1 = 0b0001,
    Flag2 = 0b0010,
    Flag3 = 0b0100,
    Flag4 = 0b1000
};

where the enum is actually used for bitflags, and Flag1 | Flag2 and 0 are both valid values for Flags.

This is not the case in Rust. If you are type-replacing C enums with Rust ones, make sure they are #[repr(C)]. The Rust compiler uses invalid enum values as space for packing other information while optimizing types, for example Rust is able to represent Option<Option<... 255 times .. Option<bool>> as a single byte.

If you are working with a C enum that is used for bitflags like above, please use an integer type instead. #[repr(C)] on enums in Rust guarantees layout, but it is still undefined behavior for any enum to take on invalid values.
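
For example, a newtype over an integer (similar in spirit to what the bitflags crate generates) is a representation for which every bit pattern is valid:

// All bit patterns of u32 are valid, so nothing C++ sends can be UB here.
#[repr(transparent)]
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Flags(pub u32);

pub const FLAG1: Flags = Flags(0b0001);
pub const FLAG2: Flags = Flags(0b0010);

impl Flags {
    pub fn contains(self, other: Flags) -> bool {
        self.0 & other.0 == other.0
    }
}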

ABI concerns

ABIs can be tricky. If you just use bindgen with no special flags, you can be pretty much guaranteed to have an okay ABI, but as you start doing type replacements, stuff can get murkier.

Firstly, make sure you’re not passing owned C++ classes with destructors/etc across FFI boundaries. See above for why. There’s a bunch of subtle stuff here, but you can avoid most of it if you just don’t pass these things across FFI in an owned way.

Also, try to make sure everything is #[repr(C)] across the boundary. Rust’s improper_ctypes lints will help here.

Should C++ APIs be unconditionally unsafe?

Before I get into this, I want to reiterate that most of the recommendations in this post are for complex C++-Rust integrations, which are likely to only crop up when attempting to rewrite parts of a large C++ codebase in Rust. Such codebases have unique needs and it’s important to calibrate for that when judging what’s right for them.

I recall when this Chromium post and Steve’s cxx post came out, there was a bunch of discussion about C++ functions not being universally marked unsafe. Essentially, a lot of people are of the opinion that all FFI into C++ (or C) should be unconditionally marked unsafe (and that tools like cxx should follow these rules).

Back then I wrote a Reddit comment about my thoughts on this. It’s a comment that’s the length of a blog post in and of itself so I’m not going to reproduce all of it here, but I’ll try to get the gist. I highly suggest you read it instead of this section.

In short, I would recommend people in large, complex codebases doing heavy C++ interop to generally be okay with marking functions calling into C++ as “safe” provided that function would be considered “safe to call without thinking too much about it” on the C++ side, whatever that means for your codebase.

From my post on “undefined” vs “unsafe”, for Rust I define “safe” as

Basically, in Rust a bit of code is “safe” if it cannot exhibit undefined behavior under all circumstances of that code being used.

C++ doesn’t have a rigid language-level concept of safety that can be applied the same way. Instead, most C++ code follows a similar heuristic:

a bit of code is “safe” if it cannot exhibit undefined behavior under all expected circumstances of that code being used.

This is, perhaps, not as good or useful a heuristic as the one we have for Rust, but it’s still a heuristic that gets used in deciding how careful one needs to be when using various APIs. After all, there are plenty of giant C++ codebases out there, they have got to be able to reason about safety somehow.

When you decide to meld together a C++ and Rust codebase, or start rewriting parts of a C++ codebase in Rust, you have already in essence decided for a large part of the codebase to not exactly follow Rust’s safety rules (but hopefully still be safe). There is little to be gained by making that an explicit part of your FFI boundary. Rather, it is more useful to save unsafe on the FFI boundary for truly unsafe functions which you actually do need to be careful to call.

unsafe is useful for finding potential sources of badness in your codebase. For a tightly-integrated Rust/C++ codebase it’s already well known that the C++ side is introducing badness; marking every simple C++ getter as unsafe will lead to alarm fatigue and make it harder to find the real problems.

It’s worth figuring out where this boundary lies for you. Tools like cxx make it straightforward to call C++ functions through a safe interface, and it’s valuable to make use of that support.

Closing comments

Again, before going down this route it’s worth wondering if you really need tight Rust-C++ integration. When possible, it’s always better to pick a small, well-defined API boundary, rather than Stylo-esque tight integration with shared objects and a highly crisscrossed callgraph.

These days cxx is probably the most complete tool for such integrations. bindgen and cbindgen are still quite good, but cxx is C++-first, with a lot more magic, and generally seems to Just Work without too much configuration.

autocxx is a cool concept by Adrian Taylor which melds bindgen and cxx to make something even more magical. It’s currently experimental, but I’m going to be watching it with interest.

Overall the field of Rust and C++ integration is at a stage where it’s mature enough for integrations to be possible without too much effort, but there are still tons of ways things could be improved and I’m super excited to see that happen as more people work on such integrations!

Thanks to Adam Perry, Adrian Taylor, katie martin, Nika Layzell, and Tyler Mandry for reviewing drafts of this post

  1. The cascade in “Cascading Style Sheets” is the process used to take all the potential rules which could apply to an element and find the “most applicable” one that gets actually used. 

  2. The em unit is font-size-relative, so 2em with a font-size of 12px is computed to 2 * 12 = 24px

  3. C++ name mangling is not standardized, so any function with the C++ ABI will generate a #[link_name = "_Z1foobarbaz"] attribute on the Rust side, and the exact string used here will differ across compiler implementations and platforms. Since GCC and Clang follow the same scheme, most people will encounter this problem when their code doesn’t work on Windows due to MSVC using a different scheme. 

  4. Drop impls are restricted in a bunch of ways for safety, in particular you cannot write impl<T: RefCounted> Drop for RefPtr<T> unless RefPtr is defined as RefPtr<T: RefCounted>. It’s not possible to have a generic type that has an impl of Drop for only some possible instantiations of its generics. 

  5. Rust’s standard library does not typically guarantee anything about the layout of its types, and furthermore, Rust does not make many guarantees about the stability of most types without a #[repr] attribute. This would work, but it would be brittle and prone to breakage. 

On Voting Systems

Election season is starting up again, and as with many other topics I’m seeing a lot of overconfident takes from people in tech wanting to “solve” how voting works with naïve techy solutions. Hell, even a presidential candidate seems to have proposed an extremely uninformed plan for “fixing” voting using blockchain technology.

Last year I wrote a thread on Twitter covering some of the essential properties good voting systems uphold as well as how they prevent fraud. It was through the lens of Alameda County’s voting system, where I’ve volunteered as a poll worker in the past (and intend to do again). I’ve been meaning to write down the contents of that thread in blog form for a while, and now seemed like a good opportunity to do it.

I’ll be explaining more about most of these properties later, but ideally, a good voting system should uphold:

  • Secret ballot: Nobody, not even you, can verify who you voted for after you’re out of the polling place, to prevent vote-buying and coercion.
  • Auditable paper trail: We should be able to audit the election. Paper trails are usually the most robust way to enable effective audits.
  • Obviousness: It should be relatively obvious what individuals should be doing when they need to mark their ballots. A system in which it’s easy to “mess up” is a bad system.
  • Accessibility: It should not exclude individuals with disabilities from being able to vote.

How voting works in Alameda County

I’ll first go over how voting in my county works. The system isn’t perfect, but it’s pretty good, and it’s a good springboard for understanding how voting systems in general can work. There’s a poll worker guide you can refer to if you’re really interested in all the specifics.

Broadly speaking, there are four ways to vote:

  • By mail
  • In person at certain government offices, before election day (“early voting”)
  • In person on election day at a polling place
  • Provisionally, in person on election day at a polling place

Voting by mail is pretty straightforward: When you register you can choose to vote by mail (or you can choose to do so online after the fact). You get a ballot in the mail, along with a special envelope. You fill in the ballot at your leisure, stick it in the envelope, write your name/address on the envelope, sign it, and mail it back. There are also convenient ballot dropboxes all over the place in case you’re a millennial like me and don’t want to figure out how to buy stamps1.

If you’re voting by mail you can also show up at any polling place on the day of the election and drop off your ballots in a sealed bin. At the polling place I helped run roughly half of the people coming in were just there to drop off their vote by mail ballots!

Voting by mail is by far the easiest option here. Sadly not all counties support it2. In some states this is even the default option.

As I understand it, voting in person at designated government offices3 is pretty much the same as voting in person at a polling place, it’s just run by government employees instead of volunteers and open for a few weeks before election day.

Poll workers are given some neat bling to wear

In person voting

If you’ve chosen to vote in person, you are supposed to turn up at your assigned polling place (you get your assignment in the mail along with other voter info booklets).

There’s a copy of the list of people assigned to the polling place posted outside, and another with the poll workers inside. When you tell your name to the poll workers, they cross your name off the list, and you have to sign your name next to it4.

  • If your name isn’t on the list, the poll workers will try and find your assigned precinct and inform you that you can go there instead, but you can still choose to vote provisionally at the existing precinct.
  • If your name isn’t on the list of all voters (perhaps you registered very late, or were unable to register), you can also vote provisionally.
  • If your name is on the list but marked as voting-by-mail (and you want to vote in person), you can vote normally only if you surrender your mail ballot (which poll workers will mark as spoiled and put in a separate pouch).
  • If you lost/didn’t receive your ballot, you can always vote provisionally.

When you are voting normally, signing your name on the list fraudulently is illegal.

If it is your first time voting, you need to show some form of ID, but it doesn’t need to be photo ID and even a utility bill is fine.

Once you’re done signing, you’ll be given your ballot cards and a privacy sleeve folder so you can carry your filled ballots around. Because this is California and there are tons of local and state measures, we had 4 (!!) ballot cards, six sides to fill in5. Usually a poll worker will also detach the ballot stubs in front of you and hand them to you to keep. You can use these to check the status (but not the contents!) of your ballot online.

You take your cards to a voting booth, fill them in, and come back. A poll worker will then help you feed your ballot cards into a scanner machine. This machine will reject cards with any problems — which you can fix, rerequesting new ballot cards if necessary, but you then have to spoil and return the old ballot card.

The machine keeps an externally-visible tally of the number of ballots submitted, and an internal tally of all the votes made, ignoring write-ins. It also internally stores ballot cards in one of two bins (depending on write-ins). These bins are verified to be empty when polls open, and are inaccessible till polls close.

It’s important to note that the scanner is not a load-bearing component of the system: It could be replaced with a locked bin with a slot, and the system would still work. The scanner enables one to get preliminary results for the precinct, and provides a way to double-check results.

And that’s it! You’ll be given an I Voted sticker, and you can go home!

Some “I Voted!” stickers in Spanish

Using a voting machine

In case you think you will have trouble filling out a ballot card in pen (e.g. if your vision is heavily impaired), there’s an alternate way to vote that doesn’t involve a pen. Instead, we have a machine which has a touchscreen and an audio unit, which prompts the voter for their selection for each ballot item on the touchscreen or audio unit. When they’re done, the machine will print out a “receipt” listing their choices inside a sealed box with a glass window, so they can verify that their vote was recorded correctly6. Once they’re done the sealed box will scroll the “receipt” out of view so that the next voter can’t see it.

The sealed box is called a Voter-Verified Paper Trail box: the election runners no longer need to trust the machine’s internal memory, they can trust the paper trail inside the box (which, while produced by a potentially-untrustworthy machine, was verified by the voters), and the machine’s internal memory is simply a way to double-check (and get fast preliminary results).

Provisional voting

There are many, many situations in which you may not be able to vote normally. Perhaps you showed up at the wrong precinct but don’t have time to go to the right one. Perhaps you were signed up for vote-by-mail but didn’t receive (or lost) your ballot. Perhaps you recently moved into the county and weren’t able to register in time. Perhaps you were a first-time in-person voter and didn’t have some form of ID.

In such a case you can always vote provisionally. The beauty of this system is that it removes most liability from poll workers: we don’t have any reason to turn people away from the polls, all we can do is refuse to let people vote normally (and instead vote provisionally) in the case of any inconsistencies. This is not to say that malicious poll workers can’t turn people away; it’s illegal but it happens. But well-meaning poll workers cannot, by accident, disenfranchise a voter because we are always allowed to give them a provisional ballot, and that’s an easy rule to follow.

With provisional voting, the voters are given the same ballot cards, but they’re also given an envelope with a form on it. This envelope is equivalent to a voter registration form, (re)registering them in their appropriate county/district7. They vote on the ballot cards normally, but instead of submitting the ballots to the scanner, they put them in the envelope, which goes into a sealed bin8. You’re also given a number you can call to check the status of your ballot.

When you vote provisionally, the registrar of voters will manually process your envelope, remaking your ballot on the right set of cards if necessary, and feeding them into a scanner machine.

Integrity checks

Underlying this system is a bevy of integrity checks. There’s an intricate seal system, with numbered seals of varying colors. Some are to be added and never removed, some are to be removed after checking the number, some are never supposed to be touched, some are added at the beginning of the day and removed at the end of the day.

For example, during setup we check that the bins in the scanner are empty, and seal it with a numbered seal. This number is noted down on a form, along with some numbers from the scanner/touchscreen display. The first person to vote is asked to verify all this, and signs the form along with the poll workers.

Election officials drop in multiple times during the day, and may check these numbers. At the end of the day, the numbers of all seals used, and any physical seals that were removed are sent back along with all the ballots.

Various ballot counts are also kept track of. We keep track of the number of provisional ballots, the number of submitted regular ballots (also kept track by the scanner), the number of ballot cards used, and the number of unused ballots left over. Everything needs to match up at the end of the day, and all unused ballots are sent back. These counts are also noted down.

Poll watchers are allowed to be around for most of this, though I think they’re not allowed to touch anything. I think poll watchers are also allowed to be around when the actual ballots are being counted by election officials.

Immediate local results

As mentioned before, the scanner isn’t a crucial part of the system, but if it happens to be working it can be used to get immediate local results. At the end of the day, the scanner prints out a bunch of stuff, including vote totals for races which got more than N votes (N=20, IIRC), so you get immediate results for your precinct. This printout is supposed to be taped to the polling place doors for everyone to see, and presumably the registrar of voters uses the copy submitted to them to publish quick preliminary results.

Using paper ballots doesn’t mean that we have to give up all the benefits of computers doing some of the work for us! We can still use computers to get fast results, without relying on them for the integrity of the system.

Vote totals posted outside. Our ballots are big and have lots of races on them; so the list of vote totals is absolutely ginormous.

Properties of this voting system

This system has some crucial properties.

Secret ballot

It’s well known that nobody is supposed to be able to see who you voted for. But a crucial part of this system is that, once you submit your ballots, you can’t see who you voted for either. Of course, you probably can remember, but you have no proof. On the face of it this sounds like a bad property — wouldn’t it be nicer if people could verify that their vote was counted correctly?

The problem is that if I can verify that my vote was counted correctly, someone else can coerce me into doing this in front of them to ensure I voted a certain way. Any system that gives me the ability to verify my vote gives people who have power over me (or just people who want to buy my vote) the same ability.

Provisional voting doesn’t quite have this property, but it’s supposed to be for edge cases. Vote by mail trades off some of this property for convenience; people can now see who you voted for while you’re voting (and the people you live with can fraudulently vote on your behalf, too).

Conservation of ballots (Auditable paper trail)

The total number of ballots in the system is roughly conserved and kept track of. If you’re registered to vote by mail, you cannot request a normal ballot without surrendering your vote by mail ballot and envelope (which we mark as spoiled and put in a separate pouch). If you re-request a ballot card because you made a mistake, the old card needs to be similarly spoiled and put away separately. It’s one set of ballot cards per voter, and almost all potential aberrations in this property result in a provisional vote9. Even provisional votes are converted to normal ballot cards in the end.

Eventually, there will be a giant roomful of ballots that cannot be traced back to their individual voters, but that can still be traced back to the entirety of the voters — it’s hard to put a ballot into the system without a corresponding voter. This is perfect — the ballots can be hand-counted, but they can’t be individually correlated with their respective voters.

You don’t even need to recount the entire set of ballots to perform an audit; risk-limiting audits are quite effective and much more efficient to carry out.

Paper ballots

The fact that they can (and should) be hand counted is itself an important property. Hand counting of ballots can be independently verified in ways that can’t be done for software. Despite not being able to trace a ballot back to its voter, there still is a paper trail of integrity for the ballots as a bulk unit.

This property leads to software independence: while we may use software in the process, it’s not possible for a bug in the software to cause an undetectable error in the final vote counts.

Specific vote totals for the top races

Obviousness

Figuring out what to do in the voting booth isn’t hard. You’re allowed to request assistance, but you’ll rarely have to. There are systems (like the scanner’s error checking) that are designed to ensure you don’t mess up, but the system is quite sound even without them; they just provide an additional buffer.

Compare this with the problems some Texas voting machines had last midterm. The machines were somewhat buggy, but, crucially, there was an opaque right and wrong way to use them, and some voters accidentally used it the wrong way, and then didn’t check the final page before submitting. This kind of thing should never happen in a good voting system.

It’s really important that the system is intuitive and hard to make mistakes in.

Fraud prevention

So, how is this robust against fraud?

Firstly, voter fraud isn’t a major problem in the US, and it’s often used as an excuse to propagate voter suppression tactics, which are a major problem here.

But even then, we probably want our system to be robust against fraud.

Let’s see how an individual might thwart this system. They could vote multiple times, under assumed identities. This doesn’t scale well and isn’t really worth it: to influence an election you’d need to do this many times, or have many individuals do it a couple times, and the chance of getting caught (e.g., the people who you are voting as may come by and try to vote later, throwing up a flag) and investigated scales exponentially with the number of votes. That’s not worth it at all.

Maybe poll workers could do something malicious. Poll worker manipulation would largely exist in the form of submitting extra ballots. But that’s hard because the ballot counts need to match the list of voters. So you have the same problem as individual voters committing fraud: if the actual voter shows up, they’ll notice. Poll workers could wait till the end of the day to do this, but then to make any kind of difference you’d have to do a bulk scan of ballots, and that’s very noticeable. Poll workers would have to collude to make anything like this work, and poll watchers (and government staff) may be present.

Poll workers can also discard ballots to influence an election. But you can’t do this in front of the voters, and the receptacles with non-defaced ballots are all sealed so you can’t do this when nobody is watching without having to re-seal (which means you need a new numbered seal, which the election office will notice). The scanner’s inner receptacle is opened at the end of the day but you can’t tamper with that without messing up the various counts.

Election officials have access to giant piles of ballots and could mess with things there, but I suspect poll watchers are present during the ballot sorting and counting process, and again, it’s hard to tamper with anything without messing up the various counts.

Overall, this system is pretty robust. It’s important to note that fraud prevention is achieved by more social means, not technical means: there are seals, counts, and various properties of the system, but no computers involved in any crucial roles.

Techy solutions for voting

In general, amongst the three properties of “secret ballot”, “obviousness”, and “auditable paper trail”, computer-based voting systems almost always fail at one, and usually fail at two.

A lot of naïve tech solutions for voting are explicitly designed to not have the secret ballot property: they are instead designed specifically to let voters check what their vote was counted as after the election. As mentioned earlier, this is a problem for vote-buying and coercion.

It’s theoretically possible to have a system where you can ensure your ballot, specifically, was correctly counted after the election, without losing the secret ballot property: ThreeBallot is a cool example of such a system, though it fails the “obviousness” property.

Most systems end up not having an auditable paper trail since they rely on machines to record votes. This is vulnerable to bugs in the machine: you end up having to trust the output of the machine. Buggy/vulnerable voting machines are so common that every year at DEFCON people get together to hack the latest voting machines, and typically succeed.

Voting machines can still produce a paper trail: Voter-Verified Paper Trail systems partially succeed in doing this. They’re not as good at maintaining the “conservation of ballots” property that makes tampering much harder, and they’re not as good on the “obviousness” front since people need to check the VVPAT box for what their vote was recorded as.

Ballot-Marking devices are a bit better at this: These still produce paper ballots, it’s just that the ballot is marked by the machine on your behalf. There’s still a bit of an “obviousness” fail in that people may not double check the marked ballot, but at least there’s a nice paper trail with ballot conservation! Of course, these only work if the produced ballot is easily human-readable.

It’s not impossible to design good voting systems that rely on technology, but it’s hard to maintain the same properties you can with paper ballots. If you want to try, please keep the properties listed above in mind.

Blockchain?

Every now and then people will suggest using blockchains for voting. This is a pretty large design space, but… most of these proposals are extremely naïve and achieve very little.

For one, most of them are of the category that lose the “secret ballot” property, instead producing some kind of identifier you’re able to check in some published blockchain. This lets you see what your vote was after the election, and as I’ve covered already that’s not a good thing.

Even if this process only lets you verify that your vote was counted (but not what it was), it typically involves some understanding of cryptography to spot-check the validity of the machine output (e.g. you need to verify that some hash is the hash of your actual vote or something). This fails the obviousness property.

Blockchains don’t really bring much to the table here. They’re decent for byzantine fault tolerance in a space without a central authority, but elections do have some form of central authority and we’re not getting rid of that. The anonymity properties of blockchains can usually be achieved without blockchains for things like elections.

There are some kinds of cryptography that can be useful for auditability — zero knowledge proofs and homomorphic encryption come to mind — but you don’t need blockchains to use these, and using them still requires some form of technology as a key part of the voting system, which makes other properties of the system harder to achieve.

Become a poll worker!

It’s still a bit early for the next election, but I highly recommend you volunteer to be a poll worker for your county if you can!

It’s really fun, you get to learn about the inner workings of voting systems, and you get to meet a lot of people!

We had a cool kid come in and more or less do this at one point

Thanks to Nika Layzell, Sunjay Varma, Jane Lusby, and Arshia Mufti for providing feedback on drafts of this blog post.

  1. Last year they required postage, but I think they’ve changed that with a law this year. Yay!

  2. Ostensibly because of fears of voter fraud, but they’re largely unfounded — in practice this just reduces turnout 

  3. I think for Alameda county the only such office is the Registrar of Voters in Oakland 

  4. The crossing-off and signing lists are different, but this isn’t too important. 

  5. I remember one particularly curmudgeonly voter loudly grumbling about all the propositions as they were voting. One doesn’t “vote” in California, one fills out social studies homework. 

  6. I don’t quite recall how the verifiability works for people using the audio unit, they may be allowed to ask someone else to verify for them? 

  7. If you vote in a different precinct, or worse, a different county, the ballot cards may not contain all the same races, so voting provisionally from the wrong district means that you only get to vote for the races common to both ballot cards. 

  8. It’s imperative that these do not go into the scanner (since that operation cannot be undone), and to prevent this poll workers are instructed to not give provisional voters a secrecy sleeve as the envelope acts as a secrecy sleeve. Whoever is supervising the scanner will only allow people with secrecy sleeves to slip their ballots into the scanner. 

  9. The exception is using the touchscreen machine, where you get to vote without using up a ballot card on voting day. However, tallies for the machine are kept separately, and I think these too are eventually turned into normal ballot cards. 

Rust Governance: Scaling Empathy

There’s been a lot of talk about improving Rust’s governance model lately. As we decompress from last year’s hectic edition work, we’re slowly starting to look at all the bits of debt we accumulated, and organizational debt is high on that list.

I’ve been talking in private with people about a bunch of these things for quite a while now, and I felt it worthwhile to write down as much of my thoughts as I can before the Rust All Hands in Berlin this week.

In the interest of brevity1 I’m going to assume the reader is roughly familiar with most of the stuff that’s happened with the Rust community in the past few years. I’m probably going to omit concrete examples of incidents, both to avoid mischaracterizing individual actions (as with most such analyses, I wish to talk in more general terms about trends), and also just because it would take me forever to write this if I were to supply all the layers of context. If you feel something is inaccurate, please let me know.

This blog post is probably going to reach the eyes of non-Rust-community members. You’re welcome to read it, but please accept my apologies in advance if it doesn’t make any sense. This is something that I initially planned to circulate as a private post (writing for a general audience is hard), but I felt this would be more widely useful. However due to time constraints I haven’t had time to edit it to make it acceptable to a wider audience.

The symptoms

Before I actually get into it, I’d like to carefully delineate what the problem is that I’m talking about. Or more accurately, the symptoms I am talking about — as I’ll explain soon I feel like these are not the actual problem but symptoms of a more general problem.

Basically, as time has gone by our decisionmaking process has become more and more arduous, both for community members and the teams. Folks have to deal with:

  • The same arguments getting brought up over and over again
  • Accusations of bad faith
  • Derailing
  • Not feeling heard
  • Just getting exhausted by all the stuff that’s going on

The RFC process is the primary exhibitor of these symptoms, but semi-official consensus-building threads on https://internals.rust-lang.org have similar problems.

Aaron has written some extremely empathetic blog posts about a bunch of these problems, starting with concrete examples and ending with a takeaway of a bunch of values for us to apply as well as thoughts on what our next steps can be. I highly recommend you read them if you haven’t already.

Fundamentally I consider our problems to be social problems, not technical ones. In my opinion, technical solutions like changing the discussion forum format may be necessary but are not sufficient for fixing this.

The scaling problem

I contend that all of these issues are symptoms of an underlying scaling issue, but also a failure of how our moderation works.

The scaling issue is somewhat straightforward. Such forum discussions are inherently N-to-N discussions. When you leave a comment, you’re leaving a comment for everyone to read and interpret, and this is hard to get right. It’s much easier to have one-on-one discussions because it’s easy to build a shared vocabulary and avoid misunderstandings. Any misunderstandings can often be quickly detected and corrected.

I find that most unpleasant technical arguments stem from an unenumerated mismatch of assumptions, or sometimes what I call a mismatch of axioms (i.e. when there is fundamental conflict between core beliefs). A mismatch of assumptions, if identified, can be resolved, leading to an amicable conclusion. Mismatches of axioms are harder to resolve, however recognizing them can take most of the vitriol out of an argument, because both parties will understand each other, even if they don’t agree. In such situations the end result may leave one or both parties unhappy, but rarely angry. (It’s also not necessary that axiom mismatches leave people unhappy, embracing positive sum thinking helps us come to mutually beneficial conclusions)

All of these mismatches are easy to identify in one-on-one discussions, because it’s easy to switch gears to the meta discussion for a bit.

One-on-one discussions are pleasant. They foster empathy.

N-to-N discussions are not. It’s harder to converge on this shared vocabulary amongst N other people. It’s harder to identify these mismatches, partly because it’s hard to switch into the meta-mode of a discussion at all, but also because there’s a lot going on. It’s harder to build empathy.

As we’ve grown, discussion complexity has grown quadratically, and we’re not really attempting to relinearize them.

Hanabi and parallel universes

I quite enjoy the game of Hanabi. It’s a game of information and trust, and I find it extremely fun, especially with the right group.

Hanabi is a cooperative game. You can see everyone’s cards (or tiles) but your own, and information-sharing is severely restricted. The goal is to play the right cards in the right order to collectively win. The gimmick is to share additional information through the side-channel of the choice of move you make.

A very common occurrence in this game is that people start making plans in their mind. You typically have a decent understanding of what information everyone has, and you can use this to make predictions as to what everyone’s moves will be. With this in mind, you can attempt to “set up” situations where the game progresses rapidly in a short period of time. This is somewhat necessary for the game to work, but a common pitfall is for these plans to be extremely elaborate, leading to frustration as the game doesn’t actually play out as planned.

The core issue behind this is forgetting that you actually can’t see the entire game state, since your own cards are hidden. It’s not just you who has plans — everyone does! And each of those plans is incomplete since they’re missing a piece of the picture, just as you are.

In Hanabi it’s very easy to forget that you’re missing a piece of the picture — in competitive card games you mostly can’t see the game state since everyone else’s cards are hidden. But in Hanabi you can see most of the cards and it’s easy to forget that your own four cards are hidden from you.

So what ends up happening is that due to incomplete information, everyone is operating in their own little parallel universe, occasionally getting frustrated when it becomes clear that other players are not operating in the same universe. As long as you recognize the existence of these parallel universes beforehand you’re fine, but if you don’t you will be frustrated.

This is largely true of N-to-N discussions as well. Because most of what’s being said makes sense to an individual in a particular way, it’s very easy for them to forget that other people may not share their assumptions and thus may be on a different page. Every time someone leaves a comment, different people may interpret it differently, “forking” the common understanding of the state of the discussion into multiple parallel universes. Eventually there are enough parallel universes that everyone’s talking past each other.

One thing I often prefer doing in such cases is to have a one-on-one discussion with people who disagree with me — typically the shared understanding that results from such discussions is super useful and can be brought back to the main discussion as something that all participants interpret the same way. I’m not consistent in doing this — in the midst of a heated argument it’s easy to get too wrapped up to think about getting results, and I’ve certainly had my time arguing instead of resolving — but overall, whenever I’ve chosen to do this it’s been a useful policy.

This is a good example of how relinearization and communication can help move N-to-N discussions along. Operating in different parallel universes is kind of the point of Hanabi, but it’s not the point of having a technical discussion.

The moderation problem

In a technical discussion, broadly speaking, I find that there are three kinds of comments disagreeing with you:

  • Constructive: Comments which disagree with you constructively. We’re glad these exist; disagreement can hurt, but it is necessary for us to collaboratively reach the best outcomes.
  • Disruptive: Comments which may be written in good faith but end up being disruptive. For example, this includes people who don’t read enough of the discussion and end up rehashing the same points. It also includes taking discussions off topic. These kinds of things are problematic but not covered by the code of conduct.
  • Abrasive: Comments which are rude/abrasive. These are covered by the code of conduct. The mod team tries to handle these.

(For a long time Aaron and I had a shared vocabulary of “Type A, B, C” for these, mostly because I’m often unimaginative when it comes to such things; thanks to Mark for coming up with better, descriptive titles.)

Note that while I’m talking about “disruptive” comments, this is not a judgement of the intent of the participants, but rather of the harm caused.

The second category – disruptive comments – is the one we’re currently unable to handle well. They snowball pretty badly too — as more and more of these collect, more and more people get frustrated and in turn leave comments that cause further disruption. As the discussion progresses into more and more “parallel universes” it also just becomes easier for a comment to be disruptive.

The Rust moderation team operates mostly passively, we simply don’t have the scale2 to watch for and nip these things in the bud. Active moderation requires a degree of involvement we cannot provide. So while the best response would be to work with participants and resolve issues early as we see them crop up, we typically get pulled in at a point where some participants are already causing harm, and our response has to be more severe. It’s a bit of a catch-22: it’s not exactly our job to deal with this stuff3, but by the time it becomes our job (or even, by the time we notice), most acceptable actions for us to take are extremely suboptimal. The problem with passive moderation is that it’s largely reactive — it’s harder to proactively nudge the discussion in the right direction when you don’t even notice what’s going on until it’s too late. This is largely okay for dealing with bad-faith actors (the main goal of the mod team); it’s hard to prevent someone from deciding to harass someone else. But for dealing with disruptive buildups, we kind of need something different.

Participation guidelines

Part of the solution here is recognizing that spaces for official discussion are different from community hangout spaces. Our code of conduct attempts to handle abrasive behavior, which can disrupt discussions anywhere, but the comments that can disrupt consensus-building in official discussions aren’t really covered. Nor are the repercussions of code of conduct violations really appropriate for such disruptive comments anyway.

A proposal I’ve circulated in the past is to have a notion of participation guidelines. Discussions in team spaces (RFCs, pre-RFCs, discord/zulip/IRC channels during team meetings) follow a set of rules set forth by the individual teams. It might be worth having a base set of participation guidelines defined by the core team. Something like the following is a very rough strawman:

  • Don’t have irrelevant discussions during team meetings on Discord/IRC/Zulip
  • Don’t take threads off topic
  • Don’t rehash discussions

We ask people to read these before participating, but breaking these rules isn’t considered serious; it just triggers a conversation (and maybe the hiding/deletion of a comment). If someone repeatedly breaks these rules they may be asked to not participate in a given thread anymore. The primary goal here is to empower team members to better deal with disruptive comments by giving them a formalized framework. Having codified rules helps team members confidently deal with such situations without having to worry as much about drawing direct ire from affected community members.

A base participation guidelines document can also be a value statement: not just a set of rules but also a set of values. These values can be things like:

  • “We explicitly value high empathy interactions”
  • “How everyone is feeling is everyone’s business”

(h/t Adam for the articulate wording here)

Having such words written somewhere — both the high-level values we expect people to hold, and the individual behaviors we expect people to exhibit (or not exhibit) — is really valuable in and of itself, even if not enforced. The value of such documents is not that everyone reads them before participating — most don’t — but that they serve as a good starting point for people interested in learning how to best conduct themselves, as well as an easy place to point people to when they’re having trouble doing so.

On its own, I find that this is a powerful framework, but it may not achieve the goal of improving the situation. I recently realized that it actually couples really well with a different idea I’ve been talking about for quite a while now, the idea of having facilitators:

Facilitators

A common conflict I see occurring is that in many cases it’s a team’s job to think about and opine on a technical decision, but it’s also the team’s job to shepherd the discussion for that decision. This often works out great, but it can also leave people feeling unheard. It kinda hurts when someone who has just strongly disagreed with you goes on to summarize the state of the discussion in a way that you feel misrepresents you. The natural response for most people isn’t to work with that person and try to be properly represented, it’s to just get angry, leading to less empathy over time.

By design, Rust team members are partisan. The teams exist to build well-informed, carefully crafted opinions, and present them to the community. They also exist to make final decisions based on the results of a consensus-building discussion, which can involve picking sides. This is fine; there is always going to be some degree of partisanship amongst decision-makers, or decisions would not get made.

Having team members also facilitate discussions is somewhat at odds with all of this. Furthermore, I feel like they don’t have enough bandwidth to do this well anyway. Some teams do have a concept of “sheriffs”, but this is more of an onramp to full team membership and the role of a sheriff is largely the same as the role of a team member, just without a binding vote.

I feel like it would be useful to have a group of (per-team?) facilitators to help with this. Facilitators are people who are interested in seeing progress happening, and largely don’t have much of an opinion on a given discussion, or are able to set aside this opinion in the interest of moving a discussion forward. They operate largely at the meta level of the discussion. Actions they may take are:

  • Summarizing the discussion every now and then
  • Calling out one sided discussions
  • Encouraging one-on-one tangents to be discussed elsewhere (perhaps creating a space for them, like an issue)
  • Calling out specific people to do a thing that helps move the discussion forward. For example, something like “hey @Manishearth, I noticed you’ve been vocal in arguing that Rust should switch to whitespace-sensitive syntax, could you summarize all the arguments made by people on your side?” would help.
  • Reinforcing positive behavior
  • Occasionally pinging participants privately to help them improve their comments
  • Attempting to identify the root cause of a disagreement, or empowering people to work together to identify this. This one is important but tricky. I’ve often enjoyed doing it — noticing the core axiomatic disagreement at play and spelling it out is a great feeling. But I’ve also found that it’s incredibly hard to do when you’re emotionally involved, and I’ve often needed a nudge from someone else to get there.

At a high level, the job of the facilitators is to:

  • help foster empathy between participants
  • help linearize complex discussions
  • nudge towards cooperative behavior, away from adversarial behavior. Get people playing not to win, but to win-win.

It’s important to note that facilitators don’t make decisions — the team does. In fact, they almost completely avoid making technical points, they instead keep their comments largely at the meta level, perhaps occasionally making factual corrections.

The teams could do most of this themselves4, but as I’ve mentioned before it’s harder for others to not perceive all of your actions as partisan when some of them are. Furthermore, it can come off as patronizing at times.

This is also something the moderation team could do, however it’s much harder to scale the moderation team this way. Given that the moderation team deals with harassment and stuff like that, we need to be careful about how we build it up. On the other hand facilitating discussions is largely a public task, and the stakes aren’t as high: screwups can get noticed, and they don’t cause much harm. As a fundamentally proactive moderation effort, most actions taken will be to nudge things in a positive direction; getting this wrong usually just means that the status quo is maintained, not that harm is caused. Also, from talking to people it seems that while very few people want to be involved in moderating Rust, this notion of facilitating sounds much more fun and rewarding (I’d love to hear from people who would like to help).

And to me, this pairs really well with the idea of participation guidelines: teams can write down how they want discussions to take place on their venues, and facilitators can help ensure this works out. It’s good to look at the participation guidelines less as a set of rules and more as an aspiration for how we conduct ourselves, with the facilitators as a means to achieving that goal.

There are a lot of specifics we can twiddle with in this proposal. For example, we can have a per-team group of appointed facilitators (with no overlap with the team), and for a given discussion one facilitator is picked (if they don’t have time or feel they have strong opinions, try someone else). But there’s also no strong need for there to be such a group; facilitators can be picked as a discussion is starting, too. I don’t expect most discussions to need facilitators, so this is mostly reserved for discussions we expect will get heated, or discussions that have started to get heated. I’m not really going to spend time analyzing these specifics; I have opinions, but I’d rather have us figure out if and how we want to do something like this before getting into the weeds.

Prospective outcomes

The real goal here is to bootstrap better empathy within the community. In an ideal world we don’t need facilitators, instead everyone is able to facilitate well. The explicitly non-partisan nature of facilitators is useful, but if everyone was able to operate in this manner it would largely be unnecessary. But as with any organization, being able to horizontally scale specific skills is really tricky without specialization.

I suspect that in the process of building up such a team of facilitators, we will also end up building a set of resources that can help others learn to act the same way, and eventually overall improve how empathetic our community is.

The concept of facilitators directly addresses the moderation problem, but it also handles the scaling problem pretty well! Facilitators are key in relinearizing the N-to-N discussions, bringing the “parallel universes” together again. This should overall help people (especially team members) who are feeling overwhelmed by all the things that are going on.

This also helps with concerns people have that they’re not getting heard, as facilitators are basically posed as allies on all sides of the argument: people whose primary goal is to help communication happen.


Overall, what I’ve proposed here isn’t a fully-formed idea, but it’s the seed of one. There are a lot of interesting bits to discuss and build upon. I’m hoping that through this post we might push forward some of the discussions about governance — both by providing a strawman idea, and by providing a perspective on the problem that I hope is useful.

I’m really interested to hear what people think!

Thanks to Aaron, Ashley, Adam, Corey, Arshia, Michael, Sunjay, Nick, and other people I’ve probably forgotten, for having been part of these discussions with me over the last few years and helping me refine my thoughts.

  1. I am way too verbose for “brief” to be an accurate description of anything I write, but might as well try

  2. Scaling the moderation team properly is another piece of this puzzle that I’m working on; we’ve made some progress recently. 

  3. I helped draft our moderation policy, so this is a somewhat a lack of foresight on my part, but as I’ll explain later it’s suboptimal for the mod team to be dealing with this anyway. 

  4. In particular, I feel like Aaron has done an excellent and consistent job of facilitating discussions this way in many cases. 

Why I Enjoy Blogging

See also: Alex’s version of this blog post

I started this blog three years ago, moving from my older blog, hoping to write about programming, math, physics, books, and miscellanea. I haven’t quite written about everything I wanted to, but I’ve been very, very happy with the experience of blogging. wc says I’ve written almost 75k words, which is mind-boggling to me!

I often get asked by others — usually trying to decide if they should start blogging — what it’s like. I also often try to convince friends to blog by enumerating why I think it’s awesome. Might as well write it down so that it’s generally useful for everyone! 😃

Blogging helps cement my understanding of things!

I’ve often noticed that I’ll start blogging about something I think I understand, and it turns out that my understanding of the subject was somewhat nebulous. Turns out it’s pretty easy to convince ourselves that we understand something.

The act of writing stuff down helps cement my own understanding — words are usually not as nebulous as thoughts so I’m forced to figure out little details.

I recall when I wrote my post on how Rust’s thread safety guarantees work, I thought I understood Send and Sync in Rust. I understood what they did, but I didn’t have a clear mental model for them. I obtained this mental model through the process of writing the post; to be able to explain it to others I had to first explain it to myself.

I point out this post in particular because this was both one of the first posts for me where I’d noticed this, and, more importantly, my more concrete mental model led to me finding a soundness bug in Rust’s standard library. When I was thinking about my mental model I realized “an impl that looks like this should never exist”, so I grepped the source code and found one1.

I’ve even noticed a difference between one-on-one explaining and explaining things through blog posts. I love explaining things one-on-one; it’s much easier to tailor the explanation to the other person’s background, as well as to what they’re actually asking for help with. Plus, it’s interactive. A lot of my posts are of the “okay, I get this question a lot, I’m going to write down the answer so I don’t have to repeat myself” kind, and I’ve found that I often learn things from writing them despite having talked through their contents multiple times before.

I guess it’s basically that blogging is inherently one-many — you’re trying to explain to a whole group of people with varied backgrounds — which means you need to cover all your bases2 and explain everything together instead of the minimum necessary.

It’s really fun to revisit old posts!

Okay, I’ll admit that I never really write blog posts with this in mind. But when I do reread them, I’m usually quite thankful I wrote them!

I’m a fan of rereading in general, I’ve reread most of my favorite books tens of times; I make a yearly pilgrimage to James Mickens’ website; I reread many of my favorite posts and articles on the internet; and I often reread my own posts from the past.

Sometimes I’ll do it because I want a refresher in a topic. Sometimes I’ll do it because I’m bored. Whatever the reason, it’s always been a useful and fun thing to do.

Rereading old posts is a great way to transport myself back to my mindset from when I wrote the post. It’s easy to see progress in my understanding of things as well as in my writing. It’s interesting to note what I thought super important to include in the post then that I consider totally obvious now3. It’s interesting to relearn what I’ve forgotten. It’s reassuring to realize that my terrible jokes were just as terrible as they are now.

One of my favorite posts to reread is this really long one on generalized zero knowledge proofs. It’s the longest post I’ve written so far4, and it’s on a topic I don’t deal with often — cryptography. Not only does it help put me back in a mindset for thinking about cryptography, it’s about something super interesting but also complicated enough that rereading the post is like learning it all over again.

It lets me exercise a different headspace!

I like programming a lot, but if programming was all I did, I’d get tired pretty quickly. When I was a student learning physics I’d often contribute to open source in my spare time, but now I write code full time so I’m less inclined to do it in my free time5.

But I still sometimes feel like doing programmery things in my spare time just … not programming.

Turns out that blogging doesn’t tire me out the same way! I’m sure that if I spent the whole day writing I’d not want to write when I go home, but I don’t spend the whole day writing, so it’s all good. It’s refreshing to sit down to write a blog post and discover a fresh reserve of energy. I’m not sure if this is the right term, but I usually call this “using a different headspace”.

I’ve also started using this to plan my work, I mix up the kinds of headspace I’m employing for various tasks so that I feel energetic throughout the day.

This is also why I really enjoy mentoring — mentoring someone through a fix often requires the same effort from me as fixing the issue myself, but it’s a different headspace I’m employing so it’s less tiring.

Blogging lets me be lazy!

I find myself explaining the same things often. I like helping folks and explaining things, but I’m also lazy6, so writing stuff down really makes things easy for me! If folks ask me a question I can give a quick answer and then go “if you want to learn more, I’ve written about it here!”. If folks are asking a question a lot, there’s probably something missing in the documentation or learning materials about it. Some things can be fixed upstream in documentation, but other things, like “how should I reason about modules in Rust?”, deserve to be tackled as a separate problem and addressed with their own post.

(Yes, this post is in this category!)

It’s okay if folks have written about it before!

A common question I’ve gotten is “Well I can write about X but … there are a lot of other posts out there about it, should I still?”

Yes!!

People think differently, people learn differently, and people come from different backgrounds. Existing posts may be useful for some folks but less useful for others.

My personal rule of thumb is that if it took me some effort to understand something after reading about it, that’s something worth writing about, so it’s easier to understand for others like me encountering the subject.

One of my favorite bloggers, Julia Evans, very often writes posts explaining computer concepts. Most of the time these have been explained before in other blog posts or manuals. But that doesn’t matter — her posts are different, and they’re amazing. They’re upbeat, fun to read, and often get me excited to learn more about things I knew about but never really looked at closely before.

I kinda feel it’s my duty to?

There’s a quote by Toni Morrison I quite enjoy:

I tell my students, ‘When you get these jobs that you have been so brilliantly trained for, just remember that your real job is that if you are free, you need to free somebody else. If you have some power, then your job is to empower somebody else. This is not just a grab-bag candy game.’

I enjoy it so much I concluded my talk at RustFest Kyiv with it!

I have the privilege of having time to do things like blogging and mentoring. Given that, I feel that it really is my duty to share what I know as much as possible; to help others attempting to tread the path I’m treading; and to battle against tribal knowledge.

When it comes to programming I’m mostly “self-taught”. But when I say that, I really mean that I wasn’t taught in a traditional way by other humans — I learned things by trying stuff out and reading what others had written. I didn’t learn Rust by taking rustc and pretending to be a fuzzer and just trying random nonsense till stuff made sense, I went through the tutorial (and then started exploring by trying random stuff). I didn’t figure out cool algorithms by discovering them from first principles, I picked them up from books and blog posts. I’m “self-taught” because I’ve been in charge of my learning process, but I’ve definitely relied on the work of other people throughout this process.

This means that for me, personally, knowledge-sharing is especially important. If I had to spend time figuring something out, I should make it easier for the next people to try7.

(All this said, I probably don’t blog as much as I should)

You should blog too!

I wish everyone wrote more. I know not everyone has the time/privilege to do this, but if you do, I urge you to start!

I feel like tips on how to blog would fill up an entire other blog post, but Julia Evans has multiple posts on this that I strongly recommend. Feel free to ask me for review on posts!

As for the technicalities of setting up a blog, my colleague Emily recently wrote a great post about doing this with Jekyll. This blog uses Octopress which is similar to set up.

Thanks to Arshia, QuietMisdreavus, and Alex for reviewing drafts of this blog post.

  1. Who needs to look for unsoundness with rigorous formal verification when you have grep

  2. Incidentally, I find there’s a similar dynamic when it comes to forum discussions vs hashing things out one-on-one, it’s way harder to get anywhere with forum discussions because they’re one-many and you have to put in that much more work to empathize with everyone else and also phrase things in a way that is resilient to accidental misinterpretation. 

  3. This is especially important as I get more and more “used” to subjects I’m familiar with – it’s easy to lose the ability to explain things when I think half of it is obvious. 

  4. This is probably the real reason I love rereading it — I like being verbose and would nest parentheses and footnotes if society let me 

  5. I also am in general less inclined to do technical things in my free time and have a better work-life balance, glad that worked out! 

  6. See blog title 

  7. One of my former title ideas for this post was “Knowledge is Theft”, riffing off of this concept, but I felt that was a bit too tongue-in-cheek. 

The Future of Clippy

We’ve recently been making lots of progress on future plans for clippy and I thought I’d post an update.

For some background, Clippy is the linter for Rust. We have more than 250 lints, and are steadily growing.

Clippy and Nightly

Sadly, Clippy has been nightly-only for a very long time. The reason is that to perform its analyses it hooks into the compiler, so that it doesn’t have to reimplement half the compiler to get things like type information. But these are internal APIs and as such will never stabilize, so Clippy needs to be used with nightly Rust.

We’re hoping this will change soon! The plan is that Clippy will eventually be distributed by Rustup, so something like rustup component add clippy will get you the clippy binary.

The first steps are happening: we’re setting things up so that whenever clippy compiles successfully, Rustup will be able to fetch it as a component (however, this won’t be the recommended way to use clippy until we figure out the workflow here, so sit tight!)

Eventually, clippy will probably block nightlies1; and after a bunch of cycles of letting that work itself out, hopefully clippy will be available with the stable compiler. There’s a lot of stuff that needs to be figured out, and we want to do this in a way that minimally impacts compiler development, so this may move in fits and starts.

Lint audit

A couple months ago Oliver and I2 did a lint audit in Clippy. Previously, clippy lints were classified as simply “clippy”, “clippy_pedantic”, and “restriction”. “restriction” was for allow-by-default lints for things which are generally not a problem but may be something you specifically want to forbid based on the situation, and “pedantic” was for all the lints which were allow-by-default for other reasons.

Usually these reasons included stuff like “somewhat controversial lint”, “lint is very buggy”, or for lints which are actually exceedingly pedantic and may only be wanted by folks who very seriously prefer their code to be perfect.

We had a lot of buggy lints, and these categories weren’t very helpful. People use clippy for different reasons: some folks only care about clippy catching bugs, whereas others want its help enforcing the general “Rust style”.

So we came up with a better division of lints:

  • Correctness (Deny): Probable bugs, e.g. calling .clone() on &&T, which clones the (Copy) reference and not the actual type
  • Style (Warn): Style issues; where the fix usually doesn’t semantically change the code. For example, having a method named into_foo() that doesn’t take self by-move
  • Complexity (Warn): For detecting unnecessary code complexities and helping simplify them. For example, replacing .filter(..).next() with .find(..)
  • Perf (Warn): Detecting potential performance footguns, like using Box<Vec<T>> or calling .or(foo()) instead of or_else(foo).
  • Pedantic (Allow): Controversial or exceedingly pedantic lints
  • Nursery (Allow): For lints which are buggy or need more work
  • Cargo (Allow): Lints about your Cargo setup
  • Restriction (Allow): Lints for things which are not usually a problem, but which specific situations may dictate disallowing.

and applied it to the codebase. You can see the results on our lint list.

Some lints could belong in more than one group, and we picked the best one in that case. Feedback welcome!
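To make the new categories concrete, here’s a small sketch of code that trips a lint from a few of them. (The lint names in the comments are my best recollection of the names currently in use; treat them as illustrative rather than authoritative.)

fn main() {
    let x: u8 = 5;
    let double_ref = &&x;
    // correctness (deny): this just copies the (Copy) reference instead of
    // cloning the underlying u8 (the clone_double_ref lint)
    let _cloned = double_ref.clone();

    let v = vec![1, 2, 3];
    // complexity (warn): .filter(..).next() can be .find(..) (filter_next)
    let _first_big = v.iter().filter(|&&n| n > 1).next();

    // perf (warn): Box<Vec<T>> adds pointless indirection, since a Vec
    // already stores its contents on the heap (box_vec)
    let _boxed: Box<Vec<u8>> = Box::new(Vec::new());
}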

Clippy 1.0

In the run up to making Clippy a rustup component we’d like to do a 1.0 release of Clippy. This involves an RFC, and pinning down an idea of stability.

The general plan we have right now is to have the same idea of lint stability as rustc; essentially we do not guarantee stability under #[deny(lintname)]. This is mostly fine since deny only affects the current crate (dependencies have their lints capped) so at most you’ll be forced to slap on an allow somewhere after a rustup.

Specifically, this means that we’ll never remove lints. We may recategorize them, or “deprecate” them (which makes the lint do nothing, but keeps the name around so that #[allow(lintname)] doesn’t break the build aside from emitting a warning).

We’ll also not change what individual lints do fundamentally. The kinds of changes you can expect are:

  • Entirely new lints
  • Fixing false positives (a lint may no longer lint in a buggy case)
  • Fixing false negatives (A case where the lint should be linting but doesn’t is fixed)
  • Bugfixes (When the lint panics or does something otherwise totally broken)

When fixing false negatives, this will usually mean fixing cases that fall comfortably within the scope of the lint as documented/named.

I’ll be posting an RFC soonish that contains both this general plan for stability and the current lint categorization, for folks to discuss.


Anyway, thought I’d just post a general update on everything, since stuff’s changing quickly.

It will still take time for stable (or even just reliably rustuppable nightly) clippy to happen, but the path to it is pretty clear now!

  1. As in, if clippy is broken there will not be a nightly that day. Rustfmt and RLS work this way right now AIUI. 

  2. Okay, mostly Oliver 

Down a Rusty Rabbit Hole

Last week I fell down a rather interesting rabbit hole in Rust, which was basically me discovering a series of quirks of the Rust compiler/language, each one leading to the next when I asked “why?”.

It started when someone asked why autogenerated Debug impls use argument names like __arg_0 which start with a double underscore.

This happened to be my fault. The reason we used a double underscore was that while a single underscore tells rustc not to warn about a possibly-unused variable, there’s an off-by-default clippy lint that warns about variables that start with a single underscore but are used, which can be silenced with a double underscore. Now, the correct fix here is to make the lint ignore derive/macro output (which I believe we did as well), but at the time we needed to add an underscore anyway, so a double underscore didn’t seem worse.

Except of course, this double underscore appears in the docs. Oops.

Ideally the rustc derive infrastructure would have a way of specifying the argument name to use so that we can at least have descriptive things here, but that’s a bit more work (I’m willing to mentor this work though!). So I thought I’d fix this by at least removing the double underscore, and making the unused lint ignore #[derive()] output.

While going through the code to look for underscores I also discovered a hygiene issue. The following code throws a bunch of very weird type errors:

pub const __cmp: u8 = 1;

#[derive(PartialOrd, PartialEq)]
pub enum Foo {
    A(u8), B(u8)
}

(playpen)

error[E0308]: mismatched types
 --> src/main.rs:6:7
  |
6 |     A(u8), B(u8)
  |       ^^^ expected enum `std::option::Option`, found u8
  |
  = note: expected type `std::option::Option<std::cmp::Ordering>`
             found type `u8`
.....

This is because the generated code for PartialOrd contains the following:

match foo.cmp(bar) {
    Some(Ordering::Equal) => .....,
    __cmp => __cmp,
}

__cmp can be both a binding in a wildcard-like pattern and a match against a constant named __cmp, and in the presence of such a constant it resolves to the constant, causing type errors.

One way to fix this is to bind foo.cmp(bar) to some temporary variable x and use that directly in a _ => x branch.
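Sketched out, the temporary-variable approach looks something like this (a hedged reconstruction of the shape of the generated code, written with partial_cmp so that it compiles standalone):

use std::cmp::Ordering;

// Compare two 2-field values the way derived PartialOrd does: compare the
// first field, and only fall through to the second on Equal. The wildcard
// arm returns the binding `ord`, so there's no named pattern to collide
// with a constant.
fn compare(a: &(u8, u8), b: &(u8, u8)) -> Option<Ordering> {
    let ord = a.0.partial_cmp(&b.0);
    match ord {
        Some(Ordering::Equal) => a.1.partial_cmp(&b.1),
        _ => ord,
    }
}

fn main() {
    assert_eq!(compare(&(1, 2), &(1, 3)), Some(Ordering::Less));
    assert_eq!(compare(&(2, 0), &(1, 9)), Some(Ordering::Greater));
}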

I thought I could be clever and try cmp @ _ => cmp instead. match supports syntax where you can do foo @ <pattern>, where foo is bound to the entire matched value. The cmp here is unambiguously a binding; it cannot be a pattern. So no conflicting with the const, problem solved!
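As a refresher, here’s what @ bindings look like in ordinary code (my own example):

fn main() {
    let n = 7;
    match n {
        // `small` is unambiguously a binding; `0..=9` is the pattern
        small @ 0..=9 => println!("single digit: {}", small),
        other @ _ => println!("something else: {}", other),
    }
}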

So I made a PR for both removing the underscores and also fixing this. The change for __cmp is no longer in that PR, but you can find it here.

Except I hit a problem. With that PR, the following still breaks:

pub const cmp: u8 = 1;

#[derive(PartialOrd, PartialEq)]
pub enum Foo {
    A(u8), B(u8)
}

throwing a slightly cryptic error:

error[E0530]: match bindings cannot shadow constants
 --> test.rs:9:7
  |
4 | pub const cmp: u8 = 1;
  | ---------------------- a constant `cmp` is defined here
...
9 |     B(u8)
  |       ^^^ cannot be named the same as a constant

You can see a reduced version of this error in the following code:

pub const cmp : u8 = 1;

fn main() {
    match 1 {
        cmp @ _ => ()
    }
}

(playpen)

Huh. Wat. Why? cmp @ _ seems to be pretty unambiguous, what’s wrong with it shadowing a constant?

Turns out bindings cannot shadow constants at all, for a rather subtle reason:

const A: u8 = ...; // A_const
let A @ _ = ...; // A_let
match .. {
    A => ...; // A_match
}

What happens here is that constants and variables occupy the same namespace. So A_let shadows A_const here, and when we attempt to match, A_match is resolved to A_let and rejected (since you can’t match against a variable), and A_match falls back to resolving as a fresh binding pattern, instead of resolving to a pattern that matches against A_const.

This is kinda weird, so we disallow shadowing constants with variables. This is rarely a problem in practice, because by convention variables are lowercase and constants are uppercase. We could technically allow this language-wise, but it’s hard on the implementation (and irrelevant in practice), so we don’t.


So I dropped that fix. The temporary local variable approach is broken as well since you can also name a constant the same as the local variable and have a clash (so again, you need the underscores to avoid surprises).

But then I realized that we had an issue with removing the underscores from __arg_0 as well.

The following code is also broken:

pub const __arg_0: u8 = 1;

#[derive(Debug)]
struct Foo(u8);

(playpen)

error[E0308]: mismatched types
 --> src/main.rs:3:10
  |
3 | #[derive(Debug)]
  |          ^^^^^ expected mutable reference, found u8
  |
  = note: expected type `&mut std::fmt::Formatter<'_>`
             found type `u8`

You can see a reduced version of this error in the following code:

pub const __arg_0: u8 = 1;

fn foo(__arg_0: bool) {}
error[E0308]: mismatched types
 --> src/main.rs:3:8
  |
3 | fn foo(__arg_0: bool) {}
  |        ^^^^^^^ expected bool, found u8

(playpen)

This breakage is not an issue with the current code because of the double underscores – there’s a very low chance someone will create a constant that is both lowercase and starts with a double underscore. But it’s a problem when I remove the underscores since that chance shoots up.

Anyway, this failure is even weirder. Why are we attempting to match against the constant in the first place? fn argument patterns1 are irrefutable, i.e. all possible values of the type should match the argument. For example, fn foo(Some(foo): Option<u8>) {} will fail to compile with “refutable pattern in function argument: None not covered”.
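(To make the footnote concrete: destructuring in argument position works exactly like in let, so irrefutable patterns are fine and refutable ones are rejected.)

// Irrefutable: destructuring a tuple argument, just like in a `let`
fn sum((a, b): (u8, u8)) -> u8 {
    a + b
}

fn main() {
    assert_eq!(sum((2, 3)), 5);
}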

There’s no point trying to match against constants here; because even if we find a constant it will be rejected later. Instead, we can unambiguously resolve identifiers as new bindings, yes?

Right?

Firm in my belief, I filed an issue.

I was wrong, it’s not going to always be rejected later. With zero-sized types this can totally still work:

struct S;

const C: S = S;

fn main() {
    let C = S;
}

Here because S has only one state, matching against a constant of the type is still irrefutable.

I argued that this doesn’t matter – since the type has a single value, it doesn’t matter whether we resolved to a new binding or the constant; the value and semantics are the same.

This is true.

Except.

Except for when destructors come in.

It was at this point that my table found itself in the perplexing state of being upside-down.
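To see why destructors make the distinction observable, here’s a minimal demonstration of the underlying principle (my own example, not the one from the linked discussion): a pattern that binds keeps the value alive until the end of scope, while a pattern that binds nothing — which is what matching against a constant amounts to — drops the value immediately.

struct S;

impl Drop for S {
    fn drop(&mut self) {
        println!("dropping S");
    }
}

fn main() {
    let _ = S; // `_` binds nothing: "dropping S" prints right here
    println!("between the two lets");
    let _s = S; // `_s` is a real binding: dropped at the end of main
    println!("end of main");
}

This prints “dropping S”, “between the two lets”, “end of main”, and then “dropping S” again — so whether an identifier resolves to a binding or a constant changes when the destructor runs.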

This is still really fine, zero-sized-constants-with-destructors is a pretty rare thing in Rust and I don’t really see folks relying on this behavior.

However I later realized that this entire detour was pointless because even if we fix this, we end up with a way for bindings to shadow constants. Which … which we already realized isn’t allowed by the compiler till we fix some bugs.

Damn.


The actual fix to the macro stuff is to use hygienic generated variable names, which the current infrastructure supports. I plan to make a PR for this eventually.

But it was a very interesting dive into the nuances of pattern matching in Rust.

  1. Yes, function arguments in Rust are patterns. You can totally do things like (a, b): (u8, u8) in function arguments (like you can in let).

Picking Apart the Crashing iOS String

So there’s yet another iOS text crash, where just looking at a particular string crashes iOS. Basically, if you put this string in any system text box (and other places), it crashes that process. I’ve been testing it by copy-pasting characters into Spotlight so I don’t end up crashing my browser.

The original sequence is U+0C1C U+0C4D U+0C1E U+200C U+0C3E, which is a sequence of Telugu characters: the consonant ja (జ), a virama ( ్ ), the consonant nya (ఞ), a zero-width non-joiner, and the vowel aa ( ా).
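If you want to poke at the sequence yourself, it’s easy to reconstruct from the scalar values (shown here in Rust; building and inspecting the string is harmless, the crash lives in the affected rendering code):

fn main() {
    // <ja, virama, nya, ZWNJ, aa>
    let crasher = "\u{0C1C}\u{0C4D}\u{0C1E}\u{200C}\u{0C3E}";
    for c in crasher.chars() {
        println!("U+{:04X}", c as u32);
    }
}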

I was pretty interested in what made this sequence “special”, and started investigating.

So first when looking into this, I thought that the <ja, virama, nya> sequence was the culprit. That sequence forms a special ligature in many Indic scripts (ज्ञ in Devanagari) which is often considered a letter of its own. However, the ligature for Telugu doesn’t seem very “special”.

Also, from some experimentation, this bug seemed to occur for any pair of Telugu consonants with a vowel, as long as the vowel is not   ై (ai). Huh.

The ZWNJ must be doing something weird, then. <consonant, virama, consonant, vowel> is a pretty common sequence in any Indic script; but ZWNJ before a vowel isn’t very useful for most scripts (except for Bengali and Oriya, but I’ll get to that).

And then I saw that there was a sequence in Bengali that also crashed.

The sequence is U+09B8 U+09CD U+09B0 U+200C U+09C1, which is the consonant “so” (স), a virama ( ্ ), the consonant “ro” (র), a ZWNJ, and vowel u (  ু).

Before we get too into this, let’s first take a little detour to learn how Indic scripts work:

Indic scripts and consonant clusters

Indic scripts are abugidas, which means that their “letters” are consonants, to which you can attach diacritics to change the vowel. By default, consonants have a base vowel. So, for example, क is “kuh” (kə, often transcribed as “ka”), but I can change the vowel to make it के (the “ka” in “okay”) or का (“kaa”, like “car”).

Usually, the default vowel is the ə sound, though not always (in Bengali it’s more of an o sound).

Because of the “default” vowel, you need a way to combine consonants. For example, if you wished to write the word “ski”, you can’t write it as स + की (sa + ki = “saki”), you must write it as स्की. What’s happened here is that the स got its vowel “killed”, and got tacked on to the की to form a consonant cluster ligature.

You can also write this as स्‌की . That little tail you see on the स is known as a “virama”; it basically means “remove this vowel”. Explicit viramas are sometimes used when there’s no easy way to form a ligature, e.g. in ङ्‌ठ because there is no simple way to ligatureify ङ into ठ. Some scripts also prefer explicit viramas, e.g. “ski” in Malayalam is written as സ്കീ, where the little crescent is the explicit virama.

In Unicode, the virama character is always used to form a consonant cluster. So स्की is written as <स,  ्, क,  ी>, or <sa, virama, ka, i>. If the font supports the cluster, it will show up as a ligature; otherwise it will use an explicit virama.

For Devanagari and Bengali, usually, in a consonant cluster the first consonant is munged a bit and the second consonant stays intact. There are exceptions – sometimes they’ll form an entirely new glyph (क + ष = क्ष), and sometimes both glyphs will change (ड + ड = ड्ड, द + म = द्म, द + ब = द्ब). Those last ones can look quite different in their conjunct forms, depending on font support.

Investigating the Bengali case

Now, interestingly, unlike the Telugu crash, the Bengali crash seemed to only occur when the second consonant is র (“ro”). However, I can trigger it for any choice of the first consonant or vowel, except when the vowel is  ো (o) or  ৌ (au).

Now, র is an interesting consonant in some Indic scripts, including Devanagari. In Devanagari, it looks like र (“ra”). However, it does all kinds of things when forming a cluster. If you’re having it precede another consonant in a cluster, it forms a little feather-like stroke, like in र्क (rka). In Marathi, that stroke can also look like a tusk, as in र्‍क. As a suffix consonant, it can provide a little “extra leg”, as in क्र (kra). For letters without a vertical stroke, like ठ (tha), it does this caret-like thing, ठ्र (thra).

Basically, while most consonants retain some of their form when put inside a cluster, र does not. And a more special thing about र is that this happens even when र is the second consonant in a cluster – as I mentioned before, for most consonant clusters the second consonant stays intact. While there are exceptions, they are usually specific to the cluster; it is only र for which this happens for all clusters.

It’s similar in Bengali: র as the second consonant adds a tentacle-like thing on the existing consonant. For example, প + র (po + ro) gives প্র (pro).

But it’s not just র that does this in Bengali, the consonant “jo” does as well. প + য (po + jo) forms প্য (pjo), and the য is transformed into a wavy line called a “jophola”.

So I tried it with য, and it turns out that the Bengali crash occurs for য as well! So the general Bengali case is <consonant, virama, র OR য, ZWNJ, vowel>, where the vowel is not  ো or  ৌ.

Suffix-joining consonants

So we’re getting close, here. At least for Bengali, it occurs when the second consonant is such that it often combines with the first consonant without modifying its form much.

In fact, this is the case for Telugu as well! Consonant clusters in Telugu are usually formed by preserving the original consonant, and tacking the second consonant on below!

For example, the original crashy string contains the cluster జ + ఞ, which looks like జ్ఞ. The first letter isn’t really modified, but the second is.

From this, we can guess that it will also occur for Devanagari with र. Indeed it does! U+0915 U+094D U+0930 U+200C U+093E, that is, <क,  ्, र, zwnj,  ा> (< ka, virama, ra, zwnj, aa >) is one such crashing sequence.

But this isn’t really the whole story, is it? For example, the crash does occur for “kro” + zwnj + vowel in Bengali, and in “kro” (ক্র = ক + র = ko + ro) the resultant cluster involves the munging of both the prefix and suffix. But the crash doesn’t occur for द्ब or ड्ड. It seems to be specific to the letter, not the nature of the cluster.

Digging deeper, the reason is that for many fonts (presumably the ones in use), these consonants form “suffix joining consonants”1 (a term I made up) when preceded by a virama. This seems to correspond to the pstf OpenType feature, as well as vatu.

For example, the sequence virama + क gives   ्क, i.e. it renders a virama with a placeholder followed by a क.

But, for र, virama + र renders  ्र, i.e. the suffix-joining form of र.

In fact, this is the case for the other consonants as well. For me,  ्र  ্র  ্য  ్ఞ  ్క (Devanagari virama-ra, Bengali virama-ro, Bengali virama-jo, Telugu virama-nya, Telugu virama-ka) all render as “suffix joining consonants”.

(This is true for all Telugu consonants, not just the ones listed).

An interesting bit is that the crash does not occur for <र, virama, र, zwnj, vowel>, because र-virama-र uses the prefix-joining form of the first र (र्र). The same occurs for র with itself or ৰ or য. Because the virama is “stickier” to the left in these cases, it doesn’t cause a crash. (h/t hackbunny for discovering this using a script to enumerate all cases).

Kannada also has “suffix joining consonants”, but for some reason I cannot trigger the crash with it. Ya in Gurmukhi is also suffix-joining.

The ZWNJ

The ZWNJ is curious. The crash doesn’t happen without it, but as I mentioned before a ZWNJ before a vowel doesn’t really do anything for most Indic scripts. In Indic scripts, a ZWNJ can be used to explicitly force a virama if used after the virama (I used it to write स्‌की in this post), however that’s not how it’s being used here.

In Bengali and Oriya specifically, a ZWNJ can be used to force a different vowel form when used before a vowel (e.g. রু vs র‌ু), however this bug seems to apply to vowels for which there is only one form, and this bug also applies to other scripts where this isn’t the case anyway.

The exception vowels are interesting. They’re basically all vowels that are made up of two glyph components. Philippe Verdy points out:

And why this bug does not occur with some vowels is because these are vowels in two parts, that are first decomposed into two separate glyphs reordered in the buffer of glyphs, while other vowels do not need this prior mapping and keep their initial direct mapping from their codepoints in fonts, which means that this has to do to the way the ZWNJ looks for the glyphs of the vowels in the glyphs buffer and not in the initial codepoints buffer: there’s some desynchronization, and more probably an uninitialized data field (for the lookup made in handling ZWNJ) if no vowel decomposition was done (the same data field is correctly initialized when it is the first consonnant which takes an alternate form before a virama, like in most Indic consonnant clusters, because the a glyph buffer is created.

Generalizing

So, ultimately, the full set of cases that cause the crash are:

Any sequence <consonant1, virama, consonant2, ZWNJ, vowel> in Devanagari, Bengali, and Telugu, where:

  • consonant2 is suffix-joining (pstf/vatu) – i.e. र, র, য, ৰ, and all Telugu consonants
  • consonant1 is not a reph-forming letter like र/র (or a variant, like ৰ)
  • vowel does not have two glyph components, i.e. it is not   ై,   ো, or   ৌ

This leaves one question open:

Why doesn’t it apply to Kannada? Or, for that matter, Khmer, which has a similar virama-like thing called a “coeng”?

Are these valid strings?

A recurring question I’m getting is if these strings are valid in the language, or unicode gibberish like Zalgo text. Breaking it down:

  • All of the rendered glyphs are valid. The original Telugu one is the root of the word for “knowledge” (and I’ve taken to calling this bug “forbidden knowledge” for that reason).
  • In Telugu and Devanagari, there is no functional use of the ZWNJ as used before a vowel. It should not be there, and one would not expect it in typical text.
  • In Bengali (also Oriya), putting a ZWNJ before some vowels prevents them from ligatureifying, and this is mentioned in the Unicode spec. However, it seems rare for native speakers to use this.
  • In all of these scripts, putting a ZWNJ after viramas can be used to force an explicit virama over a ligature. That is not the position ZWNJ is used here, but it gives a hint that this might have been a mistype. Doing this is also rare at least for Devanagari (and I believe for the other two scripts as well)
  • Android has an explicit key for ZWNJ on its keyboards for these languages2, right next to the spacebar. iOS has this as well on the long-press of the virama key. Very easy to mistype, at least for Android.

So while the crashing strings are usually invalid, and when not, very rare, they are easy enough to mistype.

An example by @FakeUnicode was the string “For/k” (or “Foŕk”, if accents were easier to type). A slash isn’t something you’d normally type there, and the produced string is gibberish, but it’s easy enough to type by accident.

Except of course that the mistake in “For/k”/”Foŕk” is visually obvious and would be fixed; this isn’t the case for most of the crashing strings.

Conclusion

I don’t really have one guess as to what’s going on here – I’d love to see what people think – but my current guess is that the “affinity” of the virama to the left instead of the right confuses the algorithm that handles ZWNJs after viramas into thinking the ZWNJ applies to the virama (it doesn’t, there’s a consonant in between), and this leads to some numbers not matching up and causing a buffer overflow or something. Philippe’s diagnosis of the vowel situation matches up with this.

An interesting thing is that I can cause this crash to happen more reliably in browsers by clicking on the string.

Additionally, sometimes it actually renders in spotlight for a split second before crashing; which means that either the crash isn’t deterministic, or it occurs in some process after rendering. I’m not sure what to think of either. Looking at the backtraces, the crash seems to occur in different places, so it’s likely that it’s memory corruption that gets uncovered later.

I’d love to hear if folks have further insight into this.

Update: Philippe on the Unicode mailing list has an interesting theory

Yes, I could attach a debugger to the crashing process and investigate that instead, but that’s no fun 😂

  1. Philippe Verdy points out that these may be called “phala forms” at least for Bengali 

  2. I don’t think the Android keyboard needs this key; the keyboard seems very much a dump of “what does this unicode block let us do”, and includes things like Sindhi-specific or Kashmiri-specific characters for the Marathi keyboard as well as extremely archaic characters, whilst neglecting more common things like the eyelash reph (which doesn’t have its own code point but is a special unicode sequence; native speakers should not be expected to be aware of this sequence). 

A Rough Proposal for Sum Types in Go

Sum types are pretty cool. Just like how a struct is basically “This contains one of these and one of these”, a sum type is “This contains one of these or one of these”.

So for example, the following sum type in Rust:

enum Foo {
    Stringy(String),
    Numerical(u32)
}

or Swift:

enum Foo {
    case stringy(String)
    case numerical(Int)
}

would be one where it’s either Foo::Stringy (Foo::stringy for swift), containing a String, or Foo::Numerical, containing an integer.

This can be pretty useful. For example, messages between threads are often of a “this or that or that or that” form.

The nice thing is, matching (switching) on these enums is usually exhaustive – you must list all the cases (or include a default arm) for your code to compile. This leads to a useful component of type safety – if you add a message to your message passing system, you’ll know where to update it.
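Here’s what that exhaustiveness looks like for the Rust Foo above (a sketch; the describe function is made up). Removing either arm without adding a default stops the program compiling, which is exactly what tells you where to update things when a new variant is added:

enum Foo {
    Stringy(String),
    Numerical(u32),
}

fn describe(foo: &Foo) -> String {
    match foo {
        // every variant must be handled, or this match is a compile error
        Foo::Stringy(s) => format!("string: {}", s),
        Foo::Numerical(n) => format!("number: {}", n),
    }
}

fn main() {
    println!("{}", describe(&Foo::Numerical(42)));
}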

Go doesn’t have these. Go does have interfaces, which are dynamically dispatched. The drawback here is that you do not get the exhaustiveness condition, and consumers of your library can even add further cases. (And, of course, dynamic dispatch can be slow). You can get exhaustiveness in Go with external tools, but it’s preferable to have such things in the language IMO.

Many years ago when I was learning Go I wrote a blog post about what I liked and disliked as a Rustacean learning Go. Since then, I’ve spent a lot more time with Go, and I’ve learned to like each Go design decision that I initially disliked, except for the lack of sum types. Most of my issues arose from “trying to program Rust in Go”, i.e. using idioms that are natural to Rust (or other languages I’d used previously). Once I got used to the programming style, I realized that aside from the lack of sum types I really didn’t find much missing from the language. Perhaps improvements to error handling.

Now, my intention here isn’t really to sell sum types. They’re somewhat controversial for Go, and there are good arguments on both sides. You can see one discussion on this topic here. If I were to make a more concrete proposal I’d probably try to motivate this in much more depth. But even I’m not very strongly of the opinion that Go needs sum types; I have a slight preference for it.

Instead, I’m going to try and sketch this proposal for sum types that has been floating around my mind for a while. I end up mentioning it often and it’s nice to have something to link to. Overall, I think this “fits well” with the existing Go language design.

The proposal

The essence is pretty straightforward: Extend interfaces to allow for “closed interfaces”. These are interfaces that are only implemented for a small list of types.

Writing the Foo sum type above would be:

type Foo interface {
    SomeFunction()
    OtherFunction()
    for string, int
}

It doesn’t even need to have functions defined on it.

The interface functions can only be called if you have an interface object; they are not directly available on variant types without explicitly casting (Foo("...").SomeFunction()).

(I’m not strongly for the for keyword syntax, it’s just a suggestion. The core idea is that you define an interface and you define the types it closes over. Somehow.)

A better example would be an interface for a message-passing system for Raft:

type VoteRequest struct {
    CandidateId uint
    Term uint
    // ...
}

type VoteResponse struct {
    Term uint
    VoteGranted bool
    VoterId uint
}

type AppendRequest struct {
    //...
}

type AppendResponse struct {
    //...
}
// ...
type RaftMessage interface {
    for VoteRequest, VoteResponse, AppendRequest, AppendResponse
}

Now, you use type switches for dealing with these:

switch value := msg.(type) {
    case VoteRequest:
        if value.Term <= me.Term {
            me.reject_vote(value.CandidateId)
        } else {
            me.accept_vote(value.CandidateId, value.Term)
        }
    case VoteResponse: // ...
    case AppendRequest: // ...
    case AppendResponse: // ...
}

There is no need for the default case, unless you wish to leave one or more of the cases out.

Ideally, these could be implemented as inline structs instead of using dynamic dispatch. I’m not sure what this entails for the GC design, but I’d love to hear thoughts on this.

We also make it possible to add methods to closed interfaces. This is in the spirit of this proposal, where you allow

func (message RaftMessage) Process(me Me) error {
    // message handling logic
}

for closed interfaces.

This aligns more with how sum types are written and used in other languages; instead of assuming that each method will be a switch on the variant, you can write arbitrary code that may switch on the type, but can also just call other methods. This is really nice because you can write methods in both ways – if it’s a “responsibility of the inner type” kind of method, require it in the interface and delegate it to the individual types. If it’s a “responsibility of the interface” method, write it as a method on the interface as a whole. I kind of wish Rust had this, because in Rust you sometimes end up writing things like:

match foo {
    Foo::Stringy(s) => s.process(),
    Foo::Numerical(n) => n.process(),
    // ...
}

Yes, this would work better as a trait, but then you lose some niceties of Rust enums. With this proposal Go can have it both ways.
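In Rust terms, the two styles look something like this (a sketch on the Foo enum from the top of the post; both method names are made up):

enum Foo {
    Stringy(String),
    Numerical(u32),
}

impl Foo {
    // "responsibility of the interface" style: one method on the sum type
    // as a whole, written as arbitrary code over the variants
    fn describe(&self) -> String {
        match self {
            Foo::Stringy(s) => format!("a string of length {}", s.len()),
            Foo::Numerical(n) => format!("the number {}", n),
        }
    }

    // "responsibility of the inner type" style: pure delegation boilerplate,
    // which is the part the closed-interface proposal would make free in Go
    fn byte_len(&self) -> usize {
        match self {
            Foo::Stringy(s) => s.len(),
            Foo::Numerical(_) => std::mem::size_of::<u32>(),
        }
    }
}

fn main() {
    let foo = Foo::Stringy("hello".to_string());
    println!("{} ({} bytes)", foo.describe(), foo.byte_len());
}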


Anyway, thoughts? This is a really rough proposal, and I’m not sure how receptive other Gophers will be to this, nor how complex its implementation would be. I don’t really intend to submit this as a formal proposal, but if someone else wants to they are more than welcome to build on this idea.

What Are Tokio and Async IO All About?

The Rust community lately has been focusing a lot on “async I/O” through the tokio project. This is pretty great!

But for many in the community who haven’t worked with web servers and related things, it’s pretty confusing as to what we’re trying to achieve there. When this was being discussed around the 1.0 days, I was pretty lost as well, having never worked with async I/O before.

What’s all this Async I/O business about? What are coroutines? Lightweight threads? Futures? How does this all fit together?

What problem are we trying to solve?

One of Rust’s key features is “fearless concurrency”. But the kind of concurrency required for handling a large number of I/O bound tasks – the kind of concurrency found in Go, Elixir, Erlang – is absent from Rust.

Let’s say you want to build something like a web service. It’s going to be handling thousands of requests at any point in time (cf. the “c10k problem”: handling ten thousand concurrent clients). In general, the problem we’re considering is having a huge number of I/O bound (usually network I/O) tasks.

“Handling N things at once” is best done by using threads. But … thousands of threads? That sounds a bit much. Threads can be pretty expensive: Each thread needs to allocate a large stack, setting up a thread involves a bunch of syscalls, and context switching is expensive.

Of course, thousands of threads all doing work at once is not going to work anyway. You only have a fixed number of cores, and at any one time only one thread will be running on a core.

But for cases like web servers, most of these threads won’t be doing work. They’ll be waiting on the network. Most of these threads will either be listening for a request, or waiting for their response to get sent.

With regular threads, when you perform a blocking I/O operation, the syscall returns control to the kernel, which won’t yield control back, because the I/O operation is probably not finished. Instead, it will use this as an opportunity to swap in a different thread, and will swap the original thread back when its I/O operation is finished (i.e. it’s “unblocked”). Without Tokio and friends, this is how you would handle such things in Rust. Spawn a million threads; let the OS deal with scheduling based on I/O.
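
To make that concrete, here’s a minimal thread-per-connection sketch using only the standard library (handle is a stand-in for real request logic):

use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle(mut conn: TcpStream) {
    // a blocking read: this thread sleeps in the kernel until data arrives
    let mut buf = [0u8; 1024];
    let _ = conn.read(&mut buf);
    // ... process the request and write a response ...
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
    for conn in listener.incoming() {
        if let Ok(conn) = conn {
            // one OS thread per connection; the kernel scheduler swaps
            // threads in and out as their I/O unblocks
            thread::spawn(move || handle(conn));
        }
    }
}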

But, as we already discovered, threads don’t scale well for things like this1.

We need “lighter” threads.

Lightweight threading

I think the best way to understand lightweight threading is to forget about Rust for a moment and look at a language that does this well, Go.

Instead of using OS threads, Go has lightweight threads, called “goroutines”. You spawn these with the go keyword. A web server might do something like this:

listener, err := net.Listen(...)
// handle err
for {
    conn, err := listener.Accept()
    // handle err

    // spawn goroutine:
    go handler(conn)
}

This is a loop which waits for new TCP connections, and spawns a goroutine with the connection and the function handler. Each connection will be a new goroutine, and the goroutine will shut down when handler finishes. In the meantime, the main loop continues executing, because it’s running in a different goroutine.

So if these aren’t “real” (operating system) threads, what’s going on?

A goroutine is an example of a “lightweight” thread. The operating system doesn’t know about these, it sees N threads owned by the Go runtime, and the Go runtime maps M goroutines onto them2, swapping goroutines in and out much like the operating system scheduler. It’s able to do this because Go code is already interruptible for the GC to be able to run, so the scheduler can always ask goroutines to stop. The scheduler is also aware of I/O, so when a goroutine is waiting on I/O it yields to the scheduler.

Essentially, a compiled Go function will have a bunch of points scattered throughout it where it tells the scheduler and GC “take over if you want” (and also “I’m waiting on stuff, please take over”).

When a goroutine is swapped out on an OS thread, some registers are saved, and the program counter switches to the new goroutine.

But what about its stack? OS threads have a large stack with them, and you kinda need a stack for functions and stuff to work.

What Go used to do was segmented stacks. The reason a thread needs a large stack is that most programming languages, including C, expect the stack to be contiguous, and stacks can’t just be “reallocated” like we do with growable buffers, since we expect stack data to stay put so that pointers to stack data continue to work. So we reserve all the stack we think we’ll ever need (~8MB), and hope we don’t need more.

But the expectation of stacks being contiguous isn’t strictly necessary. In Go, stacks are made of tiny chunks. When a function is called, it checks if there’s enough space on the stack for it to run, and if not, allocates a new chunk of stack and runs on it. So if you have thousands of threads doing a small amount of work, they’ll all get thousands of tiny stacks and it will be fine.

These days, Go actually does something different; it copies stacks. I mentioned that stacks can’t just be “reallocated” because we expect stack data to stay put. But that’s not necessarily true — because Go has a GC, it knows what all the pointers are anyway, and it can rewrite pointers to stack data on demand.

Either way, Go’s rich runtime lets it handle this stuff well. Goroutines are super cheap, and you can spawn thousands without your computer having problems.

Rust used to support lightweight/”green” threads (I believe it used segmented stacks). However, Rust cares a lot about not paying for things you don’t use, and green threads imposed a penalty on all your code even if you weren’t using them, so they were removed pre-1.0.

Async I/O

A core building block of this is Async I/O. As mentioned in the previous section, with regular blocking I/O, the moment you request I/O your thread will not be allowed to run (“blocked”) until the operation is done. This is perfect when working with OS threads (the OS scheduler does all the work for you!), but if you have lightweight threads you instead want to replace the lightweight thread running on the OS thread with a different one.

Instead, you use non-blocking I/O, where the thread queues a request for I/O with the OS and continues execution. The I/O request is executed at some later point by the kernel. The thread then needs to ask the OS “Is this I/O request ready yet?” before looking at the result of the I/O.

Of course, repeatedly asking the OS if it’s done can be tedious and consume resources. This is why there are system calls like epoll. Here, you can bundle together a bunch of unfinished I/O requests, and then ask the OS to wake up your thread when any of these completes. So you can have a scheduler thread (a real thread) that swaps out lightweight threads that are waiting on I/O, and when there’s nothing else happening it can itself go to sleep with an epoll call until the OS wakes it up (when one of the I/O requests completes).

(The exact mechanism involved here is probably more complex)

So, bringing this to Rust: Rust has the mio library, which is a platform-agnostic wrapper around non-blocking I/O and tools like epoll/kqueue/etc. It’s a building block, and while those accustomed to using epoll directly in C may find it helpful, it doesn’t provide a nice programming model like Go does. But we can get there.
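
As a rough illustration, a minimal readiness loop with mio might look like this (a sketch against the mio 0.6 API of the era; newer releases have reshuffled these types):

extern crate mio;

use mio::net::TcpListener;
use mio::{Events, Poll, PollOpt, Ready, Token};

fn main() {
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr).unwrap();

    // ask the OS to tell us when the listener becomes readable
    let poll = Poll::new().unwrap();
    poll.register(&listener, Token(0), Ready::readable(), PollOpt::edge())
        .unwrap();

    let mut events = Events::with_capacity(1024);
    loop {
        // sleep until the kernel reports readiness (epoll/kqueue under the hood)
        poll.poll(&mut events, None).unwrap();
        for event in events.iter() {
            if event.token() == Token(0) {
                // accept() here is non-blocking; it would return
                // WouldBlock instead of suspending the thread
                let _ = listener.accept();
            }
        }
    }
}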

Futures

These are another building block. A Future is the promise of eventually having a value (in fact, in JavaScript these are called Promises).

So for example, you can ask to listen on a network socket, and get a Future back (actually, a Stream, which is like a future but for a sequence of values). This Future won’t contain the response yet, but will know when it’s ready. You can wait() on a Future, which will block until you have a result, and you can also poll() it, asking it if it’s done yet (it will give you the result if it is).

Futures can also be chained, so you can do stuff like future.then(|result| process(result)). The closure passed to then can itself produce another future, so you can chain together things like I/O operations. With chained futures, poll() is how you make progress; each time you call it, it will move on to the next future, provided the existing one is ready.

This is a pretty good abstraction over things like non-blocking I/O.
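
For a quick taste of the API, here’s a tiny (contrived) example against the futures 0.1 crate of the time; future::ok wraps a value in an already-completed future:

extern crate futures;

use futures::future;
use futures::Future;

fn main() {
    // chain a computation onto an already-ready future
    let fut = future::ok::<u32, ()>(41).map(|n| n + 1);
    // wait() blocks the current thread until the future resolves
    assert_eq!(fut.wait(), Ok(42));
}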

Chaining futures works much like chaining iterators. Each and_then (or whatever combinator) call returns a struct wrapping around the inner future, which may contain an additional closure. Closures themselves carry their references and data with them, so this really ends up being very similar to a tiny stack!
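
To see why, here’s a hedged sketch of what a map-like combinator could look like against the futures 0.1 Poll/Async types (MyMap is a made-up name, and the real combinators handle more edge cases):

extern crate futures;

use futures::{Async, Future, Poll};

// wraps the inner future plus the closure to run on its result
struct MyMap<Fut, F> {
    inner: Fut,
    f: Option<F>,
}

impl<Fut, F, T> Future for MyMap<Fut, F>
where
    Fut: Future,
    F: FnOnce(Fut::Item) -> T,
{
    type Item = T;
    type Error = Fut::Error;

    fn poll(&mut self) -> Poll<T, Fut::Error> {
        match self.inner.poll()? {
            // the inner future isn't done; report NotReady upwards
            Async::NotReady => Ok(Async::NotReady),
            // the inner future finished; consume the stored closure
            Async::Ready(value) => {
                let f = self.f.take().expect("polled after completion");
                Ok(Async::Ready(f(value)))
            }
        }
    }
}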

🗼 Tokio 🗼

Tokio’s essentially a nice wrapper around mio that uses futures. Tokio has a core event loop, and you feed it closures that return futures. What it will do is run all the closures you feed it, use mio to efficiently figure out which futures are ready to make a step3, and make progress on them (by calling poll()).

This actually is already pretty similar to what Go was doing, at a conceptual level. You have to manually set up the Tokio event loop (the “scheduler”), but once you do you can feed it tasks which intermittently do I/O, and the event loop takes care of swapping over to a new task when one is blocked on I/O. A crucial difference is that Tokio is single-threaded, whereas the Go scheduler can use multiple OS threads for execution. However, you can offload CPU-bound tasks onto other OS threads and use channels to coordinate, so this isn’t that big a deal.
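
Setting up that event loop looked roughly like this with the tokio-core crate of the time (a sketch; these APIs have since been reorganized into the tokio crate):

extern crate futures;
extern crate tokio_core;

use futures::Stream;
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;

fn main() {
    // the event loop (the "scheduler")
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle).unwrap();

    // a future that processes each incoming connection in turn
    let server = listener.incoming().for_each(|(_socket, peer)| {
        println!("connection from {}", peer);
        Ok(())
    });

    // run the event loop until the server future completes
    core.run(server).unwrap();
}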

While at a conceptual level this is beginning to shape up to be similar to what we had for Go, code-wise this doesn’t look so pretty. For the following Go code:

// error handling ignored for simplicity

func foo(...) ReturnType {
    data := doIo()
    result := compute(data)
    moreData := doMoreIo(result)
    moreResult := moreCompute(moreData)
    // ...
    return someFinalResult
}

The Rust code will look something like

// error handling ignored for simplicity

fn foo(...) -> Future<ReturnType, ErrorType> {
    do_io().and_then(|data| do_more_io(compute(data)))
          .and_then(|more_data| do_even_more_io(more_compute(more_data)))
    // ......
}

Not pretty. The code gets worse if you introduce branches and loops. The problem is that in Go we got the interruption points for free, but in Rust we have to encode this by chaining up combinators into a kind of state machine. Ew.

Generators and async/await

This is where generators (also called coroutines) come in.

Generators are an experimental feature in Rust. Here’s an example:

let mut generator = || {
    let mut i = 0;
    loop {
        yield i;
        i += 1;
    }
};
assert_eq!(generator.resume(), GeneratorState::Yielded(0));
assert_eq!(generator.resume(), GeneratorState::Yielded(1));
assert_eq!(generator.resume(), GeneratorState::Yielded(2));

Functions are things which execute a task and return once. On the other hand, generators return multiple times; they pause execution to “yield” some data, and can be resumed at which point they will run until the next yield. While my example doesn’t show this, generators can also finish executing like regular functions.

Closures in Rust are sugar for a struct containing captured data, plus an implementation of one of the Fn traits to make it callable.

Generators are similar, except they implement the Generator trait4, and usually store an enum representing various states.

The unstable book has some examples on what the generator state machine enum will look like.
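
As a hand-written approximation (not actual compiler output), the counter generator above might lower to something like this:

// hypothetical lowering of the yield-a-counter generator;
// since that generator loops forever, there's no "finished" state
enum CounterState {
    Start,
    // suspended at the yield, carrying the live local variable i
    Suspended { i: u32 },
}

impl CounterState {
    fn resume(&mut self) -> u32 {
        match *self {
            CounterState::Start => {
                *self = CounterState::Suspended { i: 0 };
                0
            }
            CounterState::Suspended { i } => {
                let next = i + 1;
                *self = CounterState::Suspended { i: next };
                next
            }
        }
    }
}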

This is much closer to what we were looking for! Now our code can look like this:

fn foo(...) -> Future<ReturnType, ErrorType> {
    let generator = || {
        let mut future = do_io();
        let data;
        loop {
            // poll the future, yielding each time it's not ready,
            // but if it succeeds then move on
            match future.poll() {
                Ok(Async::Ready(d)) => { data = d; break },
                Ok(Async::NotReady) => (),
                Err(..) => ...
            };
            yield future.polling_info();
        }
        let result = compute(data);
        // do the same thing for `do_more_io()`, etc
    };

    futurify(generator)
}

where futurify is a function that takes a generator and returns a future which on each poll call will resume() the generator, and return NotReady until the generator finishes executing.
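
For concreteness, here’s a hedged sketch of what futurify could look like, assuming the 2017-era Generator trait and futures 0.1 (GenFuture is an illustrative name, and I’ve simplified by having the generator yield ()):

// nightly-only at the time: #![feature(generators, generator_trait)]
extern crate futures;

use std::ops::{Generator, GeneratorState};
use futures::{Async, Future, Poll};

struct GenFuture<G>(G);

impl<G> Future for GenFuture<G>
where
    G: Generator<Yield = ()>,
{
    type Item = G::Return;
    type Error = ();

    fn poll(&mut self) -> Poll<G::Return, ()> {
        match self.0.resume() {
            // the generator yielded: its inner future wasn't ready
            GeneratorState::Yielded(()) => Ok(Async::NotReady),
            // the generator ran to completion: we have our value
            GeneratorState::Complete(value) => Ok(Async::Ready(value)),
        }
    }
}

fn futurify<G: Generator<Yield = ()>>(gen: G) -> GenFuture<G> {
    GenFuture(gen)
}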

But wait, this is even more ugly! What was the point of converting our relatively clean callback-chaining code into this mess?

Well, if you look at it, this code now looks linear. We’ve converted our callback code to the same linear flow as the Go code; however, it has this weird loop-yield boilerplate and the futurify function, and is overall not very neat.

And that’s where futures-await comes in. futures-await is a procedural macro that does the last-mile work of packaging away this boilerplate. It essentially lets you write the above function as

#[async]
fn foo(...) -> Result<ReturnType, ErrorType> {
    let data = await!(do_io());
    let result = compute(data);
    let more_data = await!(do_more_io(result));
    // ....
}

Nice and clean. Almost as clean as the Go code, just that we have explicit await!() calls. These await calls are basically playing the same role as the interruption points that Go code gets implicitly.

And, of course, since it’s using a generator under the hood, you can loop and branch and do whatever else you want as normal, and the code will still be clean.

Tying it together

So, in Rust, futures can be chained together to provide a lightweight stack-like system. With async/await, you can neatly write these future chains, and await provides explicit interruption points on each I/O operation. Tokio provides an event loop “scheduler” abstraction, which you can feed async functions to, and under the hood it uses mio to abstract over low level non-blocking I/O primitives.

These are components which can be used independently — you can use tokio with futures without using async/await. You can use async/await without using Tokio. For example, I think this would be useful for Servo’s networking stack. It doesn’t need to do much parallel I/O (not on the order of thousands of threads), so it can just use multiplexed OS threads. However, we’d still want to pool threads and pipeline data well, and async/await would help here.

Put together, all these components get you something almost as clean as the Go stuff, with a little more explicit boilerplate. Because generators (and thus async/await) play nice with the borrow checker (they’re just enum state machines under the hood), Rust’s safety guarantees are all still in play, and we get to have “fearless concurrency” for programs with a huge number of I/O bound tasks!

Thanks to Arshia Mufti, Steve Klabnik, Zaki Manian, and Kyle Huey for reviewing drafts of this post

  1. Note that this isn’t necessarily true for all network server applications. For example, Apache uses OS threads. OS threads are often the best tool for the job. 

  2. Lightweight threading is also often called M:N threading (also “green threading”) 

  3. In general future combinators aren’t really aware of tokio or even I/O, so there’s no easy way to ask a combinator “hey, what I/O operation are you waiting for?”. Instead, with Tokio you use special I/O primitives that still provide futures but also register themselves with the scheduler in thread local state. This way when a future is waiting for I/O, Tokio can check what the most recent I/O operation was, and associate it with that future so that it can wake up that future again when epoll tells it that that I/O operation is ready. (Edit Dec 2018: This has changed; futures now have a built-in Waker concept that handles passing things up the stack.) 

  4. The Generator trait has a resume() function which you can call multiple times, and each time it will return any yielded data or tell you that the generator has finished running.