Comment by HippoBaro

3 years ago

The argument here is that Rust chose the wrong way to implement coroutines. It went the route of stackless coroutines, which need async/await and colored functions. This creates all the friction the article laments.

But the article also praises Go for its implementation, which is likewise based on coroutines, just of a different kind: stackful coroutines, which have none of these problems.

Rust considered using those (and, at first, that was the project's direction). Ultimately, they went to the stackless model because stackful coroutines require a runtime that preempts them (to do essentially what the kernel does with threads). This was deemed too expensive.

Most people forget, however, that almost no one is using runtime-free async Rust. Most people use Tokio, which is a runtime that does essentially everything the runtime they were trying to avoid building would have done.

So we are left in a situation where most people using async Rust have the worst of both worlds.

That being said, you can use async Rust without an async runtime (or rather, with an extremely rudimentary one with extremely low overhead). People in the embedded world do. But they are few, and even they are often unconvinced by async Rust, for their own reasons.

Rust chose to drop the green thread library so that it could have no runtime, supporting valuable use cases for Rust like embedding a Rust library into a C binary, which we cared about. Go is not really usable for this (technically it's possible, but it's ridiculous for exactly this reason). So those sorts of users are getting a lot of benefit from Rust not having a green threading runtime. As are any users who are not using async for whatever reason.

However, async Rust is not using stackless coroutines for this reason - it's using stackless coroutines because they achieve a better performance profile than stackful coroutines. You can read all about it on Aaron Turon's blog from 2016, when the futures library was first released:

http://aturon.github.io/blog/2016/08/11/futures/

http://aturon.github.io/blog/2016/09/07/futures-design/

It is not the case that people using async Rust are getting the "worst of both worlds." They are getting better performance by default and far greater control over their runtime than they would be using a stackful coroutine feature like Go provides. The trade off is that it's a lot more complicated and has a bunch of additional moving parts they have to learn about and understand. There's no free lunch.

  • People love(d) Rust because it's a pleasant language to write code in while also being insanely performant. Async is taking away the first point and making it miserable to write code in. If this trend continues, it will ultimately destroy the credibility of the language and people will choose other languages. The proposers of async did not take this into account.

    • Naive question, since I tried my hand at Rust years ago but haven't looked at it since: isn't it possible to write another crate that builds Go-like channels? A kind of "write, then lose the reference" function call that places a value on a queue, with an accompanying receiver. That could make life easier for "normal" software development.


Threads are driven by the OS. Something needs to drive coroutines, so there's no way around needing some executor (even a rudimentary one, as in embedded). But to be a versatile and universal systems language, Rust can't just build an executor into the language.

I think that stackless coroutines are better than stackful ones, in particular for Rust. Everything was done correctly by the Rust team.

Again, this is all well and good, as long as people understand the tradeoff and make good technical decisions around it. If they all jump on the async bandwagon blind to its obvious limitations, we end up where the Rust ecosystem is now.

  • Well, the people who jumped on the async bandwagon are deeply involved in the Rust community. So if they do something, others have to assume they are doing it right.

For better or worse, when faced with choices like this, Rust has consistently decided to make sure it's workable for the lowest-level use cases (embedded, drivers, etc.). I respect the consistency, and I appreciate that it's focused on an under-served market, especially compared to e.g. web applications (an over-served market, if anything), even if it's sometimes a bummer for me personally.

> Rust considered using those (and, at first, that was the project's direction). Ultimately, they went to the stackless model because stackful coroutines require a runtime that preempts them (to do essentially what the kernel does with threads). This was deemed too expensive.

Stackful coroutines don't require a preemptive runtime. I certainly hope that we didn't end up with colored functions in Rust because of such a misconception.

  • They often implement soft preemption. Tokio and others, like Glommio, do. Usually it's based on interrupts: the runtime schedules a timer to fire an interrupt, and some code is injected into the interrupt handler.

    This is used to keep track of task runtime quotas so they can yield as soon as possible afterward.

    This is the same technique used in Go and many others for preemption. If you don't add this, futures that don't yield can run forever, stalling the system.

    You are right that it is not strictly necessary, but in practice, it is so helpful as a guard against the yielding problem that it's ubiquitous.

    > I certainly hope that we didn't end up with colored functions in Rust because of such a misconception.

    Misconceptions are everywhere unfortunately!

    • Tokio and Glommio using interrupts is ironically another misconception. They're cooperatively scheduled, so yes, a misbehaving blocking task can stall the scheduler. They can't really interrupt an arbitrary stackless coroutine like a Future, because there's nowhere to store the OS thread context in a way that can be resumed. (Each task could get its own stack, but then it's stackful, with all the concerns of sizing and growing. Or you copy the stack into the task, but then you somehow have to fix up stack pointers in places the runtime is unaware of.)

      https://tokio.rs/blog/2020-04-preemption#a-note-on-blocking

      > Tokio does not, and will not attempt to detect blocking tasks and automatically compensate

    • > This is the same technique used in Go and many others for preemption. If you don't add this, futures that don't yield can run forever, stalling the system.

      You may be referring to this particular issue in Go, https://github.com/golang/go/issues/10958, which I think was somewhat addressed a couple of releases back.

    • > You are right that it is not strictly necessary, but in practice, it is so helpful as a guard against the yielding problem that it's ubiquitous.

      This is honestly shocking to hear. I would think that if people had bugs in their programs they would want them to fail loudly so they can be fixed.


> because stackful coroutines require a runtime that preempts them

I've used stackful coroutines many times in many codebases. They never required or used a runtime or preemption. I'm not sure why having a runtime that preempts them would even be useful, since it defeats the reason most people use stackful coroutines in the first place.

  • "Stackful coroutines," the control-flow primitive, are cumbersome to build on top of "green threads," but for use cases that are mostly about blocking on lots of distinct I/O calls at the same time, people may be indifferent between the two. These conversations are often muddled because the feature shipped most often is called "async" rather than "jump to another stack, please" :(

  • > I've used stackful coroutines many times in many codebases. It never required or used a runtime or preemption.

    Can you tell us which? Go, Haskell, and the other usual suspects all have runtimes with automatic, transparent preemption.

    • It was always C++, for some type of high-performance data-processing engine. Around half the stackful coroutine implementations were off-the-shelf libraries (e.g. Boost.Context) and the other half were purpose-built from scratch, depending on the feature requirements. The typical model is that you have stackful coroutines at a coarse level, e.g. per database query, each of which may dispatch hundreds of concurrent state machines. All execution and I/O scheduling is explicitly done by the software, which enables some significant runtime optimizations.

      If coroutines can be preempted then it introduces a requirement for concurrency control that otherwise doesn't need to exist and interferes with dynamic cache locality optimizations. These are some of the primary benefits of using stackful coroutines in this context.

      Being able to interrupt a stackful coroutine has utility for dealing with an extremely slow or stuck thread but you want this to be zero-overhead unless the thread is actually stuck. In most system designs, the time required to traverse any pair of sequential yield points is well-bounded so things getting "stuck" is usually a bug.

      Letting end-users inject arbitrary code into these paths at runtime does require the ability to interrupt the thread but even that is often handled explicitly by more nuanced means than random preemption. Sometimes "extremely slow" is correct and expected behavior, so you have to schedule around it.

    • Lua comes with this sort of thing. OCaml, Python, and C have libraries providing this sort of thing in decreasing order of adoption.

      Python also comes with 2 features that seem to be stackless coroutines with attached syntax ceremonies, but one of those 2 features is commonly used with a hefty runtime instead of being used for control flow. JavaScript comes with 2 features named similarly to those of Python, but only one of them seems to be "runtime-free" stackless coroutines.

The reason Rust chose stackless coroutines is that they allow zero-cost FFI, which for a systems language is extremely important.