
Comment by HippoBaro

3 years ago

They often implement soft preemption. Tokio and others like Glommio do. Usually, it's based on interrupts. The runtime schedules a timer to fire an interrupt, and some code is injected into the interrupt handler.

The runtime uses this to keep track of each task's runtime quota, so tasks can yield as soon as possible once the quota is exceeded.

This is the same technique used in Go and many others for preemption. If you don't add this, futures that don't yield can run forever, stalling the system.

You are right that it is not strictly necessary, but in practice, it is so helpful as a guard against the yielding problem that it's ubiquitous.
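The runtime-quota idea can be sketched in std-only Rust (the `BUDGET` constant, `BudgetedCounter`, and the hand-rolled polling loop are all illustrative, not Tokio's or Glommio's actual internals): the future does a bounded amount of work per poll and returns `Pending` once its budget is spent, handing the thread back to the scheduler.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative per-poll work budget, not any runtime's real constant.
const BUDGET: u32 = 4;

/// A future that must count to `target`, but voluntarily returns
/// `Pending` whenever it exhausts its per-poll budget, so the
/// scheduler can run other tasks in between.
struct BudgetedCounter {
    count: u32,
    target: u32,
}

impl Future for BudgetedCounter {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let mut budget = BUDGET;
        while self.count < self.target {
            if budget == 0 {
                // Budget spent: ask to be polled again and yield.
                cx.waker().wake_by_ref();
                return Poll::Pending;
            }
            self.count += 1;
            budget -= 1;
        }
        Poll::Ready(self.count)
    }
}

// Minimal no-op waker so we can poll by hand without a real runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Polls the future to completion and returns how many polls it took.
fn count_polls(target: u32) -> u32 {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = BudgetedCounter { count: 0, target };
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(_) = Pin::new(&mut fut).poll(&mut cx) {
            return polls;
        }
    }
}

fn main() {
    // 10 units of work with a budget of 4 per poll => 3 polls.
    println!("polls needed: {}", count_polls(10));
}
```

With 10 units of work and a budget of 4 per poll, the future takes three polls instead of monopolizing one, which is the point: the scheduler gets control back at a bounded interval.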

> I certainly hope that we didn't end up with colored functions in Rust because of such a misconception.

Misconceptions are everywhere unfortunately!

Tokio and Glommio using interrupts is ironically another misconception. They're cooperatively scheduled, so yes, a misbehaving blocking task can stall the scheduler. They can't really interrupt an arbitrary stackless coroutine like a Future because there's nowhere to store the OS thread context in a way that can be resumed. (Either you give each task its own stack, but now it's stackful, with all the concerns of sizing and growing; or you copy the stack into the task, but then you somehow have to fix up stack pointers in places the runtime is unaware of.)

https://tokio.rs/blog/2020-04-preemption#a-note-on-blocking

> Tokio does not, and will not attempt to detect blocking tasks and automatically compensate
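Hand-rolling the state machine that a `Future` compiles down to makes the "nowhere to store the context" point concrete: everything that survives a suspension is a field of an enum, not a stack frame, so there is no saved thread context a signal handler could snapshot and later resume. A minimal std-only sketch (the desugaring shown is approximate, not the compiler's exact output):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly what an async block with one suspension point desugars to:
// all resumable state lives in the enum, not on a stack.
enum TwoStep {
    Start,
    // The "local variable" `a` survives suspension by living here.
    AfterFirst { a: u32 },
    Done,
}

impl Future for TwoStep {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        match *self {
            TwoStep::Start => {
                let a = 40; // first chunk of work
                *self = TwoStep::AfterFirst { a };
                cx.waker().wake_by_ref();
                Poll::Pending // suspend: only the enum needs saving
            }
            TwoStep::AfterFirst { a } => {
                *self = TwoStep::Done;
                Poll::Ready(a + 2) // second chunk of work
            }
            TwoStep::Done => panic!("polled after completion"),
        }
    }
}

// Minimal no-op waker so we can poll by hand without a real runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn run() -> u32 {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = TwoStep::Start;
    loop {
        if let Poll::Ready(v) = Pin::new(&mut fut).poll(&mut cx) {
            return v;
        }
        // Between polls there is no task stack to preempt or restore:
        // the only saved "context" is the enum variant above.
    }
}

fn main() {
    println!("{}", run());
}
```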

> This is the same technique used in Go and many others for preemption. If you don't add this, futures that don't yield can run forever, stalling the system.

You may be referring to this particular issue in Go https://github.com/golang/go/issues/10958 which I think was somewhat addressed a couple of releases back.

> You are right that it is not strictly necessary, but in practice, it is so helpful as a guard against the yielding problem that it's ubiquitous.

This is honestly shocking to hear. I would think that if people had bugs in their programs they would want them to fail loudly so they can be fixed.

  • As someone else said, it is not, strictly speaking, a bug. If your server receives a request that requires very computationally expensive work, is it okay to delay every other request on that core? That's probably not okay, and it'll show in your latency distribution.

    Folks would rather have every future time-sliced so that other tasks get some CPU time in a ~fair way (after all, there is no concept of task priority in most runtimes).

    But you're right: it isn't required, and you could sprinkle every loop of your code with yielding statements. But knowing when to yield is impossible for a future. If nothing else is running, it shouldn't yield. If many things are running but the problem space of the future is small, it probably shouldn't yield either, etc.

    You simply do not have the necessary information in your future to make an informed decision. You need some global entity to keep track of everything and either yield for you or tell you when you should yield. Tokio does the former, Glommio does the latter.

    It gets even more complex when you add IO into the mix because you need to submit IO requests in a way that saturates the network/nvme drives/whatever. So if a future submits an IO request, it's probably advantageous to yield immediately afterward so that other futures may do so as well. That's how you maximize throughput. But as I said, that's a very hard problem to solve.

    • Trying to solve the problem by frequently invoking signal handlers will also show in your latency distribution!

      I guess if someone wants to use futures as if they were goroutines then it's not a bug, but this sort of presupposes that an opinionated runtime is already shooting signals at itself. Fundamentally the language gives you a primitive for switching execution between one context and another, and the premise of the program is probably that execution will switch back pretty quickly from work related to any single task.

      I read the blog about this situation at https://tokio.rs/blog/2020-04-preemption which is equally baffling. The described problem cannot even happen in the "runtime" I'm currently using because io_uring won't just completely stop responding to other kinds of sqe's and only give you responses to a multishot accept when a lot of connections are coming in. I strongly suspect equivalent results are achievable with epoll.


  • There's nothing buggy about a future that never yields, since it can always make progress, but people prefer that a runtime not let all other execution get starved by one operation. That makes it a problem that runtimes and schedulers work to solve, not a bug that needs to be prevented at the language level. A runtime that doesn't solve it isn't buggy, but it probably isn't friendly to use, much like how Go used to have problems with tight loops and shipped changes to make them cause less starvation.
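The "sprinkle your loops with yielding statements" approach discussed above can be sketched with a toy round-robin executor in std-only Rust (the `yield_now` future and the executor here are illustrative toys, not any runtime's real API). Each yield point hands the thread to the next task, producing the fair interleaving the thread describes; delete the `.await` line and one task hogs the thread until it finishes.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A future that returns `Pending` exactly once: the "sprinkled yield".
struct YieldNow { yielded: bool }

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn yield_now() -> YieldNow { YieldNow { yielded: false } }

// Minimal no-op waker so the toy executor needs no real wakeup plumbing.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Runs tasks round-robin; every yield point moves the task to the back.
fn run_round_robin(mut tasks: Vec<Pin<Box<dyn Future<Output = ()>>>>) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    while !tasks.is_empty() {
        let mut task = tasks.remove(0);
        match task.as_mut().poll(&mut cx) {
            Poll::Ready(()) => {}
            Poll::Pending => tasks.push(task), // back of the queue
        }
    }
}

/// Two tasks each log three steps, yielding after every step.
fn interleaved_log() -> Vec<String> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut tasks: Vec<Pin<Box<dyn Future<Output = ()>>>> = Vec::new();
    for name in ["A", "B"] {
        let log = Rc::clone(&log);
        tasks.push(Box::pin(async move {
            for i in 0..3 {
                log.borrow_mut().push(format!("{name}{i}"));
                yield_now().await; // without this, one task hogs the thread
            }
        }));
    }
    run_round_robin(tasks);
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    println!("{:?}", interleaved_log());
}
```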