Comment by acdha
16 hours ago
Basically it’s the non-linear execution flow creating situations which are harder to reason about. Here’s an example I’m trying to help a Node team fix right now: something is blocking the main loop long enough that some of the API calls made in various places are timing out or getting auth errors, because the signature expires between when the request is prepared and when it is actually dispatched: that gap is sporadically tens of seconds instead of milliseconds. Because it’s all async calls, there are hundreds of places which have to be checked, whereas if it was threaded this class of error either wouldn’t be possible or would be limited to the same thread, or to an explicit synchronization primitive such as a concurrency limit on the number of simultaneous HTTP requests to a given target. Also, the call stack and other context is unhelpful until you put effort into observability for everything, because you need to know what happened between hitting await and the exception deep in code which doesn’t share a call stack.
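The failure mode can be sketched in a few lines of Node. This is a toy illustration, not their code: the names, the 50ms signature validity window, and the 200ms busy-wait are all assumptions. A request is signed, its dispatch is queued on the event loop, and synchronous work delays the callback past the signature's validity window:

```javascript
// Assumed validity window for the request signature (illustration only).
const SIGNATURE_TTL_MS = 50;

// "Sign" a request by recording when the signature was computed.
function signRequest() {
  return { signedAt: Date.now() };
}

// "Dispatch" checks how stale the signature is at actual send time.
function dispatch(req) {
  const age = Date.now() - req.signedAt;
  if (age > SIGNATURE_TTL_MS) {
    return { ok: false, error: `signature expired (${age}ms old)` };
  }
  return { ok: true };
}

const req = signRequest();

// Queue the dispatch on the event loop, as an async HTTP client would.
const result = new Promise((resolve) => {
  setTimeout(() => resolve(dispatch(req)), 0);
});

// Meanwhile, synchronous work blocks the loop for ~200ms, so the
// queued dispatch callback cannot run until long after the TTL.
const start = Date.now();
while (Date.now() - start < 200) { /* busy-wait: blocks the loop */ }

result.then((r) => console.log(r.ok ? "sent" : r.error));
```

The point is that the code which blocked the loop and the code which observes the auth error share no call stack, which is why the bug is so hard to localize.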
I still don't get it.
The execution flows of individual async tasks are still linear, much like individual threads are linear.
Scheduling (tasks by the async runtime vs. threads by the OS), however, results in effectively random execution order either way.
If there is a slow resource, both async tasks and threads will pile up, potentially increasing response times.
Whether async or threads, you can easily put a concurrency limit on resources using e.g. semaphores [1]:
- limit yourself to x connections (either wait or return an error)
- limit the resource to x concurrent usages (either wait until other users leave, or return an error)
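In a single-threaded runtime like Node this pattern can be sketched with a hand-rolled counting semaphore (a hypothetical `Semaphore` class for illustration, not a library API; tokio's `Semaphore` from [1] plays the same role in Rust):

```javascript
// Minimal counting semaphore: at most `max` holders at once;
// further callers wait in FIFO order until a permit is released.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiters = [];
  }
  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // No permit free: park until release() hands one over.
    await new Promise((resolve) => this.waiters.push(resolve));
  }
  release() {
    const next = this.waiters.shift();
    if (next) next(); // hand the permit directly to a waiter
    else this.active--;
  }
}

// Usage: allow at most 2 simulated requests in flight at once.
const sem = new Semaphore(2);
let peak = 0;
let current = 0;

async function request(i) {
  await sem.acquire();
  try {
    current++;
    peak = Math.max(peak, current);
    await new Promise((r) => setTimeout(r, 10)); // simulated I/O
    current--;
  } finally {
    sem.release();
  }
}

Promise.all([0, 1, 2, 3, 4].map(request)).then(() => {
  console.log(`peak concurrency: ${peak}`);
});
```

Note that `release()` hands the permit directly to a waiter instead of decrementing and letting the waiter re-increment; that avoids a window in which a newly arriving caller could sneak past the limit.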
Regarding blocking the main loop: with async and non-blocking operations, how would something block the main loop? And why would the main loop being blocked cause API calls to time out? Is it single-threaded?
[1]: https://docs.rs/tokio/latest/tokio/sync/struct.Semaphore.htm...