Comment by movpasd
3 months ago
I think the coloured function problem boils down to the fact that async functions are not naturally a specific kind of sync function, but the other way around.
Functions are so ubiquitous we forget what they really are: a type of guarantee about the conditions under which the code within will run. Those guarantees include the availability of arguments and a place to put the return value (on the stack).
One of the key guarantees about sync functions is the call structure: one thread of execution will be in one function and one function only at any point during the program; the function will only be exited on return (or exception, or panic) or call of another function; and all the local data will be available only for the duration of that function call.
From that perspective, async functions are a _weakening_ of the procedural paradigm where it is possible to "leave behind" an instruction pointer and stack frame to be picked up again later. The ability to suspend execution isn't an additional feature, it's a missing guarantee: a generalisation.
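To make that concrete, here is a minimal Python sketch (using a generator, since that is Python's most bare-bones suspendable function): when it yields, it "leaves behind" its instruction pointer and locals, to be picked up on the next resume.

```python
def counter():
    # Local state (the "stack frame") survives across suspensions.
    n = 0
    while True:
        n += 1
        yield n  # suspend: the instruction pointer is parked here

c = counter()
print(next(c))  # 1
print(next(c))  # 2 -- resumed exactly where it left off, locals intact
```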
There is always an interplay between expressiveness and guarantees in programming languages. Sometimes, it is worth removing a guarantee to create greater expressiveness. This is just an example of that.
I mentioned exceptions earlier — it's no wonder that exceptions and async both get naturally modelled in the same way (be it with monads or algebraic effects or whatever). They are both examples of weakening of procedural guarantees. Exceptions weaken the guarantee that control flow won't exit a function until it returns.
I think the practical ramifications of this are that languages that want async should be thinking about synchronous functions as a special case of suspendable functions — specifically the ones that don't suspend.
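A toy illustration of "sync is the special case that never suspends": an `async def` that never awaits can be driven to completion with a single resume, no event loop required. (`run_sync` is a name I made up for this sketch, not a real API.)

```python
async def double(x):
    # An "async" function that never actually suspends.
    return x * 2

def run_sync(coro):
    # Drive a coroutine that never suspends -- the synchronous
    # special case. A coroutine that does suspend would need a
    # real event loop.
    try:
        coro.send(None)
    except StopIteration as exc:
        return exc.value
    coro.close()
    raise RuntimeError("coroutine suspended; it needs a real event loop")

print(run_sync(double(21)))  # 42
```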
As a counterpoint, I can imagine a lot of implementation complexities. Hardware is geared towards the classical procedural paradigm, which provides an implementation foundation for synchronous procedures. The lack of that for async can partially explain why language authors often don't provide a single async runtime, but have this filled in by libraries (I'm thinking of Rust and Kotlin here).
The meaningful distinction seems to me to be the temporal guarantees of execution, not the call structure. Imagine a sequence of async functions that are sleep-sorted to execute one day apart. A sufficiently smart compiler could compile those to the sync equivalent, because it can see the ordering. Similarly, imagine an async runtime that just calls everything synchronously. I've BS'd an interview with that one before.
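That degenerate "runtime that just calls everything synchronously" fits in a few lines of Python. This is a sketch, not a real scheduler: it drives each coroutine to completion in order and ignores every requested delay.

```python
import types

@types.coroutine
def sleep(seconds):
    # Yield the requested delay up to the runtime.
    yield seconds

async def task(name):
    await sleep(86400)  # "one day apart"
    return name

def run_all(coros):
    # A degenerate runtime: run each coroutine to completion,
    # strictly in order, never actually waiting.
    results = []
    for coro in coros:
        while True:
            try:
                coro.send(None)
            except StopIteration as exc:
                results.append(exc.value)
                break
    return results

print(run_all([task("a"), task("b")]))  # ['a', 'b']
```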
The "sync guarantees" don't really exist either. If you have a(); b(); the compiler may very well reorder them to b(); a(); and give you similar issues to async. It may elide a() entirely (and reclaim the call structures), or the effects of a() might not yet be visible to b(). Synchronous functions also can and do get suspended (preempted by the scheduler), with all the associated issues of async. That comes up frequently in cryptography, kernel, and real-time code.
My comment was really about language semantics. Compilers should respect those semantics and, for instance, only reorder a(); b(); if there is no data dependency between them and therefore no observable consequence to exchanging them. But that's in theory: all abstractions leak.
I believe you can emulate async with call/cc (call-with-current-continuation), but I'm not aware of work in that area, and not many non-Lisp languages support this kind of continuation.
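The call/cc idea can at least be approximated in continuation-passing style: the "rest of the computation" is passed in explicitly as a closure, which is essentially what call/cc would capture for you. A toy Python sketch (names like `async_add` and `pending` are illustrative, not from any real library):

```python
# Continuations parked by "suspending" calls, and the final results.
pending = []
results = []

def async_add(x, y, k):
    # Instead of returning, park the continuation k for the runtime.
    pending.append(lambda: k(x + y))

def main():
    # Compute (1 + 2) + 10, with the rest of the computation
    # reified as nested closures.
    async_add(1, 2, lambda s:
        async_add(s, 10, lambda t:
            results.append(t)))

main()
while pending:  # the "runtime": resume parked continuations
    pending.pop(0)()

print(results)  # [13]
```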