Comment by conaclos

10 days ago

I recently started working with Rust async. The main issue I am currently facing is code duplication: I have to duplicate every function that I want to expose through both asynchronous and blocking APIs. It would be great to have a `maybe-async`. I took a look at the available crates that work around this (maybe-async, bisync), but they all have issues or hard limitations.

There is work happening on keyword generics[0], which would let a function be generic over keywords like `async` and `const`.

For now the best option for writing code that wants to live in both worlds is sans-io. Thomas Eizinger at Firezone has written a good article about this pattern[1]. Not only does it nicely solve the sync/async issue, but it also makes testing easier and opens the door to techniques like deterministic simulation testing (DST)[2].

I have my own writing on the topic[3], which highlights that the problem is wider than just async vs sync due to different executors.
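To make the sans-io idea concrete, here is a minimal sketch (my own illustration, not code from the linked articles): a length-prefixed frame decoder that never touches a socket. The caller, whether blocking or async, does the actual reads and just feeds bytes in.

```rust
// Minimal sans-io frame decoder: it performs no IO itself; the caller
// feeds in whatever bytes it has and asks for complete frames.
// Frames use a one-byte length prefix for simplicity.
struct FrameDecoder {
    buf: Vec<u8>,
}

impl FrameDecoder {
    fn new() -> Self {
        Self { buf: Vec::new() }
    }

    fn push_bytes(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
    }

    fn next_frame(&mut self) -> Option<Vec<u8>> {
        let len = *self.buf.first()? as usize;
        if self.buf.len() < 1 + len {
            return None; // incomplete frame, wait for more bytes
        }
        let frame = self.buf[1..1 + len].to_vec();
        self.buf.drain(..1 + len);
        Some(frame)
    }
}
```

A blocking driver calls `read()` and an async driver calls `read().await`, but both share the decoder unchanged, which is the whole point: the protocol logic is written once.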

0: https://github.com/rust-lang/effects-initiative

1: https://www.firezone.dev/blog/sans-io

2: https://notes.eatonphil.com/2024-08-20-deterministic-simulat...

3: https://hugotunius.se/2024/03/08/on-async-rust.html

  • Keyword generics are probably not happening because they're kind of a hack.

    Algebraic effects are the way forward, but that's a long way off.

    • Yes, I hope in the future we can get to what OCaml 5 has with its algebraic effects system, and hopefully fix any flaws we find there, so that async becomes just syntactic sugar over the underlying effects system.

  • Considering the latest commits and issues in effects-initiative are about 2 years old, the keyword generics initiative seems effectively dead.

    • Rust uses Zulip for lang-related discussions. The 't-lang/effects' channel is still somewhat active.

  • I may have missed something, but how does “sans-io” deal with CPU heavy code? For example, if there’s some heavy decoding/encoding required on the data? Does the event loop only drive the network side and the heavy part is done after the loop is finished?

    • This is a great question and there isn't a definitive answer provided in the sources I linked.

      Broadly I think there are three approaches:

      1. For frequent and small CPU-heavy tasks, just run them on the IO threads. As long as you don't go too long between `.await` points (~10ms), it seems to work okay.

      2. Run your sans-io code on a dedicated CPU thread and do IO from an async runtime. This introduces overhead that needs to be weighed against the amount of CPU work.

      3. Have the sans-io code output something like `Output::DoHeavyCompute { .. }` and later feed the result back as `Input::HeavyComputeResult { .. }`, running the work on a thread pool in between.
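      Approach 3 might look roughly like this (a sketch; `DoHeavyCompute`, `HeavyComputeResult`, and the other names are illustrative, not from any real crate):

```rust
// Illustrative sans-io event types: the state machine never does IO or
// heavy compute itself; it only emits requests and consumes results.
enum Output {
    SendPacket(Vec<u8>),
    DoHeavyCompute { job_id: u64, data: Vec<u8> },
}

enum Input {
    PacketReceived(Vec<u8>),
    HeavyComputeResult { job_id: u64, result: Vec<u8> },
}

struct StateMachine {
    next_job: u64,
    pending: Vec<Output>,
}

impl StateMachine {
    fn new() -> Self {
        Self { next_job: 0, pending: Vec::new() }
    }

    fn handle(&mut self, input: Input) {
        match input {
            Input::PacketReceived(data) => {
                // Defer the expensive work instead of doing it inline.
                let job_id = self.next_job;
                self.next_job += 1;
                self.pending.push(Output::DoHeavyCompute { job_id, data });
            }
            Input::HeavyComputeResult { result, .. } => {
                self.pending.push(Output::SendPacket(result));
            }
        }
    }

    fn poll_output(&mut self) -> Option<Output> {
        self.pending.pop()
    }
}
```

      The driver (sync or async) pops `DoHeavyCompute` outputs, runs them on a thread pool, and feeds `HeavyComputeResult` back in, so the state machine itself never blocks.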


  • > For now the best option to write code that wants to live in both worlds is sans-io

    Thanks for sharing! Reading the articles, it looks to me like a kind of manual reimplementation of the state machine that `async` would otherwise generate? It also makes the code harder to reason about. I am unsure whether it is worth the complexity.

It'll depend immensely on what you're actually doing, but if it's simple enough you may be able to make a macro that subs out the types & awaits.

  • One of the issues I face is that a blocking function takes a generic constrained by a `trait`, while its async version takes a generic constrained by an `async trait`.
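    Concretely, the duplication looks something like this (an illustrative sketch; `Store`, `AsyncStore`, and the lookup functions are made-up names):

```rust
use std::collections::HashMap;

// The same lookup logic has to be written twice because the bound differs.
trait Store {
    fn get(&self, key: &str) -> Option<String>;
}

trait AsyncStore {
    // `async fn` in traits is stable since Rust 1.75.
    async fn get(&self, key: &str) -> Option<String>;
}

fn lookup<S: Store>(store: &S, key: &str) -> Option<String> {
    store.get(key)
}

async fn lookup_async<S: AsyncStore>(store: &S, key: &str) -> Option<String> {
    store.get(key).await
}

struct Mem(HashMap<String, String>);

impl Store for Mem {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}
```

    Every generic function ends up with two copies whose bodies differ only in the trait bound and the `.await`.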

From my perspective, an "async" function is already "maybe-async". The distinction between a `fn -> void` and a `fn -> Future<void>` is that the former runs to completion immediately, whereas the latter may only finish at a later time. If you want to run an async fn in a blocking manner, you would use a blocking executor.
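For illustration, here is a minimal blocking executor built on nothing but the standard library (a busy-polling sketch; a real executor, like `futures::executor::block_on` or Tokio's `Runtime::block_on`, parks the thread instead of spinning):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Drive a future to completion on the current thread with a no-op waker.
// Adequate for futures that never park; illustration only.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            noop_raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        std::thread::yield_now(); // busy-poll; a real executor would park
    }
}

async fn double(x: u32) -> u32 {
    x * 2
}

fn main() {
    // Run the async fn synchronously: "maybe-async" resolved to blocking.
    println!("{}", block_on(double(21)));
}
```

The async fn itself is unchanged; only the caller decides whether it runs under an async runtime or is driven to completion synchronously.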