Comment by Xeoncross
5 days ago
I'm so glad to be out of the dark ages of parallelism. Complaining about Go's race detector, or about exactly which kinds of logical races Rust can't prevent, is such a breath of fresh air compared to all those other single-core languages we're paid to write in, which had threading, async, or concurrency bolted on as an afterthought.
I can only hope Go and Rust continue to improve until the next language generation comes along to surpass them. I honestly can't wait; things have improved so much already.
You know how a modern language like Rust doesn't have unstructured control flow features like "goto"†, only a set of structured control flow features, such as pattern matching, conditionals, loops, and functions?
Structured Concurrency is the same idea, but for concurrency. Instead of writing the code to create an appropriate number of threads, parcel out work, and so on, you just express high-level goals like "do these N pieces of work in any order" or "do A and B, and once either is finished also do C and D", and just as the language handles the actual machine-code jumps for your control flow, it would do the same for your concurrency.
Nothing as close to the metal as Rust has that baked in today, but it is beginning to be a thing in languages like Swift and you can find libraries which take this approach.
† C's goto is defanged compared to the full-blown arbitrary go-to jumps of early languages, but it's still not structured control flow.
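The "do these N pieces of work in any order" goal above can be sketched with Rust's scoped threads, which capture part of this idea (every spawned thread is joined before the scope exits), even if they aren't a full structured-concurrency system. `sum_chunks` is a made-up example, not from the thread:

```rust
// "Do these N pieces of work in any order" with std::thread::scope.
// The scope guarantees all spawned threads finish before it returns,
// which is the core structured-concurrency property: no task outlives
// its parent block.

fn sum_chunks(data: &[u64]) -> u64 {
    let mut partials = Vec::new();
    std::thread::scope(|s| {
        // Spawn one thread per chunk; scoped threads may borrow `data`.
        let handles: Vec<_> = data
            .chunks(2)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect();
        for h in handles {
            partials.push(h.join().unwrap());
        }
    });
    partials.into_iter().sum()
}

fn main() {
    println!("{}", sum_chunks(&[1, 2, 3, 4, 5])); // prints 15
}
```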
> Nothing as close to the metal as Rust has that baked in today
Rust's futures/streams are basically what you're asking for. You need a crate rather than just the bare language, but I don't think that's a particularly important distinction.
> Nothing as close to the metal as Rust has that baked in today
You should have a look at what's going on in Scala-land, with scala-native¹ (and perhaps the Gears² library for direct style/capabilities)
I like this style, though it's still too new and niche for me to have seen it used at scale.
¹: https://scala-native.org/ ²: https://github.com/lampepfl/gears
Rust async streams or rayon come very close to what you describe as structured concurrency. Actually much closer than anything I've seen in other mainstream languages, e.g. Java or Go.
Rayon is about as pure an example of it as you can imagine. In a lot of cases you just need to replace iter() with par_iter() and it just works.
> Actually much closer than anything I've seen in other mainstream languages, e.g. Java or Go.
https://github.com/sourcegraph/conc
The ultimate argument against goto was the proof that structured programming could express any flowchart, e.g. with a single loop around a switch statement over a state variable.
Is there a similar proof for structured concurrency - that it can express anything that unstructured concurrency can?
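For reference, the goto-era construction alluded to above: any flowchart can be simulated by one loop around a switch (`match` in Rust) over a program-counter variable. A toy sketch; `countdown` is a made-up example:

```rust
// Simulating an arbitrary flowchart with a loop plus a switch on a
// "program counter". Each match arm is one node of the flowchart, and
// assigning to `state` is the (now structured) equivalent of a goto.
fn countdown(mut n: i32) -> i32 {
    let mut state = 0; // program counter
    let mut steps = 0;
    loop {
        match state {
            0 => state = if n > 0 { 1 } else { 2 }, // conditional branch
            1 => { n -= 1; steps += 1; state = 0 }  // loop body, jump back
            _ => return steps,                      // exit node
        }
    }
}

fn main() {
    println!("{}", countdown(3)); // prints 3
}
```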
That is typical of the Go design school. Even the channels stuff we already had in the Java and .NET ecosystems, even if those languages don't have syntactic sugar for launching coroutines.
But go-routines!
Well, in .NET land we would be using the Task Parallel Library, or Dataflow built on top of it, with tasks being switched over the various carrier threads.
Or, if feeling fancy, reach for async workflows in F# with computation expressions, even before async/await came to be.
While on the Java side, we would be using java.util.concurrent, with future computations, having fun with Groovy GPars, or Scala frameworks like Akka.
On both platforms, we could even go the extra mile and write our own scheduling algorithms for how those lightweight threads get mapped onto carrier threads.
Naturally, not having the boilerplate to handle all of that, or not needing multiple languages on the same project, makes things easier; hence why we now have all those goodies, with async/await or virtual threads on top.
IMHO, shared-memory parallelism as the norm means we're still in the dark ages.
Yes, shared memory is useful sometimes, but I don't think it should be the norm. I've done parallel stuff in lots of languages, most recently Erlang and Rust, and message passing is so much nicer than having threads all mucking about with the same data when you don't need them to. You can write message-passing parallel code in Rust, but it's not the norm, and you'll have to do a lot of the plumbing yourself.
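The message-passing style described above can be sketched with Rust's standard channels, where workers own their inputs and send results back instead of sharing memory under a lock. `parallel_double` is a made-up example:

```rust
// Message passing with std::sync::mpsc: each worker thread owns its
// piece of data and communicates its result over a channel, so no
// thread ever touches another thread's state.
use std::sync::mpsc;
use std::thread;

fn parallel_double(inputs: Vec<i32>) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for x in inputs {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            tx.send(x * 2).unwrap(); // each worker sends its result back
        }));
    }
    drop(tx); // drop the original sender so rx ends once all workers finish
    for h in handles {
        h.join().unwrap();
    }
    let mut out: Vec<i32> = rx.into_iter().collect();
    out.sort(); // arrival order is nondeterministic
    out
}

fn main() {
    println!("{:?}", parallel_double(vec![1, 2, 3])); // [2, 4, 6]
}
```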
My guess is that the next language generation will be languages that AI generates, optimized to be readable by humans and writable by AI. Maybe even two layers: one optimized for human skimming, and another that actually compiles, optimized for AI to generate and for the computer to compile.
For the current category of LLM-based AI, "AI-optimized" means "old and popular". Even if you add a layer that carries much more detail but is a lot more verbose or whatever, that layer would not be "AI-optimized".
> which are optimized to be readable to humans and writable by AI
How might a language optimized for AI look different from a language optimized for humans?
Especially when LLMs already "speak" human language.