Ditch your (mut)ex, you deserve better

7 days ago (chrispenner.ca)

I think STM looks pretty cool in toy examples, but in practice it's pretty bad. It's very difficult for me to make a strong logical argument about this; it's just based on how it feels.

In Clojure, there are first-class STM primitives with retriable transactions in the standard lib, but every time I tried to use them, the code was slow, difficult to reason about, full of "colored" functions, and non-composable.

Also, unintuitively, the code starts immediately smelling slightly "wrong", as if I tried to put into code my first childish thoughts instead of actually thinking about how the system _should_ work. Like the notorious bank accounts example: yeah, cool, two accounts' balances are TVars — but what if I have 10M accounts? Is the list of accounts also a TVar since sometimes new accounts are created? Are indexes on the list also TVars? How do you persist the lists, how do you synchronize slow persistence and in-memory transactions? How do you synchronize all the deposits with a backup/failover machine? As you continue trying to untangle all this with STM, you start drowning in complexity, and encountering huge contention among transactions touching many parts of your systems, and spewing countless retry logs.

It's not only my own experience — I believe it's widely accepted in the Clojure community that nobody actually uses STM; people use simple atomic updates or queues instead.

  • I have a similar experience, though it's not necessarily related to shortcomings in the STM implementation (which I always felt was quite competent in the space). To expand, I think a few things are true:

    1. I've come to feel that actually managing concurrency is a "below the fold" competency for most engineers. I'm not saying it should be, and people need to understand what a data race is.

    2. As you pointed out, most state in services that I work on is not in need of coordinated updates, so atoms are where you want to be, or single-writer queues. 100% agreed.

    3. Structured concurrency approaches, fibers, and other abstracted runtimes seem to be the best way to get concurrent and parallel work, given points 1 and 2 about how coordinated state really isn't that common for most work on my team.

    A lot of the time when we're updating state we're really having to think about it in the database, employing either a last-write-wins strategy or, when coordination is needed, some form of pessimistic or optimistic locking.

>STM is an optimistic concurrency system. This means that threads never block waiting for locks. Instead, each concurrent operation proceeds, possibly in parallel, on their own independent transaction log. Each transaction tracks which pieces of data it has accessed or mutated and if at commit time it is detected that some other transaction has been committed and altered data which this transaction also accessed, then the latter transaction is rolled back and is simply retried.

I already foresaw (and it gets mentioned later) the problem that if you have many small, frequent operations, they will prevent a big, long operation from happening, because they will always change the state and cause conflicts before the big one can finish. You can easily code yourself an app that will softlock forever.
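
To make the failure mode concrete, here is roughly what the two kinds of transaction look like in Haskell's stm library (the names are just illustrative):

    import Control.Concurrent.STM

    -- A small, frequent transaction...
    deposit :: TVar Int -> Int -> STM ()
    deposit account amount = modifyTVar' account (+ amount)

    -- ...and a big one that reads every account. If any deposit commits
    -- while this is running, its reads are stale and the whole thing
    -- rolls back and retries.
    totalBalance :: [TVar Int] -> STM Int
    totalBalance accounts = sum <$> mapM readTVar accounts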

The post doesn't offer a good solution (it talks about one where there's a tradeoff, but you don't need tradeoffs for this).

The way this gets fixed is to make lock acquisition (or in this case, the priority of the merge?) preemptive (Wound-Wait).

All transactions have a global, ever-incrementing number attached to them, their ID, which determines their "seniority". If you try to take a lock and the lock is held by a transaction with lower seniority (i.e. a higher ID), you kill that transaction, take the lock, and once you're done the transaction you killed is allowed to retry.

In the meantime, if a transaction with lower seniority tries to get the lock, it gets blocked.

This ensures that your program will always finish.

In the case of "lots of frequent small transactions" + "one big, long report", the report will get killed a few times, until your report inevitably becomes the most senior transaction asking for this resource, and is allowed to complete.
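
A rough sketch of that policy (the Txn and Lock types here are made up, just to show the seniority check; a real implementation would live inside the transaction runtime):

    import Control.Concurrent.STM

    data Txn  = Txn  { txnId :: Int, txnWounded :: TVar Bool }
    data Lock = Lock { lockHolder :: TVar (Maybe Txn) }

    -- Wound-wait: a senior transaction (lower ID) wounds a junior holder
    -- and takes the lock; a junior transaction waits for a senior holder.
    acquire :: Txn -> Lock -> STM ()
    acquire me lock = do
      holder <- readTVar (lockHolder lock)
      case holder of
        Nothing -> writeTVar (lockHolder lock) (Just me)
        Just other
          | txnId other > txnId me -> do   -- holder is junior: wound it
              writeTVar (txnWounded other) True
              writeTVar (lockHolder lock) (Just me)
          | otherwise -> retry             -- holder is senior: wait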

I found that a lot of the problems I had been having with mutexes stem from the fact that traditionally the mutex and the data it protects are separate. Bolting them together, like Rust's Mutex<T> does, solves a lot of these problems. It lets you write normal, synchronous code and leave the locking up to the caller, but without making it a nightmare. You can't even access the data without locking the mutex.
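
For comparison, Haskell's MVar bundles the value with its lock in a similar way; the only way to reach the value is to take the lock:

    import Control.Concurrent.MVar

    -- The balance can only be reached by taking the MVar (the lock);
    -- modifyMVar takes it, runs the update, and puts the result back.
    withdraw :: MVar Int -> Int -> IO Bool
    withdraw account amount =
      modifyMVar account $ \balance ->
        if balance >= amount
          then return (balance - amount, True)
          else return (balance, False)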

This isn't an attack on the (very well written) article though. Just wanted to add my two cents.

  • Mutexes suffer from a host of problems, and imo are not a very good concurrency primitive - they were designed to turn single-threaded code into multi-threaded. With today's 8+ cores in most systems, a single point of contention usually quickly becomes a problem.

    They're liable to deadlocks/livelocks, and sometimes not only with other explicitly Mutex-like things (it might happen that some library you use has a lock hidden deep inside).

    They're also often backed by OS primitives (with big overheads) with inconsistent behaviors between platforms (spinlocks, waiting, etc.). We ran into an issue with .NET where their version of Mutex didn't wake up the blocked thread on Linux as fast as on Windows, meaning we needed about 100x the time to serve a request because the thread was sleeping too long.

    There are questions like when to use spinlocks and when to go to sleep and wait, which unfortunately the developer has to answer.

    Not assigning blame here, just pointing out that threading primitives and behaviors don't translate perfectly between OSes.

    Multi-threading is hard; other solutions like queues suffer from issues like backpressure.

    That's why I'm skeptical about Rust's fearless concurrency promise - none of these bugs are solved by just figuring out data races - which are a huge issue, but not the only one.

    • Your view on mutex performance and overhead is outdated, at least for the major platforms: The Rust standard library mutex only requires 5 bytes, doesn't allocate, and only does a syscall on contention. The mutex implementation in the parking_lot library requires just 1 byte per mutex (and doesn't allocate and only does a syscall on contention). This enables very fine-grained, efficient locking and low contention.

      16 replies →

  • Traditionally traditionally, monitors were declared together with the data they contained, and the compiler enforced that the data was not accessed outside the monitor. Per Brinch Hansen wrote a rather bitter broadside against Java's concurrency model when it came out.

  • > You can't even access the data without locking the mutex.

    It's even nicer than that: you can actually access data without locking the mutex, because while you hold a mutable borrow to the mutex, Rust statically guarantees that no one else can acquire locks on the mutex.

    https://doc.rust-lang.org/std/sync/struct.Mutex.html#method....

    • Given a data item of non-thread safe type (i.e. not Mutex<T> etc), the borrow checker checks that there's only ever one mutable reference to it. This doesn't solve concurrency as it prevents multiple threads from even having the ability to access that data.

      Mutex is for where you have that ability, and ensures at runtime that accesses get serialized.

      8 replies →

  • I find it better to model that as an Actor than a mutex, but I guess it's inherently the same thing, except the actor also allows asynchronous operations.

    • You can go full circle and also make operations on a mutex asynchronous. Hence the realization that message passing and shared memory are truly dual.

      6 replies →

  • Sounds like the Java synchronized class.

    • No. It’s not a property of the type, so you can have multiple items under a mutex and you’re not at the mercy of whoever wrote it; it works fine with POD types; it does not force a lock/unlock on each method call (instead the compiler essentially ensures you hold the lock before you can access the data); and the borrow checker is there to ensure you can not leak any sort of sub-state, even though you can call all sorts of helpers which have no requirement to be aware of the locking.

      It’s what synchronized classes wish they had been, maybe.

    • Not at all. With Rust you cannot accidentally leak a reference, and here's the killer: it guarantees these properties at compile time.

This felt like a blast from the past. At a few times reading this article, I had to go back and check that, yes, it's actually a new article from the year 2025 on STM in Haskell and it's even using the old bank account example.

I remember 15 or 20 years ago (has it been that long?) when the Haskell people like dons were banging on about: 1) Moore's law being dead, 2) future CPUs will have tons of cores, and 3) good luck wrangling them in your stone age language! Check out the cool stuff we've got going on over in Haskell!

  • Yeah, remember when we used to care about making better programming languages that would perform faster and avoid errors, instead of just slapping blockchains or AI on everything to get VC money. Good times.

  • And it's all still true, although I would offer the usual argument that concurrency != parallelism, and if you reach for threads & STM to try to speed something up, you'll probably have a bad time. With the overhead of GC, STM retries, false sharing, pointer chasing, etc., you might have a better time rewriting it single-threaded in C/Rust.

    STM shines in a concurrent setting, where you know you'll have multiple threads accessing your system and you want to keep everything correct. And nothing else comes close.

  • To be maximally fair to the Haskell people, they have been enormously influential. Haskell is like Canada: you grow up nicely there and then travel the world to bring your energy to it.

  • I think Intel x86 had some hardware support for STM at some point. Not sure what's the status of that.

    • Disabled on most CPUs, plagued by security issues. I haven't used it but I assume debugging would be extremely painful, since any debug event would abort the transaction.

I've been excited about STMs since I read "Composable Memory Transactions" back in 02005, shortly before it was published, and I still believe in the glorious transactional future, but it's difficult to adopt an STM piecemeal; it kind of wants to own your entire program, the same way that garbage collection or nonblocking I/O do, and more so than multithreading with locks. You kind of have to commit entirely to an STM. The efforts in C# to adopt an STM, which ended around 02010, were a disaster as a result.

The article says a couple of things about STMs that are not true of STMs in general, just true of the Haskell STM the author is familiar with, like a small Brazilian child confidently telling you that almost everyone speaks Portuguese.

One of these is, "STM is an optimistic concurrency system." The possibility of making your concurrency 100% lock-free is one of the most appealing things about STMs, and I think it could be a key to solving the UI latency problem, which just keeps getting worse and worse. Actors and CSP don't normally help here; an Actor is just as "blocking" as a lock. But you can implement an STM with partly pessimistic concurrency, or purely pessimistic, and it might even be a good idea.

Another is, "One last benefit of STM which we haven't yet discussed is that it supports intelligent transaction retries based on conditions of the synchronized data itself." This was an innovation introduced by "Composable Memory Transactions", and many STMs do not support it, including Keir Fraser's awesomely fast version. I am even less certain that it is the correct tradeoff for all uses than I am about purely optimistic synchronization.
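
For reference, that mechanism is Haskell's retry primitive: a transaction can block on a condition over the data it reads, and is only re-run when that data changes. Roughly:

    import Control.Concurrent.STM

    -- Blocks (by retrying) until the account has sufficient funds; the
    -- transaction only re-runs after a TVar it read has been modified.
    limitedWithdraw :: TVar Int -> Int -> STM ()
    limitedWithdraw account amount = do
      balance <- readTVar account
      if balance < amount
        then retry
        else writeTVar account (balance - amount)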

But all of this is why I'm rereading Gray and Reuter's Transaction Processing right now after 25 years. With the benefit of 35 years of hindsight, it's a frustrating mix of inspiring long-range vision and myopic boneheadedness. But it shares a lot of hard-won wisdom about problems like long transactions that pop up in a new guise in STMs.

  • > 02005, 02010

    Are you planning for this comment to be relevant for long enough that the leading zeros will be helpful for disambiguation?

    • Interesting: the notation also implies 2005 and 2010 A.D. and not B.C., or maybe the notation is only about A.D.? Either way, an interesting choice if it was intentional. We say "year 515" without disambiguation, right?

    • Clearly they are referring to the years 1029 and 1032 (decimal). I just want to know what calendar system they're using...

      1 reply →

> A data race occurs any time two threads access the same memory location concurrently and non-deterministically when at least one of the accesses is a write.

From what I understand of the C++ memory model (shared by C and Rust), this is not the definition of data race – a data race occurs when two or more threads access memory concurrently where at least one access is a write and the accesses are unsynchronized. However, synchronized accesses may not have a deterministic ordering, in which case a race condition occurs.

(Confusing as it may be, I believe this is standard terminology.)

In C++ (with a bit of help from boost).

   // synchronize() locks both accounts together (avoiding deadlock) and
   // exposes each one through a locked proxy for the scope of this call.
   bool transfer(boost::synchronized<Account>& sync_from,
                 boost::synchronized<Account>& sync_to, int amount) {
       auto [from, to] = synchronize(sync_from, sync_to);
       if (from->withdraw(amount)) {
           to->deposit(amount);
           return true;
       } else {
           return false;
       }
   }

Hopefully synchronized will make it into the standard library at some point; in the meantime it is not terribly hard to write it yourself if you do not want a boost dependency.

  • Does this avoid the dining philosopher deadlock?

    • yes, 'synchronize' uses a try_lock/backoff algorithm, same as std::scoped_lock.

      edit: it could theoretically livelock, but I believe most if not all STM implementations also do not guarantee forward progress.

It kind of threw me off that we went from golang to Haskell, so I would love to see the bank example actually come back full circle to golang.

This is a nice article, and I appreciate the examples. The next problem to solve is how to persist state on disk across two different accounts after a transfer has been done.

A nice trick in Go is embedding sync.Mutex into any type, e.g. account.Lock() / Unlock().

Shared data is hard no matter what you do. Parallelism and shared data do not mix. STM makes some things easier, but you will still run into problems if you have a lot of it. You must design your code such that you spend a lot of CPU cycles doing single-threaded, no-shared-data calculations between every place where you need to share data. If you can't do that, you can't do parallelism.

When there are only a few places where data needs to be shared, a mutex works, since you can put your best programmer on maintaining just that code, and with only a few places they can figure it out. You can also make a variable atomic, which sometimes works better than a mutex and sometimes worse. You can use STM. However, no matter what you use, the reality of synchronizing between cores means you can't do any of the above "very often".

I wonder what happened to hardware transactional memory. Your CPU caches already keep track of which core is keeping which line in its cache and whether they have modified it via the MESI protocol:

https://en.wikipedia.org/wiki/MESI_protocol

So it has most of the hardware to support transactional memory, only it's not exposed to the user.

Intel had their own version, but it turned out it was buggy, so they disabled it and never put it in any subsequent CPU so that was that.

The fundamental problem here is shared memory / shared ownership.

If you assign exclusive ownership of all accounting data to a single thread and use CSP to communicate transfers, all of these made-up problems go away.
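
A minimal sketch of that shape in Haskell (the Transfer type and the map of balances are just placeholders; a channel stands in for the CSP part):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan
    import qualified Data.Map.Strict as M

    data Transfer = Transfer { fromAcct :: Int, toAcct :: Int, amount :: Int }

    -- One thread owns every balance; other threads only send messages.
    ledgerThread :: Chan Transfer -> M.Map Int Int -> IO ()
    ledgerThread requests balances = do
      t <- readChan requests
      let debited  = M.adjust (subtract (amount t)) (fromAcct t) balances
          credited = M.adjust (+ amount t) (toAcct t) debited
      ledgerThread requests credited

    main :: IO ()
    main = do
      requests <- newChan
      _ <- forkIO (ledgerThread requests (M.fromList [(1, 100), (2, 50)]))
      writeChan requests (Transfer 1 2 25)  -- any thread can request a transfer
      threadDelay 100000                    -- give the ledger thread time to run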

  • Yes, multithreaded problems go away on a single thread.

    Is there any way for an external thread to ask (via CSP) for the state, think about the state, then write back the new state (via CSP)?

    If so, you're back to race conditions - with the additional constraints of a master thread and CSP.

  • CSP suffers from backpressure issues (which is not to say it's bad, but it's not a panacea either)

>Moore's law is dying, beefy single cores are no longer keeping up.

On the other hand, there are many other things that could be done to avoid wasting all the extra power gained over the years which don't even require any parallelism boost.

So cool! Any languages support STM first class besides Haskell?

I've found it interesting to think about trying to adopt data structures like CRDT designed for distributed systems to the problem of local consistency between CPU-local data structures spread across cores for parallelism.

It is rather long-winded, and ends with a donation request. I don't like that style of writing.

> If the programmer forgets to lock the mutex the system won't stop them from accessing the data anyways, and even then there's no actual link between the data being locked and the lock itself, we need to trust the programmers to both understand and respect the agreement. A risky prospect on both counts.

Not every language is like that. I would not use a language that allows this for parallel programming. You can use other approaches, but how can you guarantee that everyone who will edit the code base will not use an unenforced mutex?

C# offers a very convenient way to pack data access and locking into one thing: the "lock" instruction. It does however not let you lock more than one "resource" at a time.

  • All of the "my non-Haskell language does this" comments in the thread are the same (with maybe a Rust exception).

    The "lock" instruction is what the article is telling you to ditch.

    > If the programmer forgets to lock the mutex the system won't stop them from accessing the data anyways

    If the programmer forgets to "lock"

    > and even then there's no actual link between the data being locked and the lock itself

    lock (thing) { return thing.contents // shared, mutable array given out freely to the world }

    'contents' has no notion that it has anything to do with this "lock"ing thing.

>Ugh, a correct transfer function should conceptually just be the composition of our well encapsulated withdraw and a deposit functions, but defining it correctly has forced us to remove the locking from both withdraw and deposit, making both of them less safe to use.

I know OOP isn't cool any more, but the above is what OOP solves.

TFA's transfer() and withdraw() functions aren't compliant with double-entry accounting anyways, so you'd mark them private and only expose Transfer to callers. Let the class handle its own details.

  • OOP does not provide any help in solving the problem in question, and indeed encapsulation is a major obstacle to solving it. I think you haven't understood what problem is being discussed.

  • > I know OOP isn't cool any more, but the above is what OOP solves.

    The article was a lengthy explanation of why the problem occurs in non-STM settings. In OO settings.

    What do you propose as an OO solution?

When working with high-throughput concurrent code like consumer-producer pipelines, it's better to avoid shared mutable state entirely. Something actor-like fits better, and C# or Go channels work wonders.

  • > it's better to avoid shared mutable state entirely.

    Yes! I would even go so far as to have the type system separate the mutable from the non-mutable for this reason!

    > Something actor like fits better and C# or Go channels works wonders.

    Or even STM channels/queues:

      https://hackage.haskell.org/package/stm/docs/Control-Concurrent-STM-TChan.html
      https://hackage.haskell.org/package/stm/docs/Control-Concurrent-STM-TQueue.html
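
    A tiny sketch of the TQueue version (producers and the consumer all synchronize through the queue; the read blocks, by retrying, until an item arrives):

      import Control.Concurrent (forkIO)
      import Control.Concurrent.STM

      main :: IO ()
      main = do
        queue <- newTQueueIO :: IO (TQueue Int)
        mapM_ (\n -> forkIO (atomically (writeTQueue queue n))) [1 .. 10]  -- producers
        item <- atomically (readTQueue queue)                             -- consumer takes one item
        print item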

  • Account balance is necessarily a shared mutable state.

    • It's not necessarily shared. You could assign a single thread to own the account balances, reading requests from a message queue. That probably scales better than locking. A single thread can do several million transactions per second, more than the entire global financial system.

      2 replies →

  • Sure, but sometimes shared mutable state is better, especially from a performance point of view. For example blurring an image.

    • Isn't that a typical case where you don't have to share anything? Divide the image into N chunks, hand one to each of N threads, no sharing; you just need a single sync point at the end to wait for completion.
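
      Roughly, with the async package (Chunk and blurChunk are stand-ins here):

        import Control.Concurrent.Async (mapConcurrently_)

        type Chunk = [Double]        -- stand-in for one slice of the image

        blurChunk :: Chunk -> IO ()  -- stand-in for the per-chunk work
        blurChunk _ = pure ()

        -- Each chunk is processed independently; mapConcurrently_ is the
        -- single sync point that waits for every worker to finish.
        blurImage :: [Chunk] -> IO ()
        blurImage = mapConcurrently_ blurChunk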

tl;dr mutexes are evil because they don't compose, STM is the solution because it does compose, otherwise just avoid shared state, or even state entirely.

Not anything that's not already covered in any undergraduate CS course.

  • Which CS course did you go to?

    • You made me check the programmes of many universities.

      Most parallel programming courses are at Masters level and require specialization, but those are much more advanced (sorting networks, distributed computing on supercomputers and GPU, consensus algorithms, parallelization of linear solvers...)

      There are undergraduate courses that cover simple things like multithreading patterns in mainstream languages, but it seems they are only available if you go to an education institution with both a practical mindset and a firm grip of the fundamentals, which is unfortunately quite rare, as most institutions tend to specialize in one or the other.

      1 reply →