From 12M ops/s to 305M ops/s on a lock-free ring buffer.
In this post, I walk you step by step through implementing a single-producer single-consumer queue from scratch.
This pattern is widely used to share data between threads in the lowest-latency environments.
Your blog footer mentions that code samples are GPL unless otherwise noted. You don't seem to note otherwise in the article, so -- do you consider these snippets GPL licensed?
Actually, I'm not sure. The GPL was meant for the source code of the website itself.
I guess the code samples inside the post are under https://david.alvarezrosa.com/LICENSE
But feel free to ping me if you need a different license; I'm quite open about it.
It would be nice to have an example use case where the technique shows a benefit.
It seems relatively rare to have a single producer and a single consumer thread in a situation where polling a ring buffer is worth it.
Something to add to this: if you're focusing on these low-level optimizations, make sure the device this code runs on is actually tuned.
A lot of people focus on the code and then assume the device in question is only there to run it. There's so much you can tweak. I don't always measure it, but last time I saw at least a 20% improvement in network throughput just by tweaking a few things on the machine.
Agreed. For benchmarking I used this <https://github.com/david-alvarez-rosa/CppPlayground/blob/mai...>, which relies on Google Benchmark and pins producer/consumer threads to dedicated CPU cores.
What else could be improved? Would like to learn :)
Maybe using huge pages?
Kernel tick rate is a pretty big one; most people don't bother and just use what their OS ships with.
Disabling C-states, pinning network interfaces to dedicated cores (and isolating your application from those cores), and `SCHED_FIFO` (`chrt -f 99 <prog>`) help a lot.
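For reference, the pinning and `SCHED_FIFO` part can also be done in-process. A rough Linux-only sketch (the `pin_and_fifo` name is made up; needs root or CAP_SYS_NICE):

    #include <pthread.h>
    #include <sched.h>

    // Roughly what `taskset` plus `chrt -f 99` do, from inside the
    // program: pin the calling thread to `core` (ideally one isolated
    // via the isolcpus= boot parameter) and give it SCHED_FIFO.
    bool pin_and_fifo(int core, int priority = 99) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
            return false;
        sched_param sp{};
        sp.sched_priority = priority;
        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) == 0;
    }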
Transparent hugepages can increase latency without you being aware of when it happens; I usually disable them.
Idk, there's a bunch, but they all depend on your use case. For example, I always disable hyperthreading because I care more about latency than processing power, and I don't want something randomly stealing cache from my workload. But some people have more I/O-bound workloads, and hyperthreading is a strict improvement in those situations.
Random idea: if you have a known sentinel value for "empty", could you avoid the reader needing to read the writer's index? Just try to read: if the slot is empty, the queue is empty; otherwise take the item and put the empty value there. Similarly for writing: check the value, and if it isn't empty, the queue is full.
It seems that in this case as you get contention the faster end will slow down (as it is consuming what the other end just read) and this will naturally create a small buffer and run at good speeds.
The hard part is probably that sentinel and ensuring that it can be set/cleared atomically. In Rust you can use `Option<T>` to get a sentinel for any type (and it very often doesn't take any extra space), but I don't think there is an API to atomically set/clear that flag, even though this should always be possible in principle, since the sentinel that Option picks stays small even when T is very large.
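In C++ the easiest such sentinel is a null pointer, so a minimal sketch of the idea might look like this (illustrative, not from the article; slots hold pointers and nullptr means empty):

    #include <atomic>
    #include <cstddef>

    // Sketch of the sentinel idea: nullptr marks an empty slot, so the
    // reader never needs the writer's index (and vice versa). SPSC only:
    // head_ and tail_ are each owned by a single thread.
    template <typename T, std::size_t N>
    class SentinelQueue {
        std::atomic<T*> slots_[N];
        std::size_t head_ = 0;  // producer-private
        std::size_t tail_ = 0;  // consumer-private

    public:
        SentinelQueue() {
            for (auto& s : slots_) s.store(nullptr, std::memory_order_relaxed);
        }

        bool push(T* item) {  // item must be non-null
            if (slots_[head_].load(std::memory_order_acquire) != nullptr)
                return false;  // slot not yet consumed: queue is full
            slots_[head_].store(item, std::memory_order_release);
            head_ = (head_ + 1) % N;
            return true;
        }

        T* pop() {
            T* item = slots_[tail_].load(std::memory_order_acquire);
            if (item == nullptr) return nullptr;  // queue is empty
            slots_[tail_].store(nullptr, std::memory_order_release);
            tail_ = (tail_ + 1) % N;
            return item;
        }
    };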
Yeah, or you could put a generation number in each slot adjacent to T and a read will only be valid if the slot's generation number == the last one observed + 1, for example. But ultimately the reader and writer still need to coordinate here, so we're just shifting the coordination cache line from the writer's index to the slot.
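One way to realize that is Vyukov-style per-slot sequence numbers, adapted here to SPSC (a sketch with made-up names, not the article's code):

    #include <atomic>
    #include <cstddef>

    // Each slot carries its own sequence counter: the producer publishes
    // by bumping it past its index, the consumer frees it by bumping it
    // a full lap ahead. All coordination lives in the slot itself.
    template <typename T, std::size_t N>
    class GenQueue {
        struct Slot {
            std::atomic<std::size_t> seq;
            T value;
        };
        Slot slots_[N];
        std::size_t head_ = 0;  // producer-private
        std::size_t tail_ = 0;  // consumer-private

    public:
        GenQueue() {
            for (std::size_t i = 0; i < N; ++i)
                slots_[i].seq.store(i, std::memory_order_relaxed);
        }

        bool push(const T& v) {
            Slot& s = slots_[head_ % N];
            if (s.seq.load(std::memory_order_acquire) != head_)
                return false;  // consumer hasn't freed this slot: full
            s.value = v;
            s.seq.store(head_ + 1, std::memory_order_release);  // publish
            ++head_;
            return true;
        }

        bool pop(T& v) {
            Slot& s = slots_[tail_ % N];
            if (s.seq.load(std::memory_order_acquire) != tail_ + 1)
                return false;  // producer hasn't published yet: empty
            v = s.value;
            s.seq.store(tail_ + N, std::memory_order_release);  // free for next lap
            ++tail_;
            return true;
        }
    };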
I think the key difference is that they only need to coordinate when the reader and writer are close together. If that slows one end down they naturally spread apart. So you don't lose throughput, only a little latency in the contested case.
Great article, thanks for sharing. And such a lovely website too :)
Thanks for the feedback <3
Great post!
Would you mind expanding on the correctness guarantees enforced by the atomic semantics used? Are they ensuring two threads can't push to the same slot nor pop the same value from the ring? This type of atomic coordination usually comes from CAS or atomic-increment calls, which I'm not seeing, so I'm interested in hearing your take on it.
I see you replied on a comment below with:
> note that there are only one consumer and one producer
That clarifies things, as you don't need multi-thread coordination on reads or writes when assuming a single producer and a single consumer.
Exactly, that's right
Thanks! That's not ensured; the optimizations are only valid due to the constraints:
- A single producer thread
- A single consumer thread
- Fixed buffer capacity
So to answer
> Are they ensuring two threads can't push to the same slot nor pop the same value from the ring?
No need for it in this use case :)
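For reference, the overall shape that those constraints make safe looks roughly like this (a minimal sketch, not the post's exact code):

    #include <atomic>
    #include <cstddef>

    // Minimal SPSC ring: only the producer writes head_, only the
    // consumer writes tail_, so plain acquire/release loads and stores
    // are enough and no CAS or atomic increment is needed.
    template <typename T, std::size_t N>
    class SpscQueue {
        T buffer_[N];
        std::atomic<std::size_t> head_{0};  // next slot to write
        std::atomic<std::size_t> tail_{0};  // next slot to read

    public:
        bool push(const T& value) {
            const auto head = head_.load(std::memory_order_relaxed);
            const auto next = (head + 1) % N;
            if (next == tail_.load(std::memory_order_acquire))
                return false;  // full
            buffer_[head] = value;                         // write the data...
            head_.store(next, std::memory_order_release);  // ...then publish it
            return true;
        }

        bool pop(T& value) {
            const auto tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire))
                return false;  // empty
            value = buffer_[tail];                                   // read the data...
            tail_.store((tail + 1) % N, std::memory_order_release);  // ...then free the slot
            return true;
        }
    };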
This is an SPSC queue -- there are no multiple writers to coordinate, nor multiple readers. That simplifies the design.
I had what I thought was a pretty good implementation, but I wasn't aware of the cache line bouncing. Looks like I've got some updates to make.
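For what it's worth, the usual fix is just to pad each index onto its own cache line, something like this (64-byte lines assumed; the `Indices` name is illustrative):

    #include <atomic>
    #include <cstddef>

    // Keep the producer's and consumer's indices on separate cache
    // lines so writes to one don't evict the other (false sharing).
    // C++17's std::hardware_destructive_interference_size can replace
    // the hard-coded 64.
    struct Indices {
        alignas(64) std::atomic<std::size_t> head{0};  // producer-owned
        alignas(64) std::atomic<std::size_t> tail{0};  // consumer-owned
    };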
Glad that it helps :)
Is there a C library where I can get these data structures for free?
Random q: What was the first cpu to support atomic instructions?
I don't know but the IBM 360 and the DEC PDP-10 both had them. Those are the earliest systems I ever saw.
Super fun, def gonna try this on my own time later
Feel free to share your findings
It's lock-free because it uses ordered loads and stores, which is also how you implement locks. I find the semantic distinction unconvincing. The post is really about how slow the default STL mutex implementation is.
This is in C++; other languages have different atomic primitives.
Don't most people use C++11 atomics now? You have SeqCst, Release, Acquire, and Relaxed (with Consume deprecated due to the difficulty of implementing it). You can do loads, stores, and exchanges with each ordering type. Zig, Rust, and C all use the same orderings. I guess Java has its own memory model since it's been around a lot longer, but most people have standardized around C++'s design.
Which is a slight shame since Load-Linked/Store-Conditional is pretty cool, but I guess that's limited to ARM anyways, and now they've added extensions for CAS due to speed.
I've taken an interest in lock-free queues for ultra-low-power embedded... think Cortex-M0, or even AVR/PIC.
Things get interesting when you're working with a CPU that lacks the ldrex/strex assembly instructions that make this all work. I think your only options at that point are to disable/enable interrupts. If anyone has any insights into this constraint, I'd love to hear it.
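On a single-core part the interrupt-masking route looks roughly like this (a sketch assuming a CMSIS-style toolchain for the PRIMASK intrinsics; `fetch_add_u32` is a made-up name):

    #include <cstdint>

    // Emulate an atomic read-modify-write on a core without LL/SC by
    // masking interrupts around it. __get_PRIMASK/__set_PRIMASK and
    // __disable_irq come from the CMSIS device headers; saving and
    // restoring PRIMASK keeps this safe to nest.
    static inline uint32_t fetch_add_u32(volatile uint32_t* p, uint32_t v) {
        const uint32_t primask = __get_PRIMASK();
        __disable_irq();
        const uint32_t old = *p;
        *p = old + v;
        __set_PRIMASK(primask);
        return old;
    }

Note that plain aligned word loads/stores are already atomic on these cores, so an SPSC index pair may not even need this; it's the read-modify-write operations that do.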
LL/SC is still hinted at in the C++11 model with std::atomic<T>::compare_exchange_weak:
https://en.cppreference.com/w/cpp/atomic/atomic/compare_exch...
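The weak variant is allowed to fail spuriously (an LL/SC artifact), which is why it's meant to live in a retry loop. A tiny sketch (the `fetch_max` name is made up):

    #include <atomic>

    // compare_exchange_weak refreshes `current` on failure, so the loop
    // simply retries until the update succeeds or the max no longer
    // needs updating.
    void fetch_max(std::atomic<int>& target, int value) {
        int current = target.load(std::memory_order_relaxed);
        while (current < value &&
               !target.compare_exchange_weak(current, value,
                                             std::memory_order_release,
                                             std::memory_order_relaxed)) {
        }
    }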
Really? Pretty much all atomics I've used have load, store of various integer sizes. I wrote a ring buffer in Go that's very similar to the final design here using similar atomics.
https://pkg.go.dev/sync/atomic#Int64
Nice one, thanks for sharing. Do you wanna share the ring buffer code itself?
They generally map directly to concepts in the CPU architecture. On many architectures, load/store instructions are already guaranteed to be atomic as long as the address is properly aligned, so atomic load/store is just a load/store. Non-relaxed ordering may emit a variant load/store instruction or a separate barrier instruction. Compare-exchange will usually emit a compare and swap, or load-linked/store-conditional sequence. Things like atomic add/subtract often map to single instructions, or might be implemented as a compare-exchange in a loop.
The exact syntax and naming will of course differ, but any language that exposes low-level atomics at all is going to provide a pretty similar set of operations.
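A concrete illustration (typical codegen, not guaranteed; the `bump` name is made up):

    #include <atomic>

    // On x86-64 this typically compiles to a single `lock xadd`; on
    // ARMv8.0 to an ldxr/stxr retry loop; on ARMv8.1+ (LSE) to a single
    // ldadd instruction.
    int bump(std::atomic<int>& counter) {
        return counter.fetch_add(1, std::memory_order_relaxed);
    }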
The JVM has almost the same (the C++ memory model was modeled after the JVM's, with some subtle fixes).
Yeah, this is quite specific to C++ (at a syntax level)
Huh? Other languages that compile to machine code and offer control over struct layout and access to the machine's atomics will work the same way.
Sure, C++ has a particular way of describing atomics in a cross-platform way, but the actual hardware operations are not specific to the language.
Yeah, different languages will have different syntaxes and ways of using atomics
But at the hardware level they're all kind of the same.
It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Also, it doesn't have fences on the store, has extra branches that shouldn't be there, and is written in really stylistically weird C++.
Maybe an LLM that likes a different language more, copying a broken implementation off GitHub? Mostly commenting because the initial replies are "best" and "lol", though I sympathise with one of those.
> It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Are we reading the same code? The stores are clearly after value accesses.
> Also doesn't have fences on the store
?? It uses acquire/release semantics seemingly correctly. Explicit fences are not required.
Push:

    buffer_[head] = value;
    head_.store(next_head, std::memory_order_release);
    return true;
There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
Sorry, but that's not actually true. There are no data races; the atomics prevent that (note that there is only one consumer and one producer).
Regarding the style, it follows the "almost always auto" idea from Herb Sutter
If you enforce that the buffer size is a power of 2, you can just use a mask to do the modulo part.
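Something like this (illustrative; `next_index` is a made-up name):

    #include <cstddef>

    // With a power-of-two capacity N, (i + 1) % N reduces to a mask,
    // avoiding an integer division in the hot path.
    template <std::size_t N>
    constexpr std::size_t next_index(std::size_t i) {
        static_assert((N & (N - 1)) == 0, "N must be a power of 2");
        return (i + 1) & (N - 1);
    }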