We also wrote up a very concise, high-level summary here, if you want the short version: https://toziegler.github.io/2025-12-08-io-uring/
In your high level "You might not want to use it if" points, you mention Docker but not why, and that's odd. I happen to know why: io_uring syscalls are blocked by default in Docker, because io_uring is a large surface area for attacks, and this has proven to be a real problem in practice. Others won't know this, however. They also won't know that io_uring is similarly blocked in widely used cloud sandboxes, Android, and elsewhere. Seems like a fine place to point this stuff out: anyone considering io_uring would want to know about these issues.
Very good point! You’re absolutely right: The fact that io_uring is blocked by default in Docker and other sandboxes due to security concerns is important context, and we should have mentioned it explicitly there. We'll update the post, and happy to incorporate any other caveats you think are worth calling out.
Do you know if this still applies if you run a docker container with host networking enabled?
Is this something likely to ever change?
Thanks! This explained to me very simply what the benefits are in a way no article I’ve read before has.
That’s great to hear! We are happy it helped.
Really excellent research and well written, congrats. Shows that io_uring really brings extra performance when properly used, and not simply as a drop-in replacement.
> With IOPOLL, completion events are polled directly from the NVMe device queue, either by the application or by the kernel SQPOLL thread (cf. Section 2), replacing interrupt-based signaling. This removes interrupt setup and handling overhead but disables non-polled I/O, such as sockets, within the same ring.
> Treating io_uring as a drop-in replacement in a traditional I/O-worker design is inadequate. Instead, io_uring requires a ring-per-thread design that overlaps computation and I/O within the same thread.
1) So does this mean that if you want to take advantage of IOPOLL, you should use two rings per thread: one for network and one for storage?
2) SQPoll is shown in the graph as outperforming IOPoll. I assume both polling options are mutually exclusive?
3) I'd be interested in what the considerations are (if any) for using IOPoll over SQPoll.
4) Additional question: I assume for a modern DBMS you would want to run this as thread-per-core?
Thanks a lot for the kind words, we really appreciate it!
Regarding your questions:
1) Yes. If you want to take advantage of IOPOLL while still handling network I/O, you typically need two rings per thread: an IOPOLL-enabled ring for storage and a regular ring for sockets and other non-polled I/O (see the sketch after these answers).
2) They are not mutually exclusive. SQPOLL was enabled in addition to IOPOLL in the experiments (+SQPoll). SQPOLL affects submission, while IOPOLL changes how completions are retrieved.
3) The main trade-off is CPU usage vs. latency. SQPOLL spawns an additional kernel thread that busy-spins to issue I/O requests from the ring. With IOPOLL, interrupts are not used; instead, the device queues are polled (this does not necessarily result in 100% CPU usage on the core).
4) Yes. For a modern DBMS, a thread-per-core model is the natural fit. Rings should not be shared between threads; each thread should have its own io_uring instance(s) to avoid synchronization and for locality.
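
To make 1) and 2) concrete, here is a minimal sketch (our illustration only, not code from the paper; the ring sizes and SQPOLL idle time are arbitrary) of one worker thread setting up an IOPOLL+SQPOLL ring for O_DIRECT storage next to a regular ring for sockets:

    #include <liburing.h>

    int main(void)
    {
        struct io_uring storage_ring, net_ring;
        struct io_uring_params p = { 0 };

        /* Completion polling for storage (requires O_DIRECT fds); adding
         * SQPOLL also offloads submission to a kernel thread, which is the
         * +SQPoll configuration from the paper. */
        p.flags = IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL;
        p.sq_thread_idle = 2000;   /* ms of idle before the SQPOLL thread sleeps */
        if (io_uring_queue_init_params(256, &storage_ring, &p) < 0)
            return 1;

        /* Regular interrupt-driven ring for sockets and other non-polled I/O. */
        if (io_uring_queue_init(256, &net_ring, 0) < 0)
            return 1;

        /* ... submit O_DIRECT reads/writes on storage_ring and socket ops on
         * net_ring, and reap both completion queues on this same thread ... */

        io_uring_queue_exit(&net_ring);
        io_uring_queue_exit(&storage_ring);
        return 0;
    }

The SQPOLL thread busy-polls the submission queue until sq_thread_idle expires, which is exactly the CPU-vs-latency trade-off mentioned in 3).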
> Figure 9: Durable writes with io_uring. Left: Writes and fsync are issued via io_uring or manually linked in the application. Right: Enterprise SSDs do not require fsync after writes.
This claim of not requiring fsync sounds strange to me. I may be wrong, but if the point is that enterprise SSDs have buffers and power-failure safety modes that work fine without an explicit fsync, I think that is too optimistic a view.
I suspect it’s a misunderstanding. PLP capacitors let the drive report a sync request as complete without first flushing writes to media, but they don’t let the software skip making that call.
Yeah that's just flat out not correct. If you're writing through a file system or the buffer cache and you don't fsync, there is no guarantee your data will still be there after, say, a power loss or a system panic. There's no guarantee it's even been passed to the device at all when an asynchronous write returns.
Yes, for file systems these statements are true.
However, in our experiments (including Figure 9), we bypass the page cache and issue writes using O_DIRECT to the block device. In this configuration, write completion reflects device-level persistence. For consumer SSDs without PLP, completions do not imply durability and a flush is still required.
> "When an SSD has Power-Loss Protection (PLP) -- for example, a supercapacitor that backs the write-back cache -- then the device's internal write-back cache contents are guaranteed durable even if power is lost. Because of this guarantee, the storage controller does not need to flush the cache to media in a strict ordering or slow way just to make data persistent." (Won et al., FAST 2018) https://www.usenix.org/system/files/conference/fast18/fast18...
We will make this more explicit in the next revision. Thanks.
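
To make the Figure 9 setup concrete: with io_uring, a write can be chained to an fdatasync via IOSQE_IO_LINK so the kernel only starts the sync once the write has completed. A minimal sketch (our illustration only, assuming a buffered/non-O_DIRECT file where the flush is still needed):

    #include <liburing.h>
    #include <sys/types.h>

    /* Issue a write and an fdatasync as one linked chain; returns 0 once both
     * completed successfully. (SQEs are assumed to be available in the ring.) */
    static int write_durably(struct io_uring *ring, int fd,
                             const void *buf, unsigned len, off_t off)
    {
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int err = 0;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, fd, buf, len, off);
        sqe->flags |= IOSQE_IO_LINK;   /* start the fsync only after the write */

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);

        io_uring_submit(ring);

        /* Two completions: the write, then the fsync (which comes back as
         * -ECANCELED if the write failed). */
        for (int i = 0; i < 2; i++) {
            if (io_uring_wait_cqe(ring, &cqe) < 0)
                return -1;
            if (cqe->res < 0)
                err = cqe->res;
            io_uring_cqe_seen(ring, cqe);
        }
        return err;
    }

With O_DIRECT writes to a PLP-protected enterprise SSD, the second SQE is what the right-hand side of Figure 9 suggests can be dropped, as discussed above.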
Really nice paper.
The practical guidelines are useful. Basically “first prove I/O is actually your bottleneck, then change the architecture to use async/batching, and only then reach for features like fixed buffers / zero-copy / passthrough / polling.”
I'm curious, how sensitive are the results to kernel version & deployment environments? Some folks run LTS kernels and/or containers where io_uring may be restricted by default.
Thank you for the feedback on the paper and the guidelines.

Regarding kernel versions: io_uring is under very active development (see the plot at https://x.com/Tobias__Ziegler/status/1997256242230915366). @axboe and team regularly push new features and fixes out. Therefore, some features (e.g., zero-copy receive) require very new kernels.

Regarding deployments: Please refer to the other discussions regarding cloud deployment and containers. Besides the discussed limitations, some features (e.g., zero-copy receive) also require up-to-date drivers, so the results are also sensitive to the hardware used.

Note that the three key features of io_uring (unified interface, async, batching) are available in very early kernels. As our paper demonstrates, these are the key aspects that unlock the architectural (and the biggest performance) improvements.
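
One practical way to deal with this kernel-version sensitivity is to probe at startup which opcodes the running kernel supports. A small sketch with liburing (our illustration only; the IORING_OP_* constants require reasonably new headers):

    #include <liburing.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the kernel which io_uring opcodes it supports. */
        struct io_uring_probe *probe = io_uring_get_probe();
        if (!probe) {
            fprintf(stderr, "io_uring unavailable (old kernel or blocked)\n");
            return 1;
        }

        printf("NVMe passthrough (IORING_OP_URING_CMD): %s\n",
               io_uring_opcode_supported(probe, IORING_OP_URING_CMD) ? "yes" : "no");
        printf("zero-copy send (IORING_OP_SEND_ZC): %s\n",
               io_uring_opcode_supported(probe, IORING_OP_SEND_ZC) ? "yes" : "no");

        io_uring_free_probe(probe);
        return 0;
    }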
I just today realized io_uring is meant to be read as "I.O.U. Ring" which perfectly describes how it works.
If it was meant to be read that way, it should have been named iou_ring.
Are you sure it's not I.O. µ (micro) ring?
The u is for userspace.
I/O Userspace Ring (ring buffers)
Small nitpick: malloc is not a system call.
That jumped out at me too. It stood out amid the (excellent) performance model validation.
Good catch! We will fix this in the next version and change it to brk/sbrk or mmap.
Hi, I am one of the authors. Happy to take questions.
Do any cloud hosting providers make io_uring available? Or are they all blocking it due to perceived security risks? I think I have a killer use case and would love to experiment - but it’s unclear who even makes this available.
Adding to the answer from cptnntsoobv: All major cloud providers (AWS, Azure, GCP) and also smaller providers (e.g., Hetzner) support io_uring on VMs. We ran some tests on AWS, for example. The experiments in the paper were done on our lab infrastructure to test io_uring on 400G NICs (which are not yet widely available in the cloud).
Short answer: yes.
I guess you are thinking of io_uring being disabled in Docker-esque runtimes. This restriction does not apply to a VM, especially if you can run your own kernel (e.g., EC2).
Experiment away!
Great paper! Love the easy to understand explanations and detailed graphs :)
> NVMe passthrough skips abstractions. To access NVMe devices directly, io_uring provides the OP_URING_CMD opcode, which issues native NVMe commands via the kernel to device queues. By bypassing the generic storage stack, passthrough reduces software-layer overhead and per-I/O CPU cost. This yields an additional 20% gain, increasing throughput to 300 k tx/s (Figure 5, +Passthru).
Which userspace libraries support this? Liburing does, but Zig's standard library (relevant because a tigerbeetler wrote the article) does not; it just silently gives out corrupt values from the completion queue.
On the Rust side, rustix_uring does not support this and, more broadly, doesn't let you set kernel flags for features it doesn't support. tokio-rs/io-uring looks like it might from the docs, but I can't figure out how (if anyone uses it there, let me know).
Great point. It is indeed the case that most io_uring libraries are lagging behind liburing. There are two main reasons for this: (1) io_uring development is moving very quickly (see the linked figure in another comment), and (2) liburing is maintained by Axboe himself and is therefore tightly coupled to io_uring’s ongoing development. One pragmatic "solution" is to wrap liburing with FFI bindings and maintain those yourself.
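
For reference, the raw path through liburing and the uapi headers looks roughly like this for a single passthrough read. This is a heavily simplified sketch of ours, not code from the paper: the device path /dev/ng0n1, the 4 KiB LBA size, and the NVMe read opcode 0x02 are assumptions (the opcode comes from the NVMe spec and is not exported by <linux/nvme_ioctl.h>), and the ring must be created with big SQEs:

    #include <liburing.h>
    #include <linux/nvme_ioctl.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NVME_CMD_READ 0x02   /* NVMe I/O read opcode (from the spec; assumption) */
    #define LBA_SIZE      4096   /* assumption: namespace formatted with 4 KiB LBAs */

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        void *buf;

        /* Passthrough needs 128-byte SQEs (the NVMe command is embedded in the
         * SQE); 32-byte CQEs carry the full NVMe completion. */
        if (io_uring_queue_init(8, &ring,
                                IORING_SETUP_SQE128 | IORING_SETUP_CQE32) < 0)
            return 1;

        /* Open the NVMe character device (/dev/ngXnY), not the block device. */
        int fd = open("/dev/ng0n1", O_RDONLY);
        if (fd < 0)
            return 1;
        int nsid = ioctl(fd, NVME_IOCTL_ID);
        if (nsid < 0)
            return 1;
        if (posix_memalign(&buf, 4096, LBA_SIZE))
            return 1;

        sqe = io_uring_get_sqe(&ring);
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_URING_CMD;
        sqe->fd = fd;
        sqe->cmd_op = NVME_URING_CMD_IO;

        /* The native NVMe command lives in the 80-byte command area of the SQE. */
        struct nvme_uring_cmd *cmd = (struct nvme_uring_cmd *) sqe->cmd;
        memset(cmd, 0, sizeof(*cmd));
        cmd->opcode   = NVME_CMD_READ;
        cmd->nsid     = (uint32_t) nsid;
        cmd->cdw10    = 0;                        /* starting LBA, low 32 bits  */
        cmd->cdw11    = 0;                        /* starting LBA, high 32 bits */
        cmd->cdw12    = 0;                        /* (number of LBAs - 1): one LBA */
        cmd->addr     = (uint64_t)(uintptr_t) buf;
        cmd->data_len = LBA_SIZE;

        io_uring_submit(&ring);
        io_uring_wait_cqe(&ring, &cqe);
        printf("passthrough read: res=%d\n", cqe->res);   /* 0 on success */
        io_uring_cqe_seen(&ring, cqe);
        io_uring_queue_exit(&ring);
        return 0;
    }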
That's what liburing_rs[1] is doing.
I am foolishly trying to make pure rust bindings myself [2].
[1] https://docs.rs/axboe-liburing/
[2] https://docs.rs/hringas
But yeah the liburing test suite is massive, way more than anyone else has.
This is one of the most easy-to-follow papers on io_uring and its benefits. Good work!
Thank you for the feedback, glad to hear that!
I have a better way to do this. My optimizer basically finds these hot paths automatically using a type of PGO that’s driven by the real workloads and the RDBMS plug-in branches to its static output.