Comment by raggi
17 days ago
> which seems too high for a system call anyways
spectre & meltdown.
> you get 2 us of processing in addition to a full system call to reach 4 Gb/s
TCP has route binding, UDP does not (connect(2) helps one side, but not both sides).
> “But encryption is slow.” Not that slow.
Encryption _is slow_ for small PDUs, at least with the common constructions we're currently using. Everyone's essentially been optimizing for and benchmarking TCP with large frames.
If you hot-loop the state as the micro-benchmarks do, you can do better, but you still see a very visible cost of state setup that only starts to amortize decently well above 1024-byte payloads. Eradicate a bunch of cache efficiency by removing the tightness of the loop and this amortization boundary shifts quite far to the right, up into tens of kilobytes.
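For illustration, here's a rough sketch of that kind of micro-benchmark in Go (assuming golang.org/x/crypto/chacha20poly1305; the payload sizes and iteration count are arbitrary). Even with the AEAD state kept hot, small payloads are dominated by fixed per-call costs, and throughput only approaches the cipher's peak in the multi-KB range:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"time"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	key := make([]byte, chacha20poly1305.KeySize)
	nonce := make([]byte, chacha20poly1305.NonceSize)
	rand.Read(key)

	const iters = 200000
	for _, size := range []int{64, 256, 1024, 4096, 16384} {
		// State is created once per size and the loop is hot; moving New()
		// into the inner loop approximates per-packet state setup and pushes
		// the break-even point further right.
		aead, err := chacha20poly1305.New(key)
		if err != nil {
			panic(err)
		}
		msg := make([]byte, size)
		dst := make([]byte, 0, size+aead.Overhead())

		start := time.Now()
		for i := 0; i < iters; i++ {
			// Nonce reuse is only acceptable here because this is a
			// throwaway benchmark, not real traffic.
			dst = aead.Seal(dst[:0], nonce, msg, nil)
		}
		elapsed := time.Since(start)
		fmt.Printf("%6dB payload: %7.0f ns/op, %5.2f Gb/s\n",
			size,
			float64(elapsed.Nanoseconds())/iters,
			float64(size)*8*iters/elapsed.Seconds()/1e9)
	}
}
```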
---
All of the above applies, plus the additional framing overheads come into play. Hell, even the OOB data blocks are quite expensive to actually validate; it's not a good API for fixing this problem, it's just the API we have, shoved over BSD sockets.
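To make the OOB point concrete, this is roughly what the per-packet ancillary-data path looks like from userspace (a sketch using golang.org/x/sys/unix; the port and socket options are just examples). Every datagram pays for a recvmsg, a cmsg walk, and validation of each header before you get to the one field you actually wanted:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.Socket(unix.AF_INET, unix.SOCK_DGRAM, 0)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	// Ask the kernel to attach ancillary data that then has to be parsed
	// and validated on every single datagram.
	unix.SetsockoptInt(fd, unix.IPPROTO_IP, unix.IP_RECVTOS, 1)
	unix.SetsockoptInt(fd, unix.IPPROTO_IP, unix.IP_PKTINFO, 1)
	unix.Bind(fd, &unix.SockaddrInet4{Port: 9999}) // placeholder port

	buf := make([]byte, 65535)
	oob := make([]byte, 512)
	// Blocks until a datagram arrives.
	n, oobn, _, _, err := unix.Recvmsg(fd, buf, oob, 0)
	if err != nil {
		panic(err)
	}

	// Each control message has to be length-checked and matched on
	// (level, type) before its payload can be trusted.
	cmsgs, _ := unix.ParseSocketControlMessage(oob[:oobn])
	for _, c := range cmsgs {
		switch {
		case c.Header.Level == unix.IPPROTO_IP && c.Header.Type == unix.IP_TOS:
			fmt.Printf("got %d bytes, TOS/ECN byte %#x\n", n, c.Data[0])
		case c.Header.Level == unix.IPPROTO_IP && c.Header.Type == unix.IP_PKTINFO:
			fmt.Printf("got %d bytes, with pktinfo (%d data bytes)\n", n, len(c.Data))
		}
	}
}
```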
And we haven't even gotten to buffer constraints and contention yet, but the default UDP buffer memory available on most systems is woefully inadequate for these use cases today. TCP buffers were scaled up over time, but UDP buffers basically never were; they're still the conservative values of the late 90s/00s, really.
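As a quick check of that, a small Linux-oriented sketch (the 8 MiB figure is arbitrary): a large SO_RCVBUF request is silently clamped to net.core.rmem_max, which on many distros is still only a few hundred KB unless you raise the sysctl first:

```go
package main

import (
	"fmt"
	"net"
	"syscall"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Ask for an 8 MiB receive buffer; on Linux the kernel silently clamps
	// the request to net.core.rmem_max.
	conn.SetReadBuffer(8 << 20)

	// Read back what we actually got (Linux reports roughly double the
	// usable size to account for its own bookkeeping overhead).
	raw, _ := conn.SyscallConn()
	raw.Control(func(fd uintptr) {
		got, _ := syscall.GetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF)
		fmt.Printf("requested 8 MiB, SO_RCVBUF is now %d bytes\n", got)
	})
}
```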
The API we really need for this kind of UDP setup is one where you can do something like fork the fd, connect(2) it with a full route bind, and then fix the RSS/XSS challenges that come from this splitting. After that we need a submission-queue API (io_uring, RIO, etc.) rather than another BSD-sockets, ioctl-style mess. Sadly none of this is portable.
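Only the connect(2) half of that is expressible with portable APIs today. A sketch (addresses are placeholders): a connected UDP socket lets the kernel do the route lookup and peer checks once rather than per packet, though as noted above it only helps the side that connects:

```go
package main

import "net"

func main() {
	raddr := &net.UDPAddr{IP: net.ParseIP("192.0.2.10"), Port: 4433} // placeholder peer

	// Unconnected: every WriteToUDP is a sendto(2) carrying a destination,
	// so the kernel re-resolves the route and re-checks the address per packet.
	unconn, _ := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
	unconn.WriteToUDP([]byte("hello"), raddr)
	unconn.Close()

	// Connected: DialUDP issues connect(2), pinning the destination and the
	// cached route; sends become plain send(2), and datagrams from other
	// sources are filtered by the kernel instead of by us.
	conn, _ := net.DialUDP("udp", nil, raddr)
	conn.Write([]byte("hello"))
	conn.Close()
}
```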
On the crypto side there are KDF approaches that can remove a lot of the state cost involved. They're not popular, but some vendors are very taken with PSP for this reason; PSP becoming more widely known or used was largely held back by its various rejections in the IETF and in Linux. Vendors doing scale tests with it have clear numbers though: under high concurrency you can scale this much better than with the common TLS or TLS-like constructions.
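The KDF idea, very roughly (a sketch using HKDF from golang.org/x/crypto/hkdf, not the actual PSP construction, which derives per-SA keys in hardware from a device master key): keep one long-lived master secret and derive each per-flow key on demand, so per-flow state is tiny and cheap to reconstruct whenever a packet for that flow shows up:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// deriveFlowKey derives a per-flow AEAD key from a single master secret and a
// flow identifier (e.g. the 4-tuple). Nothing per-flow needs to be stored;
// the key can simply be re-derived as needed.
func deriveFlowKey(master, flowID []byte) ([]byte, error) {
	key := make([]byte, chacha20poly1305.KeySize)
	r := hkdf.New(sha256.New, master, nil /* salt */, flowID)
	if _, err := io.ReadFull(r, key); err != nil {
		return nil, err
	}
	return key, nil
}

func main() {
	master := make([]byte, 32) // placeholder: a real master key comes from a KEX or provisioning
	key, err := deriveFlowKey(master, []byte("198.51.100.7:4433->192.0.2.10:4433"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("per-flow key: %x\n", key)
}
```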
> spectre & meltdown.
I just measured. On my Ryzen 7 9700X, with Linux 6.12, it's about 50ns to call syscall(__NR_gettimeofday). Even post-spectre, entering the kernel isn't so expensive.
Are you sure that system call actually enters kernel mode? It might be one of the special ones that gets served from user space; I forget what they're called.
Those are only served from userspace if you call the libc wrappers. The syscall() function bypasses the wrappers.
VDSO
If it isn't a vDSO call, I don't think a 50ns figure should be possible.
No need to guess, it's 10 lines of code. And you can use bpftrace to watch the test program enter the kernel.
Using the libc wrapper will use the vdso. Using syscall() will enter the kernel.
I haven't measured, but calling the vdso should be closer to 5ns.
Someone else did more detailed measurements here:
https://arkanis.de/weblog/2017-01-05-measurements-of-system-...
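Not the exact program the poster used, but a sketch along the same lines in Go (Linux/amd64 only; numbers will vary with CPU, kernel, and mitigation settings). RawSyscall bypasses both libc and the vDSO, so every iteration really does enter the kernel, whereas time.Now() takes the vDSO fast path:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
	"unsafe"
)

func main() {
	const iters = 1_000_000
	var tv syscall.Timeval

	start := time.Now()
	for i := 0; i < iters; i++ {
		// Direct SYSCALL instruction; no vDSO shortcut.
		syscall.RawSyscall(syscall.SYS_GETTIMEOFDAY,
			uintptr(unsafe.Pointer(&tv)), 0, 0)
	}
	elapsed := time.Since(start)
	fmt.Printf("%.1f ns per gettimeofday syscall\n",
		float64(elapsed.Nanoseconds())/iters)
}
```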
I think you are just agreeing with me?
You are basically saying: “It is slow because of all these system/protocol decisions that mismatch what you need to get high performance out of the primitives.”
Which is my point. They are leaving, by my estimation, 10-20x performance on the floor due to external factors. They might be “fast given that they are bottlenecked by low-performance systems”, which is good, as their piece is not the bottleneck, but they are not objectively “fast”, as the primitives can be configured to solve a substantially similar problem dramatically faster if integrated correctly.
> I think you are just agreeing with me?
sure, i mean i have no goal of alignment or misalignment, i'm just trying to provide more insight into what's going on based on my own observations from having also worked on this udp path.
> Which is my point. They are leaving, by my estimation, 10-20x performance on the floor due to external factors. They might be “fast given that they are bottlenecked by low-performance systems”, which is good, as their piece is not the bottleneck, but they are not objectively “fast”, as the primitives can be configured to solve a substantially similar problem dramatically faster if integrated correctly.
yes, though this basically means we're talking about throwing out chunks of the os, the crypto design, and the protocol, plus a whole lot of tuning at each layer.
the only vendor in a good position to do this is apple (being the only vendor that owns every involved layer in a single product chain), and they're failing to do so as well.
the alternative is a long old road, where folks write articles like this from time to time, we share our experiences, and we hope that someone reading is inspired enough to be sniped into making incremental progress. it'd be truly fantastic if we sniped a group with the vigor and drive that the mptcp folks seem to have, as they've managed an unusually broad and deep push across a similar set of layered challenges (though still in progress).