Comment by cat_plus_plus
3 years ago
Modern programming does buffering at the class level rather than the system-call level. Even if Nagle solves the problem of sending lots of tiny packets, it doesn't solve the problem of making many inefficient system calls. Plus, the best buffer sizes and flush policy can only be determined by application logic. If I want smart lights to pulse in sync with music heard by a microphone, delaying to optimize network bandwidth makes no sense. So providing a raw interface with well-defined default behavior, and taking care of things like buffering in wrapper classes, is the right thing to do.
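For what it's worth, here is a minimal sketch of the kind of wrapper I mean (Python; the class name and flush threshold are made up for illustration):

```python
import socket

class BufferedSender:
    """Illustrative wrapper: the app picks the buffer size and the flush policy."""

    def __init__(self, sock: socket.socket, flush_at: int = 8192):
        self._sock = sock
        self._buf = bytearray()
        self._flush_at = flush_at  # app-chosen threshold, not a kernel default

    def send(self, data: bytes) -> None:
        # Small writes land in the userspace buffer, not in a system call.
        self._buf += data
        if len(self._buf) >= self._flush_at:
            self.flush()

    def flush(self) -> None:
        # One system call covers many small application-level writes.
        if self._buf:
            self._sock.sendall(self._buf)
            self._buf.clear()
```

The point is that send() costs nothing but a memory copy, and the application decides exactly when the kernel gets involved.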
> the best buffer sizes and flush policy can only be determined by application logic
That's not really true. The best result can be obtained by the OS, especially if you can use splice instead of explicit buffers. Or sendfile (sketch below). There's way too much logic in this to expect each app to deal with it, let alone with things it doesn't really know about, like current IO pressure or the buffering and caching for a given attached device.
Then there are things you just can't know about. You know your MTU, for example, but you won't be monitoring how it changes for a given connection. The kernel already knows how to scale the buffers appropriately, so it can do the flushes better than the app can. (If you're after throughput, not latency.)
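To make the sendfile point concrete, a rough sketch (Python on Linux; the host, port, and file name are placeholders):

```python
import socket

# Hypothetical example: stream a file to a peer without any userspace buffer.
with socket.create_connection(("example.com", 9000)) as sock, \
        open("payload.bin", "rb") as f:
    # socket.sendfile() uses os.sendfile() where available, so the kernel
    # moves the data itself and chooses its own transfer sizes.
    sock.sendfile(f)
```

The app never picks a buffer size at all; the kernel does, with full knowledge of the device and the connection.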
> The kernel already knows how to scale the buffers appropriately, so it can do the flushes better than the app can. (If you're after throughput, not latency.)
Well, how can the OS know if I'm after throughput or latency? It would be very wrong to simply assume that all, or even most, apps prioritize throughput; at modern network speeds throughput is often sufficient and user experience is dominated by latency (on both the consumer and server side). So, as the parent post says, this policy can only be determined by application logic, since the OS doesn't know what this particular app needs with respect to the throughput vs. latency tradeoff.
> how can the OS know if I'm after throughput or latency
Because you tell it by enabling / disabling buffering (Nagle).
And most apps do prefer throughput. The ones that don't are well aware that they prefer latency.
> since the OS doesn't know what this particular app needs with respect to the throughput vs. latency tradeoff
I think you're mixing up determining what you want (app choice) with how best to achieve it (OS information). I was responding to the parent talking about flushing and buffer sizes specifically.
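Concretely, the app states its choice with TCP_NODELAY and the OS handles the rest; a minimal sketch (the address is a placeholder):

```python
import socket

sock = socket.create_connection(("example.com", 9000))  # placeholder address

# Latency-sensitive app: opt out of Nagle so small writes go out immediately.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Throughput-oriented app: leave the default (Nagle enabled) and let the
# kernel coalesce small writes into fewer, larger segments.
```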
I kind of wonder if these applications are forced to do their own buffering because they have disabled Nagle's algorithm?
The old adage: people who attempt to avoid TCP end up reinventing TCP and re-learning the lessons of the 70s...
You missed the part about many inefficient system calls. You want buffering to happen before the thing that has a relatively high per-call overhead.
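A small sketch of one way to pay that overhead only once, assuming a POSIX system (the address and message pieces are made up): scatter-gather I/O hands several buffers to the kernel in a single call.

```python
import socket

sock = socket.create_connection(("example.com", 9000))  # placeholder address

header = b"\x00\x10"          # hypothetical message header
body = b"payload goes here"   # hypothetical message body

# sendmsg() passes both buffers to the kernel in one system call, so the
# per-call overhead is paid once rather than once per chunk.
sent = sock.sendmsg([header, body])
```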
If you want smart lights to pulse in sync with your microphone, you shouldn't be using TCP in the first place; UDP is a lot more suitable here.
TCP reconstructs packet order, meaning a glitch on a single packet propagates as delay for the packets that follow it, and in the worst case accumulates into serious congestion.
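For that kind of signal, a bare-bones UDP sketch (the address and payload format are made up) looks like this; a lost datagram is simply superseded by the next one instead of stalling everything behind it:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
LIGHT_ADDR = ("192.168.1.50", 5005)  # hypothetical smart-light endpoint

def send_level(level: int) -> None:
    # Fire-and-forget: if this datagram is dropped, the next beat replaces it,
    # so one loss never delays the packets that come after it.
    sock.sendto(level.to_bytes(1, "big"), LIGHT_ADDR)
```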
I talked a bit about that in the post. When you know the network is reliable, it's a non-issue. When you need to send a few small packets, disable Nagle's. When you need to send a bunch of tiny packets across an unknown network (a.k.a. the internet), use Nagle's.