Comment by viraptor
3 years ago
> best size of buffers and flush policy can only be determined by application logic
That's not really true. The best result can be obtained by the OS, especially if you can use splice, or sendfile, instead of explicit buffers. There's way too much logic involved to expect each app to deal with it, including things the app can't really know about, like current IO pressure or the buffering and caching behavior of a given attached device.
Then there are things you just can't know about. You know your MTU, for example, but you won't be monitoring changes to it for a given connection. The kernel knows how to scale the buffers appropriately already, so it can do the flushes in a better way than the app. (If you're after throughput, not latency.)
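You can see the kernel overriding the app's buffer choice directly. In this small sketch (my example, not from the thread), the app requests a 1-byte send buffer and the kernel grants something much larger; on Linux, `socket(7)` documents that the kernel doubles the requested `SO_SNDBUF` and enforces a minimum.

```python
# Sketch, assuming Linux: the kernel, not the app, decides the real buffer size.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1)  # app asks for 1 byte
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(granted)  # far larger than 1: the kernel clamped it to its minimum
```

With no `setsockopt` at all, Linux additionally autotunes the buffer per connection within the `tcp_wmem` bounds, which is the scaling behavior referred to above.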
> The kernel knows how to scale the buffers appropriately already, so it can do the flushes in a better way than the app. (If you're after throughput, not latency.)
Well, how can the OS know whether I'm after throughput or latency? It would be very wrong to assume that all, or even most, apps prioritize throughput. At modern network speeds throughput is often sufficient and user experience is dominated by latency (on both the consumer and server side). So, as the parent post says, this policy can only be determined by application logic, since the OS doesn't know what this particular app needs with respect to the throughput-vs-latency tradeoff.
> how can the OS know if I'm after throughput or latency
Because you tell it by enabling / disabling buffering (Nagle).
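Concretely, "telling it" means setting `TCP_NODELAY`, which disables Nagle's algorithm so small writes go out immediately instead of being coalesced for throughput. A minimal sketch (the helper name is mine):

```python
# Sketch: how an app signals latency preference to the kernel.
import socket

def make_latency_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle: send small segments right away rather than batching them.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

Leaving the option unset keeps Nagle's coalescing, i.e. the throughput-leaning default the parent describes.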
And most apps do prefer throughput. Those that don't tend to know very well that they prefer latency.
> since OS doesn't know about what this particular app needs with respect to throughput vs latency tradeoffs.
I think you're mixing up determining what you want (an app choice) with how best to achieve it (OS information). I was responding to the parent's point about flushing and buffer sizes specifically.