Comment by pclmulqdq
3 years ago
There's a very good reason clients delay ACKs: those ACKs cost data, and clients tend to have much higher download bandwidth than upload bandwidth. Really, clients should probably be delaying ACKs and nagling packets, while servers should probably be doing neither.
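A minimal sketch of the server side of that suggestion, assuming a Linux/BSD sockets environment (the helper name is made up; TCP_QUICKACK is Linux-only):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Hypothetical helper: configure a freshly accepted server-side socket so it
     * neither nagles nor delays its ACKs, per the comment above. */
    static int configure_server_socket(int fd)
    {
        int one = 1;

        /* Disable Nagle: small writes go out immediately instead of waiting
         * for the previous segment to be ACKed. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            return -1;

    #ifdef TCP_QUICKACK
        /* Linux-only: ACK incoming data immediately instead of waiting on the
         * delayed-ACK timer. The kernel clears this flag, so long-lived
         * connections need to re-set it after reads. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one)) < 0)
            return -1;
    #endif
        return 0;
    }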
Clients should not be nagling unless the connection is emitting tiny writes at high frequency. But that's a very odd thing to do, and in most/all cases there's some reasonable buffering occurring higher up in the stack that Nagle's algorithm will only add overhead to (sketched below). Making things worse are TCP-within-TCP things like HTTP/2.
Nagle's algorithm works great for things like telnet but should not be applied as a default to general-purpose networking.
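As a rough illustration of the kind of application-level buffering meant above (names and sizes are illustrative, not from the thread): the application coalesces its own small pieces and hands the TCP stack one large write, so Nagle has nothing left to batch:

    #include <string.h>
    #include <unistd.h>

    #define OUTBUF_CAP 16384

    struct outbuf {
        char   data[OUTBUF_CAP];
        size_t len;
    };

    /* Append one small piece; flush with a single large write() when full.
     * Assumes each piece fits in the buffer; short writes and error handling
     * are glossed over for brevity. */
    static int outbuf_append(struct outbuf *b, int fd, const void *piece, size_t n)
    {
        if (b->len + n > OUTBUF_CAP) {
            if (write(fd, b->data, b->len) < 0)
                return -1;
            b->len = 0;
        }
        memcpy(b->data + b->len, piece, n);
        b->len += n;
        return 0;
    }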
Why would Nagle's algorithm add delay to “reasonable buffering up the stack”? Assuming that buffering results in writes to the TCP stack greater than the packet size, Nagle's algorithm won't add any delay.
The only place where Nagle's algorithm adds delay is when you're doing many tiny writes to a socket, which is exactly the situation you believe Nagle's should be applied to.
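For contrast, a sketch of the "many tiny writes" pattern being described, assuming a plain blocking socket (illustrative only):

    #include <unistd.h>

    /* Telnet-style: one byte per write(). With Nagle enabled, bytes written
     * while a segment is still unacknowledged sit in the send buffer until
     * that ACK arrives; with TCP_NODELAY each byte is sent immediately in
     * its own segment. Return values ignored for brevity. */
    static void send_keystrokes(int fd, const char *keys, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            write(fd, &keys[i], 1);
    }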
The size of an ACK is minuscule (40 bytes) compared to any reasonable packet size (usually around 1400 bytes).
In most client situations you have high download bandwidth but limited upload, which means the vast majority of data is heading towards the client and the client isn't sending much outbound. In that case your client may end up delaying every ACK to the maximum timeout, simply because it doesn't often send reply data in response to a server response.
HTTP is a clear example of this. The client issues a request to the server, the server replies. The client accepts the reply but never sends any further data to the server. In this case, delaying the client's ACK is just a waste of time.
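If that client is on Linux, one workaround sketch is to request an immediate ACK around each read; TCP_QUICKACK is Linux-specific and gets cleared by the kernel, so this (made-up) wrapper re-arms it per recv():

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Hypothetical wrapper: ask for an immediate ACK before each read. On
     * platforms without TCP_QUICKACK this is just recv(). */
    static ssize_t recv_quickack(int fd, void *buf, size_t len)
    {
    #ifdef TCP_QUICKACK
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    #endif
        return recv(fd, buf, len, 0);
    }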