Comment by deathanatos
3 years ago
I'm not following.
Let's say the socket is set to TCP_NODELAY, and the transfer starts at 50 KiB/s. After a couple of seconds, shouldn't the application have easily outpaced the network and buffered enough data in the kernel that the socket's send buffer is full, so subsequent packets can be full-sized? What causes the small packets to persist?
This is the question I had from the start and I'm surprised that I had to scroll this far down.
Nagle's algorithm is about what to do when the send buffer isn't full. It is supposed to improve network efficiency in exchange for some latency. Why is it affecting throughput?
Is Linux remembering the sizes of the individual send() calls in the send buffer and, for some reason, still insisting on sending packets of those sizes? I can't imagine why it would do that; if anything, that sounds like a kernel bug to me.
For large transfers it likely still makes sense to always send full packets (until the end), as TCP_CORK does, but it seems like that should be unnecessary in most cases.