Comment by sph
3 years ago
Can any kernel engineer reading this explain why TCP_QUICKACK isn't enabled by default? Maybe it's time to turn it on by default, if delayed ACKs were just a workaround for old terminals.
3 years ago
> Any kernel engineer reading that can explain why TCP_QUICKACK isn't enabled by default? Maybe it's time to turn it on by default, if it was just a workaround for old terminals.
Enabling it would lead to more ACK packets being sent, which lowers TCP efficiency (the stack spends time processing ACK packets) and link utilization (those packets also take up space somewhere).
My thought is that the default behavior is probably correct: a receiver without knowledge of the application protocol cannot know whether follow-up data will arrive immediately, and is therefore not able to decide whether it should send an ACK or wait for more data. It could wait for a signal from userspace to send that ACK - which is exactly what TCP_QUICKACK does - but that comes with the drawback of now needing an extra syscall per read.
On the sender side the problem seems solvable more efficiently: if the application aggregates data and sends everything at once using an explicit flush signal (either via the TCP_CORK API or by enabling TCP_NODELAY), no extra syscall is required while latency stays minimal.
However, I think it's a good question whether the current delayed-ACK periods are still the best choice for the modern internet, or whether much smaller delays (e.g. 5 ms, or some fraction of the RTT) would be better.
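As an aside, on Linux the behavior can already be changed administratively without touching applications: `ip route` supports a per-route `quickack` attribute that disables delayed ACKs for matching destinations. The network and device below are placeholders.

```shell
# Sketch (requires root): disable delayed ACKs for one route.
# 192.0.2.0/24 and eth0 are placeholder values.
ip route change 192.0.2.0/24 dev eth0 quickack 1
```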