Comment by Ferret7446
3 years ago
> I prefer reliability over latency, always.
I imagine all the engineers who serve millions/billions of requests per second disagree with adding up to 200ms to each request, especially since their datacenter networks are reliable.
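For context, opting out is a per-socket thing, not a system-wide one. A minimal sketch of how those services typically do it (Python, loopback listener only so the demo is self-contained; any connected TCP socket works the same way):

```python
import socket

# Nagle is on by default; latency-sensitive code disables it per
# connection with TCP_NODELAY.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # ephemeral port, demo only
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
nodelay = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
```

After the `setsockopt` call, every `send()` on `cli` goes out without waiting for outstanding ACKs.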
> Send your chunk and let Nagle optimize it.
Or you could buffer writes yourself and save dozens or hundreds of expensive syscalls. If adding buffering makes your code unreadable, your code has bigger maintainability problems.
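For the record, "buffer yourself" can be as little as this (a sketch; the record sizes and flush threshold are made up):

```python
import socket

def send_records_unbuffered(sock, records):
    # One send() syscall per record: fine for a handful, wasteful
    # for thousands of tiny writes.
    for rec in records:
        sock.sendall(rec)

def send_records_buffered(sock, records, flush_at=16 * 1024):
    # Coalesce small records in user space and flush in big chunks --
    # roughly what Nagle does on the wire, minus the round-trip wait.
    buf = bytearray()
    for rec in records:
        buf += rec
        if len(buf) >= flush_at:
            sock.sendall(buf)
            buf.clear()
    if buf:
        sock.sendall(buf)
```

For 1000 records of 100 bytes, the buffered version makes a handful of syscalls instead of 1000, and the receiver sees the identical byte stream.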
I’ve done quite a bit of testing on my shitty network (plus a test bench using Docker and Pumba) in the last 24 hours — I’m not finished, so take the rest of this with a grain of salt. There will be a blog post about this in the near future, once I finish the analysis.
Random connection resets are much more likely when Nagle’s algorithm is disabled — 2-4x more likely in my tests, especially with larger payloads. Most devs just see “latency bad” without considering what Nagle actually buys you: the sender holds back a new small segment while earlier data is still unacknowledged, sending immediately only when it has a full segment (or nothing in flight). On poor networks you see terrible latency either way (even with Nagle disabled, 200-500ms is the norm), and with Nagle enabled the throughput is a bit higher than without, even with proper buffering on the application side.
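That send rule fits in a few lines — a rough paraphrase of RFC 896, with an assumed MSS; the real stack layers timers, PSH handling, and congestion control on top:

```python
MSS = 1448  # assumed max segment size, illustration only

def nagle_can_send_now(pending_bytes, bytes_unacked):
    # RFC 896, roughly: transmit immediately if there's a full
    # segment to send, or if no earlier data is awaiting an ACK;
    # otherwise hold the small segment until the in-flight data
    # is acknowledged (coalescing further small writes meanwhile).
    return pending_bytes >= MSS or bytes_unacked == 0
```

Disabling Nagle makes that function unconditionally return true, which is exactly why a burst of tiny writes turns into a burst of tiny packets.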