Comment by RugnirViking

3 years ago

my goodness. It (git-lfs, which triggered this investigation) essentially insists on sending each message as a tiny individual packet (resulting in umpteen thousands of them) instead of using TCP's built-in small-packet batching (Nagle's algorithm).

I believe it just emits at least one packet per 'write' system call. As long as your 'write' invocations cover larger blocks, I'd expect very little difference with TCP_NODELAY enabled or disabled. I've always assumed you want to limit system calls, so better practice is to encode into a buffer and invoke 'write' on larger blocks. So this feels like a combination of issues.

Regardless, overriding a socket default like this should be well documented by Go if that's the intended behavior.

If you want to buffer, you can still buffer. There's no advantage to letting the OS do it, and there are decades of documented disadvantages.

Whether this is the right or wrong thing depends 100% on what you’re trying to do. For many applications you want to send your message immediately because your next message depends on the response.

  • Very rarely is this the case. From the application's perspective, yes. From a packet perspective… no. The interface is going to send packets, and they'll end up in a few buffers after going through some wires. If something goes wrong along the way, they'll be retransmitted. But the packets don't care about the response, except for an acknowledgment that they were received. If you send 4000-byte messages when the MTU is 9000, you're wasting perfectly good capacity. If you had Nagle's turned on, you'd send one 8040-byte packet. With Nagle's you don't have to worry about the MTU: you write your data to the kernel and the rest is magically handled for you.