Comment by eichin

2 months ago

Probably either (1) they don't request another JPEG until they have the previous one on-screen (so everything is completely serialized and there are never any frames "in flight"), or (2) they're doing a fresh GET for each frame and getting a new connection anyway (unless that kind of thing is pipelined these days? In which case it still falls back to (1) above.)
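
A minimal sketch of case (1) in Python: a pull-style client that never has more than one frame in flight. The URL and the display() stub are hypothetical stand-ins.

    import urllib.request

    def display(jpeg_bytes: bytes) -> None:
        ...  # hypothetical: hand the frame to whatever renders it

    while True:
        # Each urlopen() is a fresh GET; the next request isn't issued
        # until the previous frame has been fully read and put on-screen.
        with urllib.request.urlopen("http://camera.local/frame.jpg") as resp:
            display(resp.read())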

You can still get this backpressure properly even if you're doing it push-style. The TCP socket will eventually fill its send buffer and start blocking your writes. When that happens, you stop encoding new frames until the socket is able to send again.

The trick is to not buffer frames on the sender.
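A minimal sketch of that push-style loop, assuming hypothetical grab_frame()/encode_jpeg() helpers and a plain blocking TCP socket:

    import socket

    def grab_frame() -> bytes:
        ...  # hypothetical: capture the most recent camera frame

    def encode_jpeg(frame: bytes) -> bytes:
        ...  # hypothetical: JPEG-encode the frame

    def stream(sock: socket.socket) -> None:
        # A connected, *blocking* TCP socket: sendall() blocks once the
        # kernel send buffer fills, so a slow receiver throttles the loop.
        while True:
            data = encode_jpeg(grab_frame())
            # No sender-side frame queue: if sendall() blocks, we simply
            # don't grab or encode another frame until the receiver
            # drains the socket.
            sock.sendall(data)

Shrinking SO_SNDBUF via setsockopt() also bounds how much can queue in the kernel before writes block, which bounds the latency you add on your own box.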

  • You probably won't get acceptable latency this way, since you have no control over the buffer sizes on all the boxes between you and the receiver. Bufferbloat is a real problem. That said, yeah, if you're getting 30-45 seconds behind at 40 Mbps, you've probably got a fair bit of sender-side buffering happening.

    • > you have no control over buffer sizes on all the boxes between you and the receiver

      You certainly do; the amount of data buffered can never be larger than the actual number of bytes you've sent out. Bufferbloat happens when you send too much stuff at once and nothing (typically that would be either the congestion window or some intermediate queue limit) stops it from piling up in an intermediate buffer. If you just send less from userspace in the first place (which isn't a good idea for, e.g., a typical web server, but _can_ be for this kind of videoconference-like application), it can't pile up anywhere.

      (You could argue that, strictly speaking, you have no control over the _buffer_ sizes, but that doesn't matter in practice if you're bounding the amount of _buffered data_.)
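
      One Linux-specific sketch of bounding the buffered data from userspace: SIOCOUTQ (the same ioctl value as termios.TIOCOUTQ) reports how many bytes are still sitting in the socket's send queue, and you can drop frames when that number gets too high. The threshold here is an arbitrary example.

          import fcntl
          import struct
          import termios

          def unsent_bytes(sock) -> int:
              # SIOCOUTQ == TIOCOUTQ on Linux: bytes queued in the send
              # buffer that the peer has not yet acknowledged.
              raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ,
                                struct.pack("i", 0))
              return struct.unpack("i", raw)[0]

          MAX_QUEUED = 256 * 1024  # arbitrary example bound on in-flight data

          def maybe_send_frame(sock, jpeg: bytes) -> bool:
              # Bound the *buffered data*, not the buffers: drop this frame
              # if too much of the previous ones is still queued locally.
              if unsent_bytes(sock) > MAX_QUEUED:
                  return False
              sock.sendall(jpeg)
              return True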