Comment by mikepavone
17 hours ago
> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.
This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that the way they implemented their JPEG screenshots gives them some mechanism that minimizes the number of frames in flight. That is not an inherent property of JPEG, though.
> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.
h.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an h.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.
Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use WebSockets when dumb corporate firewall rules demand it, and just use WebRTC for everything else.
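Roughly the idea, as a hedged sketch (the ack mechanism, knob names, and thresholds below are made up for illustration, not anything from the article's actual pipeline):

```ts
// Hypothetical latency-based rate control: timestamp each frame, watch how
// long the receiver takes to acknowledge it, and back off when that
// transmission latency climbs above the best value seen so far.
const MIN_BITRATE = 250_000;    // 250 kbps floor
const MAX_BITRATE = 8_000_000;  // 8 Mbps ceiling

let targetBitrate = 2_000_000;  // seeded by an initial bandwidth probe
let baselineLatencyMs = Infinity;

function onFrameAcked(sentAtMs: number, ackedAtMs: number): void {
  const latencyMs = ackedAtMs - sentAtMs;
  baselineLatencyMs = Math.min(baselineLatencyMs, latencyMs);

  if (latencyMs > baselineLatencyMs * 1.5) {
    // Latency is climbing: queues are filling somewhere, so back off.
    targetBitrate = Math.max(MIN_BITRATE, targetBitrate * 0.8);
  } else {
    // Latency is near its floor: cautiously probe for more bandwidth.
    targetBitrate = Math.min(MAX_BITRATE, targetBitrate * 1.05);
  }
  // Feed targetBitrate into whatever knob you actually have: encoder
  // bitrate, JPEG quality, or frame rate.
}
```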
They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading. UDP is not necessary to write a loop.
> They shared the polling code in the article. It doesn't request another jpeg until the previous one finishes downloading.
You're right, I don't know how I managed to skip over that.
> UDP is not necessary to write a loop.
True, but this doesn't really have anything to do with using JPEG either. They basically implemented a primitive form of rate control by only allowing a single frame to be in flight at once. It was easier for them to do that with JPEG because they (by their own admission) seem to have limited control over their encode pipeline.
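Something like this, as a minimal sketch (the endpoint and handler are hypothetical, not the article's actual code):

```ts
// "One frame in flight" rate control: never ask for the next JPEG until the
// previous one has fully downloaded. On a slow link the awaits below become
// the rate limiter, so frames get less frequent instead of piling up.
async function pollScreenshots(url: string, onFrame: (jpeg: Blob) => void) {
  while (true) {
    const response = await fetch(url);
    const jpeg = await response.blob(); // resolves only once the whole JPEG is here
    onFrame(jpeg);
  }
}
```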
> have limited control over their encode pipeline.
Frustratingly, this seems common across video encoding stacks. The code is opaque, often relies on special kernel, GPU, and hardware interfaces that are closed source, and by the time you get to the user-facing API (native or browser) all the knobs have been abstracted away, so simple things like choosing which frame to use as a keyframe are impossible.
I had what I thought was a simple use case for a video codec: I needed to encode two 30-frame videos as small as possible, and I knew the first 15 frames were common between the videos, so I shouldn't have needed to encode them twice.
I couldn't find a single video codec which could do that without extensive internal surgery to save all internal state after the 15th frame.
So for US->Australia/Asia, wouldn't that limit you to roughly 6 fps due to the half-RTT? Each time a frame finishes arriving, you have ~150 ms of dead time before your new request even reaches the server.
That sounds fine for most screen sharing use cases.
Probably either (1) they don't request another JPEG until they have the previous one on-screen (so everything is completely serialized and there are never any frames "in flight"), or (2) they're doing a fresh GET for each frame and getting a new connection anyway (unless that kind of thing gets pipelined these days, in which case it still falls back to (1) above).
You can still get this backpressure properly even if you're doing it push-style. The TCP socket will eventually fill up its buffer and start blocking your writes. When that happens, you stop encoding new frames until the socket is able to send again.
The trick is to not buffer frames on the sender.
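As a rough sketch of that push-style approach (Node-flavoured, where "blocking" shows up as waiting for the 'drain' event; encodeLatestFrame is a stand-in for whatever grabs and encodes the screen):

```ts
import { Socket } from "node:net";
import { once } from "node:events";

// Push frames with no sender-side queue: if the TCP send buffer is full,
// stop encoding until the socket drains, then encode the *latest* frame
// rather than anything stale.
async function pushFrames(socket: Socket, encodeLatestFrame: () => Buffer) {
  while (!socket.destroyed) {
    const frame = encodeLatestFrame();
    const hasRoom = socket.write(frame);
    if (!hasRoom) {
      // Backpressure: the socket buffer is full, so wait here instead of
      // buffering frames in our own code.
      await once(socket, "drain");
    }
  }
}
```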
You probably won't get acceptable latency this way since you have no control over buffer sizes on all the boxes between you and the receiver. Buffer bloat is a real problem. That said, yeah if you're getting 30-45 seconds behind at 40 Mbps you've probably got a fair bit of sender-side buffering happening.
Related tangent: it's remarkable to me how a given jpeg can be literally visually indistinguishable from another (by a human on a decent monitor) yet consist of 10-15% as many bytes. I got pretty deep into web performance and image optimization in the late 2000s and it was gratifying to have so much low-hanging fruit.
> Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
They said playing around with bitrate didn't reduce the latency; all that happened was they got blocky videos with the latency remaining the same.
I am almost sure that the ideal solution would involve a proper video codec, but the issue is implementation complexity, and potentially having to implement a production encoder yourself if your use case is unusual.
This is exactly the point of the article: they tried keyframes only, but their library had a bug that broke it.
Regarding the encoding efficiency, I imagine the problem is that the compromise in quality shows up in the space dimension (i.e., fewer or blurrier pixels) rather than in time. Users need to read text clearly, so the compromise in the time dimension (fewer frames) sounds just fine.
Nothing is stopping you from encoding H.264 at a low frame rate like 5 or 10 fps. In WebRTC, you can actually specify how you want to handle low-bitrate situations with degradationPreference. If set to maintain-resolution, it will prefer sacrificing frame rate.
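For example, something along these lines in the browser (hedged: degradationPreference isn't exposed or honoured by every browser, and some TypeScript DOM typings omit it, hence the cast):

```ts
// Ask WebRTC to sacrifice frame rate rather than resolution when bandwidth
// is tight, which suits screen sharing where text legibility matters most.
async function preferSharpFrames(pc: RTCPeerConnection): Promise<void> {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;

  const params = sender.getParameters();
  (params as RTCRtpSendParameters & { degradationPreference?: string })
    .degradationPreference = "maintain-resolution";
  await sender.setParameters(params);
}
```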