Comment by AndrewKemendo
1 year ago
Seems like preventing data persistence (replace, delete) was chosen over minimizing bandwidth (no optimization)
But you could easily do both if you wanted to - though I’m not sure it’s worth the hassle. I agree that this might struggle if used at scale on the same IP
Not only that. JPEG works best on natural-looking images, with gradients, curves, continuous and wide color variation, etc. Computer screens very often show entirely different kinds of images, dominated by a few flat colors, small details (like text), and sharp edges. That is, exactly the "high-frequency noise" JPEG is built to throw away.
JPEG either makes "smeared" screenshots or low-compression screenshots. PNG often works better.
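A quick way to see why PNG tends to win here: PNG's entropy stage is deflate, which thrives on long runs and repetition. A minimal sketch (synthetic data, raw zlib rather than real PNG encoding) comparing a flat "screen-like" buffer against a noisy "photo-like" one:

```python
# Sketch: why deflate (PNG's compressor) loves screen content.
# Two synthetic 256x256 grayscale "images" as raw bytes: one screen-like
# (flat background plus a flat rectangle), one photo-like (gradient plus
# per-pixel noise). This is just the entropy-coding stage, not real PNG.
import random
import zlib

W = H = 256
random.seed(0)

# Screen-like: white background with a flat dark rectangle (a "window").
screen = bytearray(255 for _ in range(W * H))
for y in range(50, 200):
    for x in range(30, 220):
        screen[y * W + x] = 40

# Photo-like: horizontal gradient with random per-pixel noise.
photo = bytearray(
    min(255, max(0, x * 255 // W + random.randint(-20, 20)))
    for y in range(H) for x in range(W)
)

screen_size = len(zlib.compress(bytes(screen), 9))
photo_size = len(zlib.compress(bytes(photo), 9))
print(screen_size, photo_size)  # the flat content shrinks far more
```

The flat buffer collapses to a tiny fraction of its size, while the noisy one barely compresses - which is roughly the JPEG-vs-PNG screenshot situation in miniature.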
A proper video codec mostly sends the small changes between frames (including shifts, like scrolling), and relatively rare key frames. It could give both better visual quality and better bandwidth usage.
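The "send only the changes" part can be sketched in a few lines - a toy tile-diff, not a real codec, with made-up tile size and no motion compensation:

```python
# Toy sketch of the keyframe + delta idea: split each grayscale frame
# into 16x16 tiles and transmit only tiles that changed since the
# previous frame. A real codec adds motion search, DCT, entropy coding.
TILE = 16

def tiles(frame, w, h):
    """Yield (x, y, tile_bytes) for a row-major grayscale frame."""
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            tile = bytes(frame[(ty + dy) * w + tx + dx]
                         for dy in range(min(TILE, h - ty))
                         for dx in range(min(TILE, w - tx)))
            yield tx, ty, tile

def delta(prev, cur, w, h):
    """Return only the tiles of `cur` that differ from `prev`."""
    prev_tiles = {(x, y): t for x, y, t in tiles(prev, w, h)}
    return [(x, y, t) for x, y, t in tiles(cur, w, h)
            if prev_tiles[(x, y)] != t]

# Demo: 64x64 frame (16 tiles), one pixel changes between frames.
w = h = 64
frame1 = bytearray(w * h)
frame2 = bytearray(frame1)
frame2[0] = 255                      # touch a single pixel
changed = delta(frame1, frame2, w, h)
print(len(changed))                  # only 1 of 16 tiles needs sending
```

Scrolling defeats a naive diff like this, which is why real codecs also search for shifted blocks.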
What's interesting in the "screenshot per second" solution is that it can be hacked together from common existing pieces, like ImageMagick, netcat, and bash; no need to install anything. (Imagine you've got privilege-limited access to a remote box, and maybe cannot even write to disk! Oh wait...)
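Something like the following, as a rough sketch - it assumes an X11 session, ImageMagick's `import`, and a receiver along the lines of `nc -l 9000 > stream` on the other end; host and port are made up:

```shell
# Sketch of the "screenshot per second over netcat" pipeline.
# One TCP connection per screenshot; never touches the local disk.
send_screenshots() {
  local host=$1 port=$2
  while :; do
    # Grab the root window as PNG on stdout and ship it over TCP.
    import -window root png:- | nc "$host" "$port"
    sleep 1
  done
}
# send_screenshots 192.0.2.1 9000   # example invocation (not run here)
```

In practice the receiver would need to loop too, since each screenshot opens a fresh connection.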
The problem with the JPEG vs. PNG debate for screenshots is that screenshots can contain anything from photos to text to UI elements to frames of video.
Just open any website and you'll see text right beside photos, or text against a photographic backdrop, often in the middle of being moved around with hardware-accelerated CSS animations.
I think we need an image container format that can use different compression algorithms for different regions or "layers" of the image, and an encoder that quickly detects how to slice up a screenshot into arbitrary layers. Both should be possible with modern tech. I just hope the resulting format isn't patent-encumbered.
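The "detect how to slice up a screenshot" step could start as simply as counting colors per tile. A toy sketch of such an encoder front-end - tile size, threshold, and codec names are all made up for illustration:

```python
# Toy sketch of a region classifier for an adaptive screenshot codec:
# tiles with few distinct colors (text, UI chrome) get routed to a
# lossless codec; busy tiles (photos) to a DCT-style lossy one.
import random

TILE = 32
FLAT_THRESHOLD = 16   # hypothetical cutoff for "screen-like" tiles

def classify_tiles(pixels, w, h):
    """pixels: row-major list of color values. Returns {(x, y): codec}."""
    plan = {}
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            colors = {pixels[(ty + dy) * w + tx + dx]
                      for dy in range(min(TILE, h - ty))
                      for dx in range(min(TILE, w - tx))}
            plan[tx, ty] = ("lossless" if len(colors) <= FLAT_THRESHOLD
                            else "lossy")
    return plan

# Demo: left half is flat "UI", right half is pseudo-random "photo".
random.seed(1)
w = h = 64
pixels = [0 if x < 32 else random.randrange(256)
          for y in range(h) for x in range(w)]
plan = classify_tiles(pixels, w, h)
print(plan[(0, 0)], plan[(32, 0)])  # lossless lossy
```

A real encoder would also want edge detection and inter-tile merging, but even this crude split captures the text-next-to-photo case.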
Completely agree. JPEG-only is insufficient. PNG-only is insufficient. An adaptive codec would apply the right algorithm to each area depending on its properties.
I suppose that the more modern video compression algorithms already apply such image analysis, to an extent. I don't know how e.g. VNC or RDP work, but it would be natural for them to have provisions like that to save bandwidth / latency, which is often in shorter supply than computing power.
Of existing still image codecs, JPEG XL seems to have the right properties[1]: the ability to split an image into areas and/or layers, and the ability to encode different areas either with DCT or losslessly. But these are capabilities of the format; I don't know how well existing encoder implementations can use them.
[1]: https://en.wikipedia.org/wiki/JPEG_XL#Technical_details
You are reinventing PDF.