Comment by nine_k

1 year ago

Completely agree. JPEG-only is insufficient, and PNG-only is insufficient. An adaptive codec would apply the right algorithm to each area depending on its properties.

I suppose that the more modern video compression algorithms already apply such image analysis, to an extent. I don't know how e.g. VNC or RDP work, but it would be natural for them to have provisions like that to save bandwidth / latency, which is often in shorter supply than computing power.

Of existing still image codecs, JPEG XL seems to have the right properties[1]: the ability to split an image into areas and/or layers, and the ability to encode different areas either with DCT or losslessly. But these are capabilities of the format; I don't know how well existing encoder implementations actually use them.

[1]: https://en.wikipedia.org/wiki/JPEG_XL#Technical_details
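
To illustrate the kind of per-area decision such an adaptive encoder could make, here's a toy Python sketch. The 64-pixel tile size, the distinct-color threshold, and the function itself are invented for illustration; they are not taken from JPEG XL or any real encoder:

    import numpy as np

    def classify_tiles(image: np.ndarray, tile: int = 64):
        """Toy per-region mode decision, assuming an H x W x 3 uint8 RGB
        array. Tiles with few distinct colors (flat UI, text on a solid
        background) are marked 'lossless'; busy photographic tiles 'dct'."""
        h, w = image.shape[:2]
        decisions = []
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                block = image[y:y + tile, x:x + tile]
                # Screenshots of text/UI typically contain few distinct
                # colors per tile; photographic content contains many.
                colors = len(np.unique(block.reshape(-1, 3), axis=0))
                decisions.append((x, y, "lossless" if colors < 32 else "dct"))
        return decisions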

> how RDP work

Uses a combination of different technologies [0]. MS-RDPBCGR is at the base of it all, sort of like the main event loop [1]. MS-RDPEGDI looks at the actual drawing commands and optimizes them on the fly [2]. Then there's MS-RDPEDC for desktop composition optimizations [3]. Plus a bunch of other bits and pieces, like MS-RDPRFX, which adds lossy (RemoteFX) compression [4].

In RDP you don't get to play only with bitmap or image-stream data, but with the actual interactions happening on the screen. You could know, for example, that the user right-clicked a desktop item; then you send and render only the pop-up menu for that, and track and draw the mouse actions inside that "region" only (a toy sketch of this kind of region tracking follows the references below).

[0] https://learn.microsoft.com/en-us/openspecs/windows_protocol...
[1] https://learn.microsoft.com/en-us/openspecs/windows_protocol...
[2] https://learn.microsoft.com/en-us/openspecs/windows_protocol...
[3] https://learn.microsoft.com/en-us/openspecs/windows_protocol...
[4] https://learn.microsoft.com/en-us/openspecs/windows_protocol...
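
Here's a minimal sketch of pixel-level damage tracking, the crude cousin of what RDP does at the drawing-command level. The function name and frame representation are invented for illustration:

    import numpy as np

    def dirty_rect(prev: np.ndarray, curr: np.ndarray):
        """Bounding box (x, y, w, h) of pixels that changed between two
        H x W x C frames, or None if nothing changed. A stand-in for the
        damage tracking a remote-desktop server might do; real RDP works
        at the level of drawing commands, not pixel diffs."""
        changed = np.any(prev != curr, axis=-1)
        ys, xs = np.nonzero(changed)
        if xs.size == 0:
            return None
        x0, x1 = xs.min(), xs.max() + 1
        y0, y1 = ys.min(), ys.max() + 1
        return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

Only the dirty rectangle would then be compressed and sent; everything outside it stays untouched on the client.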

The state of the art here is really Parsec, Moonlight, and Apple's "High Performance Screen Sharing" [0]. All three use hardware-accelerated HEVC in some UDP encapsulation. Under the right network conditions they achieve very crisp text: 4K60 4:4:4 with low latency.

[0]: https://support.apple.com/guide/mac-help/screen-sharing-type...
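
None of these document their exact encapsulation publicly, but the basic idea can be sketched: split the encoder's Annex-B HEVC output into NAL units and push each one in a UDP datagram. This is deliberately naive; real protocols (RTP and friends) add sequence numbers, timestamps, fragmentation of NALs larger than one datagram, and loss recovery:

    import socket

    def send_annexb_over_udp(hevc_bytes: bytes, addr=("127.0.0.1", 5004)):
        """Naive illustration only: NAL units in an Annex-B stream are
        delimited by 0x000001 / 0x00000001 start codes."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for chunk in hevc_bytes.split(b"\x00\x00\x01"):
            # Crude: rstrip also absorbs the extra 0x00 of 4-byte start
            # codes; a real depacketizer parses start codes properly.
            nal = chunk.rstrip(b"\x00")
            if nal:
                sock.sendto(nal, addr)  # fails for NALs > ~64 KB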

  • Are you suggesting that HEVC can adapt its compression for different regions of the same frame, similar to JPEG XL? I don't think this is possible, but I would love to be proven wrong.

    • Yep, this is achieved using slices (and tiles), which partition the frame into regions of coding tree units. Each slice can carry its own quantization parameters (ranging from highly lossy to perceptually lossless). Each slice can also switch between intra-frame prediction (more like still-image encoding) and inter-frame prediction (relative to prior frames).

      So, with this, you can have high-quality static text in one region of the frame while there is lossy motion encoding (e.g. for an animating UI element) in another region of the frame.
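
      As a toy sketch of the encoder-side decision (helper names and QP values invented; real encoders expose this through their rate-control / adaptive-quantization APIs rather than literal per-slice setters):

          STATIC_QP = 8    # near-lossless for still text
          MOTION_QP = 30   # cheaper lossy encoding for animating regions

          def slice_params(slices, motion_mask):
              """For each slice (a set of CTU indices), pick a QP and a
              slice type based on whether its CTUs changed since the last
              frame. Simplified: a real encoder would also skip unchanged
              CTUs entirely instead of re-coding them as intra."""
              return [
                  {"qp": MOTION_QP, "type": "P"} if any(motion_mask[c] for c in s)
                  else {"qp": STATIC_QP, "type": "I"}
                  for s in slices
              ]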
