Comment by amiga386

6 hours ago

We're currently indulging a hypothetical: the idea of AI being used either to improve the quality of streamed video or to provide the same quality at a lower bitrate, so the focus is what both ends of the codec could agree on.

The coding side of "codec" needs to know what the decoding side would add back in (the hypothetical AI upscaling), so it knows where it can skimp and still get a good "AI" result, versus where it has to be generous in allocating bits because the "AI" hallucinates too badly to meet the quality requirements.

You'd also want it specified, so that any encoding displays the same on any decoder, and you'd want it in hardware, as most devices that display video rely on dedicated decoders to play it at full frame rate and/or not drain their battery. If it's not in hardware, it's not going to be adopted. It is possible to have different profiles, so a "baseline" profile could leave out the AI upscaler, at the cost of needing a higher bitrate to maintain quality, or dropping to a lower quality if the bitrate isn't there.
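
In other words, the encoder would have to run the same specified upscaler the decoder will run, and fold it into its rate decisions. A minimal sketch of that loop, assuming hypothetical encode/decode/upscale/quality hooks (none of these are a real codec API):

    def allocate_bits(block, bit_budgets, encode, decode, upscale, quality, target_quality):
        """Pick the cheapest bit budget that still meets the quality target
        *after* the decoder-side AI upscaler has done its work.
        All callables are hypothetical placeholders:
          encode(block, bits) -> bitstream
          decode(bitstream)   -> low-res reconstruction
          upscale(frame)      -> the SAME upscaler the decoder will run
          quality(a, b)       -> similarity score (higher is better)
        """
        for bits in sorted(bit_budgets):  # try the cheapest allocation first
            reconstructed = upscale(decode(encode(block, bits)))
            if quality(block, reconstructed) >= target_quality:
                return bits  # upscaler fills in the rest well enough: skimp here
        return max(bit_budgets)  # upscaler hallucinates too badly: be generous

The point is that upscale has to be the exact upscaler the spec mandates on the decoder side; if encoder and decoder disagree about it, the encoder's bit-allocation decisions don't transfer.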

Separating out codec from upscaler, and having a deliberately low-resolution / low-bitrate stream be naively "AI upscaled", would, IMHO, look like shit. It's already a trend in computer games to render at a lower resolution and have dedicated graphics card hardware "AI upscale" it (DLSS, FSR, XeSS, PSSR), because rendering modern graphics at 4K consistently at 60fps is just too much work. But the result, IMHO, glitches noticeably and distractingly all the time.