Comment by amiga386

5 hours ago

> And in the case of many others, it makes a very significant difference.

This is very true, but we're talking about an entertainment provider's choice of codec for streaming to millions of subscribers.

A security recording device's choice of codec ought to be very different, perhaps even regulated to exclude codecs which could "hallucinate" high-definition detail not present in the raw camera data, and the limitations of the recording media need to be understood by law enforcement. We've had similar problems since the introduction of tape recorders, VHS and so on; they always need to be worked out. Even the Phantom of Heilbronn (https://en.wikipedia.org/wiki/Phantom_of_Heilbronn) turned out to be DNA contamination of swabs by someone who worked for the swab manufacturer.

I don't understand why it needs to be a part of the codec. Can't Netflix use relatively low bitrate/resolution AV1 and then use AI to upscale or add back detail in the player? Why is this something we want to do in the codec and therefore set in stone with standard bodies and hardware implementations?

  • We're currently indulging a hypothetical: the idea of AI being used either to improve the quality of streamed video, or to provide the same quality at a lower bitrate, so the focus is on what both ends of the codec could agree on.

    The coding side of "codec" needs to know what the decoding side would add back in (the hypothetical AI upscaling), so it knows where it can skimp and still get a good "AI" result, versus where it has to be generous in allocating bits because the "AI" hallucinates too badly to meet the quality requirements. You'd also want it specified, so that any encoding displays the same on any decoder, and you'd want it in hardware, as most devices that display video rely on dedicated decoders to play it at full frame rate and/or not drain their battery. If it's not in hardware, it's not going to be adopted. It is possible to have different encodings, so a "baseline" encoding could leave out the AI upscaler, at the cost of needing a higher bitrate to maintain quality, or switching to a lower quality if the bitrate isn't there.

    Separating out the codec from the upscaler, and having a deliberately low-resolution / low-bitrate stream be naively "AI upscaled", would, IMHO, look like shit. It's already a trend in computer games to render at lower resolution and have dedicated graphics card hardware "AI upscale" (DLSS, FSR, XeSS, PSSR), because 4k resolutions are just too much work to render modern graphics consistently at 60fps. But the result, IMHO, noticeably and distractingly glitches and errors all the time.
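    The encoder-side reasoning above can be sketched in a few lines. This is purely illustrative: real codecs work on transform blocks with rate-distortion optimisation, not 1-D signals, and every name here (`toy_upscale`, `encode_block`, the error threshold) is hypothetical. The point is only that an in-loop, specified upscaler lets the encoder simulate the decoder's reconstruction and spend bits solely where that reconstruction would fail.

```python
# Toy sketch of "encoder knows what the decoder's upscaler will add back".
# The upscaler here is plain linear interpolation standing in for an "AI" model.

def downsample(block):
    """Keep every other sample: the skimped, low-bitrate version."""
    return block[::2]

def toy_upscale(low):
    """Stand-in for the decoder-side upscaler (linear interpolation)."""
    out = []
    for a, b in zip(low, low[1:] + low[-1:]):
        out.append(a)
        out.append((a + b) / 2)
    return out

def encode_block(block, max_error=10.0):
    """Simulate the decoder's reconstruction; skimp only if it's close enough."""
    low = downsample(block)
    predicted = toy_upscale(low)
    error = max(abs(p - o) for p, o in zip(predicted, block))
    if error <= max_error:
        return ("skimp", low)    # the upscaler fills the detail back in
    return ("full", block)       # reconstruction too wrong: be generous with bits

smooth = [10, 12, 14, 16, 18, 20, 22, 24]  # smooth gradient: upscales well
edges  = [0, 0, 0, 100, 100, 0, 0, 100]    # sharp detail: upscales badly

print(encode_block(smooth)[0])  # skimp
print(encode_block(edges)[0])   # full
```

    Note that this only works because both sides agree on `toy_upscale`: if the player were free to swap in its own upscaler, the encoder's error estimate would be meaningless, which is the argument for putting it in the codec spec rather than the player.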