Yet I still only got hardware support for it on my first devices last year. The downside of "rapid" iteration on video codecs is that content always needs to be stored in multiple formats (or, alternatively, battery life on the client suffers from software playback, which is the route e.g. YouTube seems to prefer).
Hopefully that improves. The guy giving the presentation on AV2 made it clear that there was "rigorous scrutiny for hardware decoding complexity", and that they were advised by Realtek and AMD on this.
So it seems like they checked that all their ideas could be implemented efficiently in hardware as they went along, with advice from real hardware producers.
Hopefully AV2-capable hardware will appear much quicker than AV1-capable hardware did.
Oh, I don't doubt that it'll be hardware implementable, but it's a shame that current hardware is usually mostly out of luck with new codecs. (Sometimes parts can be reused in more programmable/compartmentalized decoding pipelines, but I haven't seen that often.)
I have to wonder whether PCIe devices that do hardware encoding / decoding might be the more viable path going forward?
Wait, I just discovered GPUs, nevermind. [giggles]
Still, the ability to do specialized work should probably be offloaded to specialized but pluggable hardware. I wonder what the economics of this would be...
It'd be really cool if we had 'upgradable codec FPGAs' in our machines that you could just flash with the newest codec... but that'd probably be noticeably more expensive, and it's also not really in the interest of the manufacturers, who want reasons to sell new chips.
Back in ~2004, I worked on a project to define a codec virtual machine, with the goal of each file being able to reference the standard it was encoded against, along with a link to a reference decoder built for that VM. My thought was that you could compile that codec for the system you were running on and decode in software, or if a sufficient DSP or FPGA was available, target that.
While it worked, I don't think it ever left my machine, and it never moved past software decoding -- I was a broke teen with no access to non-standard hardware. But the idea has stuck with me and feels more relevant than ever, with the proliferation of codecs we're seeing now.
It has the Sufficiently Smart Compiler problem baked in, but I tried to define things to be SIMD-native from the start (which could be split however it needed to be for the hardware) and I suspect it could work. Somehow.
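For illustration, the "file references its own decoder" idea could be sketched as a self-describing container header like the one below. This is purely a hypothetical sketch, not the original project's format; all field names and layout are invented here.

```python
import struct

# Hypothetical container header: the file names the codec standard it was
# encoded against and points at a reference decoder built for the VM.
MAGIC = b"CVM1"

def write_header(buf, codec_id: str, decoder_url: str) -> None:
    """Serialize the header: magic, then length-prefixed codec ID and URL."""
    codec = codec_id.encode()
    url = decoder_url.encode()
    buf.write(MAGIC)
    buf.write(struct.pack("<H", len(codec)))
    buf.write(codec)
    buf.write(struct.pack("<H", len(url)))
    buf.write(url)

def read_header(buf):
    """Parse the header back; a player would then fetch the referenced
    decoder and compile it for its CPU, DSP, or FPGA."""
    assert buf.read(4) == MAGIC, "not a codec-VM container"
    (n,) = struct.unpack("<H", buf.read(2))
    codec_id = buf.read(n).decode()
    (n,) = struct.unpack("<H", buf.read(2))
    decoder_url = buf.read(n).decode()
    return codec_id, decoder_url
```

The point is only that the dispatch decision (software vs. DSP vs. FPGA) can be deferred to playback time, because the file itself says what it needs.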
> 'upgradable codec FPGAs' in our machines that you could just flash with the newest codec
They're called GPUs... They're ASICs rather than FPGAs, but it's easy to update the driver software to handle new video codecs. The difficulty is motivating GPU manufacturers to do so... They'd rather sell you a new one with newer codec support as a feature.
You can't do this, because GPUs are parallel and decompression cannot be parallelized. If there's any parallelism, it means the stream isn't compressed as much as it could be.
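The serial-dependency argument can be shown with a toy predictive coder (purely illustrative, not a real codec): each stored byte is relative to the previously *decoded* byte, so the decode loop carries a dependency that can't be split across parallel threads. Real codecs reintroduce parallelism deliberately, e.g. via independently decodable tiles, at some cost in compression.

```python
def encode(data: bytes) -> bytes:
    """Toy predictive coder: store each byte XORed with the previous byte.
    Real entropy coders are far more sophisticated, but share the property
    that coder state depends on everything decoded so far."""
    prev = 0
    out = bytearray()
    for b in data:
        out.append(b ^ prev)
        prev = b
    return bytes(out)

def decode(coded: bytes) -> bytes:
    """Decoding is inherently serial: byte N is only recoverable after
    byte N-1, so the loop-carried dependency defeats naive parallelism."""
    prev = 0
    out = bytearray()
    for c in coded:
        prev = c ^ prev
        out.append(prev)
    return bytes(out)
```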
Maybe not only reference software, but also reference RTL should be provided? Yes, this is more work, but should speed up adoption immensely.
The main delay last time was corporations being dicks about IP, but the two main culprits have got on board this time.
Unless they create a codec that GPUs are naturally good at, we will inevitably always be a couple of hardware cycles behind.
It's insane to the point that I'm very skeptical.
If true, that would be amazing.