Comment by notatoad
2 months ago
video decoding on a general-purpose cpu is computationally expensive, so most devices that can play video include some sort of hardware video decoding chip. if you want your video to play well, you need to deliver it in a format that chip can decode, on every device you want to serve.
so it takes a long time to transition to a new codec - new devices need to ship with support for your new codec, and then you have to wait until old devices get lifecycled out before you can fully drop support for old codecs.
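in practice that means the serving side carries a preference-ordered fallback list for years. a hypothetical sketch (the codec names are real, but the "device reports which decoders it has in hardware" model and the function name are made up for illustration):

    # hypothetical sketch of serving-side codec selection. not any
    # particular service's API - just the fallback idea.

    # preference order: most efficient codec first, ending at a baseline
    # (H.264/AVC) that effectively every device can hardware-decode.
    CODEC_PREFERENCE = ["av1", "hevc", "vp9", "h264"]

    def pick_codec(device_hw_decoders):
        """return the best codec this device can decode in hardware."""
        for codec in CODEC_PREFERENCE:
            if codec in device_hw_decoders:
                return codec
        # no overlap at all: serve the baseline and accept software decode
        # (hot, power-hungry, possibly choppy).
        return "h264"

    # an older box with no AV1 block still gets something it can play:
    print(pick_codec({"hevc", "h264"}))  # -> "hevc"
    print(pick_codec({"h264"}))          # -> "h264"

you only get to drop the last entries in that list once the devices that need them are gone.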
To this day no Apple TV boxes support hardware AV1 decode (which effectively means it's not supported there). Only the latest Roku Ultra devices support it. So obviously Netflix, for example, can't switch everyone over to AV1 even if it wanted to.
These days, even phone-class CPUs can decode 4k video at playback rate, but they use a lot of power doing it. Not reasonable for battery-powered devices. For AC-powered devices, the problem might be heat dissipation, particularly for little streaming boxes with only passive cooling.
Would it be possible to just ship video streaming devices with an FPGA that can be updated to support whatever hardware-accelerated codec is fashionable?
probably not at the prices that video streaming devices typically sell for.
I think the need for hardware decoding stinks because it makes otherwise capable hardware obsolete as soon as there's a new codec it can't decode.
Hardware acceleration has been a thing since...forever. Video in general is a balancing act between storage, bandwidth, and quality. Video playback on computers is a balancing act between storage, bandwidth, power, and cost.
Video is naturally large. You've got all the pixels in a frame, tens of frames every second, and however many bits per pixel. All those frames need to be decoded and displayed in order and within fixed time constraints. If you drop frames or deliver them slowly, no one is happy watching the video.
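To put rough numbers on that (a back-of-the-envelope sketch; 4K at 60 fps, 24 bits per pixel with no chroma subsampling, and the 15 Mbit/s streaming bitrate are all assumed figures):

    # Back-of-the-envelope: raw 4K video vs. what actually gets streamed.
    width, height = 3840, 2160      # 4K UHD frame
    bits_per_pixel = 24             # 8-bit RGB, ignoring chroma subsampling
    fps = 60

    raw_bits_per_second = width * height * bits_per_pixel * fps
    print(f"raw: {raw_bits_per_second / 1e9:.1f} Gbit/s")   # ~11.9 Gbit/s

    # A typical 4K streaming bitrate is on the order of 15 Mbit/s,
    # so the codec has to discard roughly 99.9% of the raw data.
    typical_stream_bits_per_second = 15e6
    print(f"compression: ~{raw_bits_per_second / typical_stream_bits_per_second:.0f}x")  # ~800x

That factor of several hundred is what the decoder has to undo, frame after frame, in real time.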
If you stick to video that can be effectively decoded on a general-purpose CPU with no acceleration, you're never going to keep up with the demands of actual users. It's also going to use a lot more power than an ASIC purpose-built to decode the video. And if you decide to use the beefiest CPU available in order to handle higher-quality video under some power envelope, your costs are going to increase, making the whole venture untenable.
I hear you, but I think the benefits fall mainly to streaming platforms rather than users.
Like I'm sure Netflix will lower their prices and Twitch will show fewer ads to pass the bandwidth savings on to us, right?