Comment by nospice
2 days ago
To folks who wax lyrical about FPGAs: why do they need a future?
I agree with another commenter: I think there are parallels to "the bitter lesson" here. There's little reason for specialized solutions when increasingly capable general-purpose platforms are getting faster, cheaper, and more energy efficient with every passing month. Another software engineering analogy is that you almost never need to write in assembly because higher-level languages are pretty amazing. Don't get me wrong, when you need assembly, you need assembly. But I'm not wishing for an assembly programming renaissance, because what would be the point of that?
FPGAs were a niche solution when they first came out, and they're arguably even more niche now. Most people don't need to learn about them and we don't need to make them ubiquitous and cheap.
I can't say I agree with you here; if anything, FPGAs and general-purpose microprocessors go hand in hand. It would be an absolute game changer to be able to literally download hardware acceleration for a new video codec or encryption algorithm. Currently this is all handled by fixed-function silicon, which rapidly becomes obsolete: AV1 support is only just now appearing in mainstream chips, almost 8 years after the codec shipped, and soon AV2 will be out and the cycle will repeat.
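To make "literally download hardware" concrete, here's a rough sketch of what that could look like on a Xilinx Zynq-style board, where the programmable logic is reconfigured at runtime by writing a bitstream into /dev/xdevcfg. The URL and bitstream name are made up for illustration:

    import shutil
    import urllib.request

    BITSTREAM_URL = "https://example.com/decoders/av2_decoder.bit.bin"  # hypothetical
    DEVCFG = "/dev/xdevcfg"  # Zynq PL configuration device node

    def load_codec_bitstream(url: str) -> None:
        # Stream the decoder bitstream straight into the PL config device;
        # once the write completes, the fabric implements the new decoder.
        with urllib.request.urlopen(url) as resp, open(DEVCFG, "wb") as pl:
            shutil.copyfileobj(resp, pl)

    load_codec_bitstream(BITSTREAM_URL)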
This is such a severe problem that even now, 20-plus-year-old H.264 is the only codec you can safely assume every end user will be able to play, and H.264 consumes 2x the bandwidth (if not more) of modern codecs at the same perceived image quality. There are still large subsets of users who cannot play anything newer without falling back to heavy, power-hungry software decoding. Being able to simply load a new video codec into hardware would be revolutionary, and that's only one possible use case.
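To see the tradeoff in code, here's a toy sketch of the negotiation this forces on anyone serving video: pick the most bandwidth-efficient codec the client can decode in hardware, with H.264 as the only universally safe fallback. The efficiency figures are illustrative assumptions, not measurements:

    # Relative bitrate for the same perceived quality (H.264 = 1.0;
    # modern codecs need roughly half, per the numbers above).
    EFFICIENCY = {"av1": 0.5, "hevc": 0.6, "vp9": 0.65, "h264": 1.0}

    def pick_codec(hw_decodable: set[str]) -> str:
        candidates = [c for c in EFFICIENCY if c in hw_decodable]
        # No hardware support reported: fall back to H.264 and eat either
        # ~2x the bandwidth or power-hungry software decoding.
        return min(candidates, key=EFFICIENCY.get) if candidates else "h264"

    print(pick_codec({"h264", "vp9"}))  # -> "vp9"
    print(pick_codec(set()))            # -> "h264"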
> It would be an absolute game changer to be able to literally download hardware acceleration for a new video codec or encryption algorithm
That relies on "FPGAs everywhere", which is much further out than "GPUs everywhere".
I'm not sure where the state of the art is on this, but given the way codecs work - the bitstream splits into tiles, each of which is numerically heavy but can be decoded independently - how far along are hybrid decoders, where the GPU does the heavy lifting on its general-purpose cores rather than in the fixed-function decoder pipeline?
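To illustrate the structure I mean, here's a toy sketch that splits a (fake) tiled bitstream and hands each tile to its own worker, standing in for GPU cores. decode_tile is a placeholder for the real per-tile work (entropy decode, inverse transform, prediction), and real codecs carry tile offsets in their headers rather than slicing evenly:

    from concurrent.futures import ThreadPoolExecutor

    def split_into_tiles(bitstream: bytes, n: int) -> list[bytes]:
        step = max(1, -(-len(bitstream) // n))  # ceiling division: at most n tiles
        return [bitstream[i:i + step] for i in range(0, len(bitstream), step)]

    def decode_tile(tile: bytes) -> bytes:
        # Placeholder for the numerically heavy, independent per-tile work.
        return bytes(b ^ 0xFF for b in tile)

    def decode_frame(bitstream: bytes, workers: int = 8) -> bytes:
        tiles = split_into_tiles(bitstream, workers)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # map preserves tile order, so the frame reassembles correctly
            return b"".join(pool.map(decode_tile, tiles))

    frame = decode_frame(bytes(range(256)) * 64)

The usual catch is that entropy decoding within a tile is inherently serial, which is part of why fixed-function blocks persist.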
But why would it be amazing? The alternative right now is to do it in software and dedicate a couple of cores to the task (or put in a separate $2 chip to run the decoder).
Like, I get the aesthetic appeal, and I accept that there is a small subset of uses where an FPGA really makes a difference. But in the general case, it's a bit like getting upset at people for using an MCU when a 555 timer would do. Sure, except doing it the "right" way is actually slower, more expensive, and less flexible, so why bother?
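For what it's worth, "dedicate a couple of cores" is a one-liner on Linux. A minimal sketch using the real os.sched_setaffinity call; the input file and core set are placeholders:

    import os
    import subprocess

    # Run a software decoder and discard its output via ffmpeg's null muxer;
    # "clip.ivf" is a placeholder input file.
    decoder = subprocess.Popen(
        ["ffmpeg", "-v", "error", "-i", "clip.ivf", "-f", "null", "-"]
    )
    os.sched_setaffinity(decoder.pid, {2, 3})  # confine the process to cores 2 and 3 (Linux-only)
    decoder.wait()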
Battery-powered or thermally constrained devices.
And you think that a downloaded codec on an FPGA would perform anywhere close to custom silicon? Because it won't; configurability comes at a steep cost.
FPGAs are more like CGRAs these days. With the right DSP units, they could absolutely be competitive with custom silicon.
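Back-of-envelope, with every figure an assumed round number rather than a measurement:

    dsp_slices = 2000                  # midrange FPGA (assumed)
    clock_hz = 500e6                   # achievable DSP clock (assumed)
    available = dsp_slices * clock_hz  # 1e12 MACs/s of raw throughput

    pixels_per_sec = 3840 * 2160 * 60          # ~5.0e8 pixels/s for 4K60
    macs_per_pixel = 200                       # transform + filter budget (assumed)
    needed = pixels_per_sec * macs_per_pixel   # ~1.0e11 MACs/s

    print(f"headroom: {available / needed:.1f}x")  # ~10x under these assumptions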