Comment by d-moon
5 days ago
As someone who's worked at Xilinx before and after the merger, it's a surprise they were even able to sell it for that much. Altera has been noncompetitive with Xilinx in performance and with Lattice in low-end/low-power offerings for at least the last two generations.
I'm concerned about the future of FPGAs and wonder who will lead the way to fix these abhorrent toolchains these FPGA companies force upon developers.
>wonder who will lead the way to fix these abhorrent toolchains these FPGA companies force upon developers.
Some FPGA vendors are contributing to and relying, partially or completely, on the open source stack (mainly yosys+nextpnr).
It is still perceived as not being "as good" as the universally hated proprietary tools, but it's getting there.
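For the curious, the whole open source flow is only a handful of tool invocations. Here is a rough sketch driven from Python; the file names, the iCE40 HX1K part, and the pin constraints file are placeholder assumptions, not a recipe for any particular board:

    # Sketch of the open source iCE40 flow: yosys -> nextpnr -> icestorm.
    # File names, part (hx1k) and package (tq144) are illustrative assumptions.
    import subprocess

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Synthesize the Verilog design into a JSON netlist with Yosys
    run(["yosys", "-p", "synth_ice40 -top top -json top.json", "top.v"])
    # Place and route for an iCE40 HX1K with nextpnr
    run(["nextpnr-ice40", "--hx1k", "--package", "tq144",
         "--json", "top.json", "--pcf", "pins.pcf", "--asc", "top.asc"])
    # Pack the textual bitstream into a binary and flash it (icestorm tools)
    run(["icepack", "top.asc", "top.bin"])
    run(["iceprog", "top.bin"])

Lattice parts (iCE40, ECP5) have the most mature support in that stack; coverage for other vendors' parts varies.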
This has piqued my interest: which vendors are using an open source stack?
QuickLogic[0] and Platypus[1] are the ones I am aware of.
0. https://ir.quicklogic.com/press-releases/detail/657/quicklog...
1. https://www.designnews.com/semiconductors-chips/is-platypus-...
Yeah, I personally wondered whether AMD was just copying Intel (because apparently every CPU manufacturer also needs to manufacture FPGAs) or whether they actually have a long-term strategy in which it is essential for the FPGA and CPU departments to cooperate.
I think Xilinx did a fine job with their AI Engines, and AMD decided to integrate a machine-learning-focused variant on their laptops as a result. The design of the Intel NPU is nowhere near as good as AMD's. I have to say that AMD is not a software company, though; while the hardware is interesting, their software support is nonexistent.
Also, if you're worried about FPGAs, that doesn't really make much sense, since Efinix is killing it.
I briefly hoped that, like the integration of GPUs, there would be a broader integration of programmable logic in general purpose CPUs, with AMD integrating Xilinx fabric and Intel integrating Altera fabric. But I could never imagine a real use case and apparently there wasn't a marketable enough one either. Something like high-level synthesis ending up like CUDA always seemed like it would present a neat development environment for certain optimizations.
I wanted that, too. Then, integrated with something like Synflow:
https://www.synflow.com/
Agree on both. As things like the PIO on the RP line of micros get more common, micros will have IO that can match FPGAs. For the low end, micros are generally good enough, or gain NPU compute cores. It's the IO that differentiates FPGAs.
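For reference, this is roughly what that "FPGA-ish" IO looks like in MicroPython on an RP2040. It's just the standard blink example (assuming a Pico with the LED on GPIO 25); the point is that the PIO state machine runs with deterministic, cycle-exact timing independent of the CPU:

    # Standard MicroPython PIO blink sketch for an RP2040 (assumes LED on GPIO 25).
    # The PIO state machine toggles the pin with cycle-exact timing, no CPU involved.
    import rp2
    from machine import Pin

    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def blink():
        set(pins, 1) [31]   # drive high, then wait 31 extra cycles
        nop()        [31]
        set(pins, 0) [31]   # drive low, then wait 31 extra cycles
        nop()        [31]

    # Run on state machine 0 at 2 kHz; the pin toggles every 64 PIO cycles (~32 ms)
    sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
    sm.active(1)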
You worked at Xilinx and you're not aware that FPGA is not a growing segment?
What is replacing it? Single board computers? Or are APUs from ARM "good enough" and "cheap enough" now to replace FPGA?
There is literally no market for FPGAs as coprocessors/accelerators and there never was (that was some kind of pipe/hype dream before GPGPU took off). Where there is a market for them (prototyping ASICs, automotive, whatever, network switches, etc.), there is no replacement, but there is also no growth.
3 replies →
So Intel found optimists who think they can make Altera more competitive? It's a success. Success with Intel products would be better, and excellence at M&A is hard to convert into excellence at chipmaking, but it's better than nothing.
It seems FPGAs can now do things for LLMs, so there might be some future in that
https://www.achronix.com/blog/accelerating-llm-inferencing-f...
It's never going to be as efficient as an ASIC, and the LLM market is definitely big enough for ASICs to be viable.
I hear this a lot, but in my experience this isn't true at all.
A Versal AI Edge FPGA has a theoretical performance of 0.7 TFLOPS from the DSPs alone while consuming less power than a Raspberry Pi 5, and that is ignoring the AI Engines, which are exactly the ASICs you are talking about. They are more power efficient than GPUs because they don't need to pretend to run multiple threads, each with their own register file, or hide memory latency by swapping warps. Their 2D NoC plus cascaded connections gives them really high internal memory bandwidth between the tiles at low power.
What they are missing is processing in memory, specifically LPDDR-PIM for GEMV acceleration. The memory controllers simply can't deliver a memory bandwidth that is competitive with what Nvidia has, and I'm talking about boards like the Jetson Orin here.
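To put rough numbers on that (all of these are illustrative assumptions, not measurements of any specific board): single-batch LLM decoding is dominated by GEMV, which streams every weight once per token, so the bandwidth ceiling is hit long before the compute ceiling:

    # Back-of-envelope: why single-batch LLM decoding is bandwidth-bound.
    # All numbers are illustrative assumptions, not measured figures.
    params = 7e9            # assumed 7B-parameter model
    bytes_per_param = 1.0   # assumed int8 weights
    bw = 100e9              # assumed ~100 GB/s of LPDDR bandwidth
    compute = 0.7e12        # the ~0.7 TFLOPS DSP figure quoted above

    bytes_per_token = params * bytes_per_param       # every weight read once per token
    flops_per_token = 2 * params                     # ~2 FLOPs per weight in a GEMV

    print("bandwidth-bound:", bw / bytes_per_token, "tok/s")       # ~14 tok/s
    print("compute-bound:  ", compute / flops_per_token, "tok/s")  # ~50 tok/s

Even with generous assumptions, more MACs don't help; wider memory interfaces or PIM do.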
4 replies →
This is past tense, no?
There have been neural processing chips since before the LLM craze [1].
[1]: https://en.wikipedia.org/wiki/Neural_processing_unit#History
If LLMs can leverage the new efficient FFT-based attention mechanism discovered by Google, then FPGAs could be the new hot stuff [1]:
[1] The FFT Strikes Back: An Efficient Alternative to Self-Attention (168 comments):
https://news.ycombinator.com/item?id=43182325
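For anyone wondering what "FFT instead of attention" even looks like in code, here is the simplest variant I know of (FNet-style token mixing). This is a generic sketch of the idea, not the linked paper's exact mechanism:

    # FNet-style token mixing: replace the QK^T attention matrix with a 2D FFT.
    # Generic illustration of the idea, not the linked paper's exact method.
    import numpy as np

    def fft_mix(x):
        # x: (seq_len, d_model). FFT over both axes, keep the real part.
        # O(n log n) in sequence length, no n x n attention matrix; FFT
        # butterflies are a classic fit for FPGA DSP/LUT fabric.
        return np.real(np.fft.fft2(x))

    x = np.random.randn(128, 64).astype(np.float32)
    y = fft_mix(x)
    print(y.shape)  # (128, 64), same shape a self-attention block would produce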
Altera's tools seemed more civilized than Xilinx's, in my limited experience.
Altera's toolchain was a tad nicer than Xilinx's as of 2020, just saying. Still horrible, but at least the IDE wasn't a laggy Electron abomination.