Comment by bigyabai
15 hours ago
I'm not an insider, but ASICs come with their own suite of issues and might become obsolete if a different architecture becomes popular. In all likelihood they'll have a much shorter useful lifespan than Nvidia hardware, and they'll probably struggle to get fab capacity that puts them on equal footing in performance. For example, look at the GPU shortage that hit crypto despite hundreds of ASIC designs existing.
The industry badly needs to cooperate on an actual competitor to CUDA, and unfortunately the major players are more hostile to each other today than they were 10 years ago.
You can build ASICs to be a lot more energy efficient than current GPUs, especially if your power budget is dominated by raw compute rather than by data-movement bandwidth. The tradeoff is much higher latency for any given compute throughput, but for workloads such as training, or even some kinds of "deep thinking" inference, you don't care much about that.
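A rough back-of-the-envelope sketch of why the latency penalty washes out for throughput-bound jobs; every number below is an illustrative assumption, not a measurement of any real ASIC or GPU:

    # Back-of-envelope: why pipeline latency barely matters for throughput-bound work.
    # All numbers are illustrative assumptions, not real hardware figures.

    def total_time_s(num_ops, throughput_ops_per_s, fill_latency_s):
        # Fully pipelined unit: pay the pipeline-fill latency once,
        # then retire results at the sustained throughput.
        return fill_latency_s + num_ops / throughput_ops_per_s

    ops = 1e15  # a big training-style workload

    # Hypothetical ASIC: deep pipeline (high latency) but higher sustained throughput.
    asic = total_time_s(ops, throughput_ops_per_s=5e14, fill_latency_s=1e-3)

    # Hypothetical GPU: much lower latency, lower sustained throughput on this workload.
    gpu = total_time_s(ops, throughput_ops_per_s=2e14, fill_latency_s=1e-6)

    print(f"ASIC: {asic:.2f} s   GPU: {gpu:.2f} s")
    # The 1 ms pipeline fill is noise next to seconds of steady-state compute,
    # so energy efficiency and sustained throughput dominate for training-style jobs.

For a latency-sensitive single request the picture flips, which is exactly the tradeoff being described.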