Comment by felixfurtak
3 days ago
GPUs are massively parallel, sure, but they still have a difficult memory architecture, are hard to program, and remain heavily memory-constrained. It's only Nvidia's investment in CUDA that made it feasible to build decent ML models on GPUs at all.