Comment by leventilo
3 days ago
The energy numbers are the real story here: a 70-82% reduction on CPU inference. If 1-bit models ever get good enough, running them on commodity hardware with no GPU budget changes who can deploy LLMs. That's more interesting than the speed benchmarks, imo.