Comment by visarga
18 hours ago
> Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao
Unfortunately, local inference is inefficient, hundreds of times less efficient than cloud inference. When you serve one request at a time, you still have to stream all the active weights into the compute units once per token. When you run a batch of 300 requests, you stream the weights once and generate 300 tokens per pass.
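A rough back-of-the-envelope sketch of that batching argument, assuming decode is memory-bandwidth bound. All numbers here (a ~70B-parameter model in fp16, ~2 TB/s of memory bandwidth) are illustrative assumptions, not measurements:

```python
# Decode throughput when streaming the weights from memory is the bottleneck.
# Illustrative assumptions: ~70B params at 2 bytes each, ~2 TB/s memory bandwidth.
WEIGHT_BYTES = 70e9 * 2
MEM_BANDWIDTH = 2e12

def tokens_per_second(batch_size: int) -> float:
    """Upper bound on decode throughput under the weight-streaming limit.

    One pass over the weights produces one new token for every sequence in
    the batch, so throughput scales roughly linearly with batch size until
    compute or KV-cache traffic takes over.
    """
    weight_passes_per_second = MEM_BANDWIDTH / WEIGHT_BYTES
    return weight_passes_per_second * batch_size

print(f"batch=1:   {tokens_per_second(1):8.1f} tok/s total")
print(f"batch=300: {tokens_per_second(300):8.1f} tok/s total")
# Same weight traffic either way, ~300x more useful tokens per pass when batched.
```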
Compared to cloud, local inference is also less flexible: you can't scale up 5x or 20x, you can't absorb spikes, and you pay for the hardware whether you use it or not. Your utilization is typically very low, maybe 5%, and to run a decent model the system costs $2000 or more.
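To make the utilization point concrete, here is a minimal amortization sketch. Every figure is a labeled assumption for illustration (3-year lifetime, 5% utilization, ~15 tok/s single-stream decode, a hypothetical $1 per million tokens cloud price), not a real benchmark or price list:

```python
# Rough cost-per-token comparison under the utilization argument above.
# All figures are assumptions for illustration, not real prices or benchmarks.
HARDWARE_COST = 2000.0        # local rig (USD), as in the comment
LIFETIME_YEARS = 3            # assume the hardware is amortized over 3 years
UTILIZATION = 0.05            # ~5% of the time it is actually serving requests
LOCAL_TOKENS_PER_SEC = 15     # assume ~15 tok/s single-stream decode
CLOUD_PRICE_PER_MTOK = 1.0    # assume $1 per million tokens (hypothetical)

seconds_in_lifetime = LIFETIME_YEARS * 365 * 24 * 3600
tokens_generated = seconds_in_lifetime * UTILIZATION * LOCAL_TOKENS_PER_SEC

local_cost_per_mtok = HARDWARE_COST / (tokens_generated / 1e6)
print(f"local: ${local_cost_per_mtok:.2f} per million tokens (hardware amortization only)")
print(f"cloud: ${CLOUD_PRICE_PER_MTOK:.2f} per million tokens (assumed price)")
```

Under these assumptions the idle hardware dominates the local cost per token; the gap shrinks as utilization rises.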