
Comment by zozbot234 (8 hours ago):

But commodity hardware that's right-sized for your own private needs is many orders of magnitude cheaper than datacenter hardware that's intended to serve millions of users simultaneously while consuming gigawatts of power. When you buy LLM tokens you're mostly paying for that hardware, not just for the power it consumes. And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

>And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

^ Fair. Yep, I agree the calculus changes if you don't have _any_ local hardware and need to factor in the cost of acquiring it.

When I did this napkin math, I was mostly interested in the energy aspect, using cost as a proxy. I calculated a local $/token from the cost of a kWh from my utility, the measured power draw of my M1 work machine, and the measured tokens per second from a ~20B-parameter open-weight model. I then compared that to the published $/token rate of a frontier provider, and it came out something like two orders of magnitude in favor of the frontier model. I get it, they're subsidizing, but I've got to imagine there's some truth in the numbers.
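Roughly, the arithmetic looked like this; the numbers below are placeholders rather than my actual measurements, so treat it as a sketch of the method, not the result:

```python
# Sketch of the napkin math: electricity cost per token for a local setup.
# All inputs are illustrative placeholders, not actual measurements.

power_draw_watts = 60        # placeholder: measured wall draw while generating
electricity_per_kwh = 0.15   # placeholder: utility rate in $/kWh
tokens_per_second = 10       # placeholder: throughput of a ~20B-parameter model

# $/token = (kW drawn * $/kWh) / (tokens generated per hour)
local_cost_per_token = (power_draw_watts / 1000) * electricity_per_kwh \
    / (tokens_per_second * 3600)

print(f"local electricity cost: ${local_cost_per_token:.2e}/token "
      f"(${local_cost_per_token * 1e6:.2f} per million tokens)")

# Compare against a provider's published $/token (their quoted $ per million
# tokens divided by 1e6) to get the ratio described above.
```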

I wonder, does (or will) the $/token rate fall asymptotically toward the cost of electricity? In my mind I'm drawing a parallel to how the value of mined cryptocurrency approximately tracks the cost of electricity... but I might be misremembering that detail.

I doubt it, because you aren't going to get the utilisation that a commercial setup would. No point wasting tons of money on hardware that sits idle most of the time.

  • If you're running agentic workloads in the background (either some coding agent or a personal claw-agent type), that's enough utilization that the hardware won't be sitting idle.
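To make the utilization tradeoff concrete, here's a toy amortization model; every number below is a made-up placeholder, and the `cost_per_token` helper is just amortized hardware plus electricity divided by tokens produced:

```python
# Toy model: amortized $/token for owned hardware at different duty cycles.
# Every number here is a made-up placeholder.

hardware_cost = 2500.0        # placeholder: purchase price in $
lifetime_years = 4            # placeholder: amortization period
power_draw_watts = 60         # placeholder: average draw while generating
electricity_per_kwh = 0.15    # placeholder: utility rate
tokens_per_second = 10        # placeholder: sustained throughput

SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_token(utilization: float) -> float:
    """Amortized hardware + electricity cost per token at a given duty cycle (0..1)."""
    active_seconds = lifetime_years * SECONDS_PER_YEAR * utilization
    tokens = active_seconds * tokens_per_second
    energy_kwh = (power_draw_watts / 1000) * (active_seconds / 3600)
    return (hardware_cost + energy_kwh * electricity_per_kwh) / tokens

for duty_cycle in (0.01, 0.10, 0.50, 1.00):
    print(f"{duty_cycle:5.0%} utilization: "
          f"${cost_per_token(duty_cycle) * 1e6:,.2f} per million tokens")
```

At low duty cycles the up-front hardware cost dominates the per-token figure, which is the utilisation argument above; run the machine close to flat out and it converges toward the electricity-only cost.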