Comment by Eupolemos

7 days ago

Why are you talking price when we are talking local AI?

That doesn't make any sense to me. Am I missing something?

Your electricity is free?

  • Apple silicon is crazy efficient, and the Max and Ultra chips are comparable to discrete GPUs in performance.

  • If you have the hardware to run expensive models, is the cost of electricity much of a factor? According to Google, the average price in the Silicon Valley area is $0.448 per kWh. An RTX 5090 costs about $4,000, and a system running one flat out can draw around 1,000 W. Maxing out that GPU for a whole year would cost about $3,925 at that rate, roughly the price of the hardware itself.

    • At that point it'd be cheaper to get an expensive subscription to a cloud AI product. I understand the case for local LLMs, but it seems silly to worry about pricing for cloud-based offerings while ignoring the cost of running models locally, especially since the local option can often be more expensive.
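The annual-electricity figure quoted above can be checked with a quick back-of-the-envelope calculation (the $0.448/kWh rate and 1,000 W draw are the numbers from the comment above, not independently verified):

```python
# Annual electricity cost for a GPU system running continuously at peak draw.
RATE_PER_KWH = 0.448      # $/kWh, Silicon Valley average quoted above
PEAK_WATTS = 1000         # W, sustained draw assumed above
HOURS_PER_YEAR = 24 * 365

kwh_per_year = PEAK_WATTS / 1000 * HOURS_PER_YEAR   # 8,760 kWh
annual_cost = kwh_per_year * RATE_PER_KWH

print(f"${annual_cost:,.2f} per year")  # → $3,924.48 per year
```

That rounds to the ~$3,925/year cited, i.e. the worst-case electricity bill lands in the same ballpark as the GPU's purchase price.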