Comment by api

1 year ago

How is it that cloud LLMs can be so much cheaper? Especially given that local compute, RAM, and storage are often orders of magnitude cheaper than cloud.

Is it possible that this is an AI bubble subsidy where we are actually getting it below cost?

Of course, for conventional compute the cloud markup is ludicrous, so maybe this is just cloud economies of scale with a much smaller markup.

My guess is that it's two things:

1. Economies of scale. Cloud providers are running clusters of tens of thousands of GPUs. I think they can run inference much more efficiently there than you could on a small setup built just for your own needs.

2. As you mentioned, they are selling at a loss. OpenAI is hugely unprofitable, and they reportedly lose money on every query.

I think batch processing of many requests is the big one. As each layer's weights are loaded into cache, you can push many prompts through them at once, amortizing the memory traffic. Running a single prompt locally, you don't get that benefit.
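
A toy sketch of that amortization (numpy, with made-up dimensions; real serving stacks do this with continuous batching, but the principle is the same):

    import numpy as np

    # Toy "layer": one big weight matrix. In decode, reading W from
    # memory dominates; the arithmetic per request is cheap.
    d = 4096
    W = np.random.randn(d, d).astype(np.float32)  # ~64 MB of weights

    def forward(batch):
        # W is streamed from memory once per call, regardless of batch
        # size, so 32 stacked prompts cost roughly the same memory
        # traffic as 1.
        return batch @ W

    forward(np.random.randn(1, d).astype(np.float32))   # 1 request
    forward(np.random.randn(32, d).astype(np.float32))  # 32 requests, ~same weight load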

> Especially given that local compute, RAM, and storage are often orders of magnitude cheaper than cloud

He uses old, much less efficient GPUs.

He also did not choose where to live based on electricity prices, unlike the cloud providers, who site their datacenters where power is cheap.

It's cheaper because you are unlikely to run your local AI at full capacity 24/7, so you end up paying for unused capacity.
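
Back-of-the-envelope version of that utilization point (all numbers below are assumptions for illustration, not measurements):

    # Hypothetical numbers: a $2,000 local GPU amortized over 3 years
    # vs. a provider that keeps the same hardware mostly busy.
    hw_cost = 2000.0                      # USD, assumed
    hours = 3 * 365 * 24                  # amortization window
    local_util, cloud_util = 0.05, 0.80   # fraction of time actually serving

    local = hw_cost / (hours * local_util)
    cloud = hw_cost / (hours * cloud_util)
    print(f"local: ${local:.2f} per busy hour, cloud: ${cloud:.2f}")
    # With these assumptions the provider's hardware cost per useful
    # hour is ~16x lower, before bulk discounts or cheap power.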

  • They are specifically referring to usage of APIs where you just pay by the token, not by compute. In this case, you aren’t paying for capacity at all, just usage.

The hardware is shared between users, so it is better utilized and more heavily optimized.

  • "Sharing between users" doesn't make it cheaper. It makes it more expensive due to the inherent inefficiencies of switching user contexts. (Unless your sales people are doing some underhanded market segmentation trickery, of course.)

    • No, batched inference works very well. Depending on the architecture, you can get 100x or more tokens per second out of the same hardware if you feed it multiple requests in parallel.
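
      A rough roofline sketch of where that factor comes from, with assumed round numbers (decode is memory-bandwidth-bound, so each step streams all the weights once regardless of batch size, until compute becomes the limit):

        # Rough roofline with assumed round numbers.
        weights_bytes = 140e9          # e.g. a 70B model in fp16, assumed
        mem_bw = 3.35e12               # bytes/s, ballpark HBM bandwidth
        step = weights_bytes / mem_bw  # seconds per decode step, any
                                       # batch size while bandwidth-bound
        for batch in (1, 8, 64, 256):
            print(f"batch {batch:>3}: ~{batch / step:,.0f} tokens/s")
        # batch 1 gives ~24 tok/s; batch 256 gives ~6,100 tok/s from
        # the same hardware, because the weight traffic is shared.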

Isn't that just because they can get massive volume discounts on hardware, plus a willingness to absorb losses?

  • All that, but also because they have GPUs with enormous amounts of RAM and memory bandwidth? So the tokens per second is that much higher, but in terms of power, I'd guess those boards draw in the same ballpark as consumer GPUs?
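
    Rough tokens-per-joule arithmetic under that assumption (all figures are ballpark guesses, not measurements):

        # Ballpark guesses, not measurements.
        dc_power_w, dc_tps = 700.0, 6000.0      # datacenter GPU, big batch
        local_power_w, local_tps = 350.0, 20.0  # consumer GPU, single stream
        print(f"datacenter: {dc_tps / dc_power_w:.2f} tokens/joule")
        print(f"local:      {local_tps / local_power_w:.3f} tokens/joule")
        # Board power is indeed the same ballpark (2x here), but tokens
        # per joule differ by ~150x once batching is in play.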