Comment by adam_arthur

3 months ago

Storage doesn't require the same capex/upfront investment to get that margin.

How much does it cost to train a cutting-edge LLM? Those costs need to be factored into the margin from inference.

Buying hard drives and slotting them in also has capex associated with it, but far less in total, I'd guess.
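
For a sense of the arithmetic, here's a minimal back-of-envelope sketch; every number in it is an illustrative assumption, not a real figure:

    # Back-of-envelope: how training capex eats into inference margin.
    # All numbers are illustrative assumptions, not real figures.
    training_capex = 1_000_000_000            # assumed one-off training cost, USD
    amortization_years = 2                    # assumed useful life of the model
    annual_inference_revenue = 2_000_000_000  # assumed
    annual_inference_opex = 800_000_000       # assumed serving cost (GPUs, power)

    gross = (annual_inference_revenue - annual_inference_opex) / annual_inference_revenue
    amortized = training_capex / amortization_years
    true_margin = (annual_inference_revenue - annual_inference_opex - amortized) / annual_inference_revenue

    print(f"margin ignoring training capex:  {gross:.0%}")        # 60%
    print(f"margin including training capex: {true_margin:.0%}")  # 35%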

  How much does it cost to train a cutting-edge LLM? Those costs need to be factored into the margin from inference.

They don't, though! I can buy hardware off the shelf, host open-source models on it, and then charge for inference:

https://parasail.io, https://www.baseten.co
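
As a rough illustration of that path (a sketch, not how those particular services work): serve an open-weights model behind something like vLLM's OpenAI-compatible server, then meter tokens in front of it. The endpoint, model name, and price below are placeholders:

    # Sketch: meter a self-hosted open-weights model served by vLLM,
    # e.g. started with: vllm serve meta-llama/Llama-3.1-8B-Instruct
    # Endpoint, model name, and price are placeholders/assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
    PRICE_PER_1M_TOKENS = 0.50  # USD you charge per million tokens (placeholder)

    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "Summarize why capex matters."}],
    )
    tokens = resp.usage.total_tokens
    print(resp.choices[0].message.content)
    print(f"billable: {tokens} tokens -> ${tokens * PRICE_PER_1M_TOKENS / 1e6:.6f}")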

  • Yes, which is why the companies that develop the models aren't cost-viable. (Google and others who can subsidize it at a loss are the obvious exception.)

    Where is the return on model development if anybody can host a roughly equivalent model at the same price and completely bypass those development costs?

    Your point is in line with the entire bear thesis on these companies.

    For use cases that are analytical/backend-oriented and don't scale 1:1 with the number of users (of which there are a lot), you can already run a close-to-cutting-edge model on a few thousand dollars of hardware (see the local-hosting sketch below this thread). I already do this at home.

    • Open-source models are still a year or so behind the SotA models released in the last few months. Price-to-performance is definitely in favor of open-source models, however.

      DeepMind is actively using Google’s LLMs in groundbreaking research. Anthropic is focused on security for businesses.

      For consumers it’s still a better deal to pay for a subscription than to invest a few grand in a personal LLM machine (a break-even sketch follows below this thread). Diminishing returns will eventually narrow that gap significantly, but I’m sure top LLM researchers are planning for this and will do whatever they can to keep their firms alive beyond the cost of scaling.


    • The other nightmare for these companies is that any competitor can use their state-of-the-art model to train another model, as some Chinese models are suspected of doing (a sketch of that loop follows below). I personally think it's only fair, since those companies themselves trained on a ton of data that nobody agreed to share. But it shows that training frontier models has a really low return on investment.
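
A minimal sketch of the loop described in the last comment, assuming API access to some frontier model; the model name and prompts are placeholders:

    # Sketch: harvesting a frontier model's outputs as training data
    # ("distillation"). Model name and prompts are placeholders; this
    # assumes an API key for the provider is set in the environment.
    import json
    from openai import OpenAI

    client = OpenAI()
    prompts = ["Explain TCP slow start.", "Prove sqrt(2) is irrational."]

    with open("distill.jsonl", "w") as f:
        for p in prompts:
            resp = client.chat.completions.create(
                model="gpt-4o", messages=[{"role": "user", "content": p}]
            )
            # Each (prompt, answer) pair becomes a fine-tuning example
            # for a competitor's own model.
            f.write(json.dumps({
                "prompt": p,
                "completion": resp.choices[0].message.content,
            }) + "\n")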
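
The local-hosting sketch referenced above: minimal batch/analytical inference on your own hardware, here assuming llama-cpp-python and a quantized GGUF checkpoint; the model path and input files are placeholders:

    # Sketch: analytical/backend inference on local hardware with
    # llama-cpp-python. Model path and inputs are placeholders; any
    # GGUF checkpoint that fits your hardware works.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-70b-instruct-q4_k_m.gguf",  # assumed local file
        n_gpu_layers=-1,  # offload all layers to whatever GPU(s) you have
        n_ctx=8192,
    )

    # Jobs like this don't scale 1:1 with user count:
    for doc in ["report_a.txt", "report_b.txt"]:  # placeholder inputs
        text = open(doc).read()
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": f"Extract the key figures:\n{text}"}]
        )
        print(doc, "->", out["choices"][0]["message"]["content"][:200])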
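
And the consumer break-even sketch referenced above; every figure is an illustrative assumption:

    # Sketch: consumer break-even, subscription vs. a personal LLM machine.
    # All figures are illustrative assumptions.
    hardware_cost = 3000.0         # assumed one-off machine cost, USD
    electricity_per_month = 15.0   # assumed extra power cost, USD
    subscription_per_month = 20.0  # assumed hosted-LLM subscription, USD

    months = hardware_cost / (subscription_per_month - electricity_per_month)
    print(f"break-even after {months:.0f} months (~{months / 12:.0f} years)")
    # -> 600 months (~50 years) under these assumptions, which is why
    #    the subscription wins for typical consumers today.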

Yes, you’re right. Capex spend is definitely higher.

In the end it all comes down to the value provided, as you see in the storage example.