
Comment by johnsimer

20 days ago

Both companies are making bank on inference

You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is a matter of speculation.

https://www.wheresyoured.at/why-everybody-is-losing-money-on...
https://www.economist.com/business/2025/12/29/openai-faces-a...
https://finance.yahoo.com/news/openais-own-forecast-predicts...

That is the big question. Got reliable data on that?

(My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)

  • https://www.wheresyoured.at/costs/

    Their AWS spend being higher than their revenue might hint at the same.

    Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.

    • The closed frontier models seem to sell at a substantial premium over inference on open-source models, which suggests there is a decent margin on inference itself. Training is where they're losing money. The bull case is that every model eventually makes money, but the models keep getting bigger, or at least more expensive to train, so the labs are borrowing money now to make even more money later. That does need to converge somehow: they can't keep scaling up until the market can no longer afford to pay for the training. The bear case is that this is basically a treadmill to stay on the frontier where they can earn that premium (if the big labs ever stop, they'll quickly be caught up by cheaper or even open-source models and lose their edge), in which case it's probably never going to become sustainable.
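
      The bull-vs-bear arithmetic can be sketched with a toy calculation. Every number below is invented purely for illustration (none are real OpenAI or Anthropic figures); the point is only the shape of the condition: a generation is profitable when its lifetime inference margin exceeds its training cost, and the treadmill worry is training costs growing faster than served volume.

      ```python
      # Toy model of "every model makes money eventually".
      # All figures are made up for illustration, not real lab economics.

      def model_lifetime_profit(price_per_mtok, serve_cost_per_mtok,
                                mtok_served, training_cost):
          """Lifetime inference margin of one model generation, minus its training cost."""
          inference_margin = (price_per_mtok - serve_cost_per_mtok) * mtok_served
          return inference_margin - training_cost

      # Hypothetical generation N: $10/Mtok price, $3/Mtok to serve,
      # 200M Mtok served over its lifetime, $1B to train -> profitable.
      gen_n = model_lifetime_profit(10.0, 3.0, 200e6, 1e9)

      # Hypothetical generation N+1: same per-token margin, 2x the volume,
      # but 3x the training cost -> the margin no longer covers training.
      gen_n1 = model_lifetime_profit(10.0, 3.0, 400e6, 3e9)

      print(gen_n, gen_n1)  # 400000000.0 -200000000.0
      ```

      Under these made-up numbers, doubling volume while tripling training cost flips a generation from profit to loss, which is the "needs to converge" point above in one line of arithmetic.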

Could you substantiate that? Does that take into account training and staffing costs?

  • The parent specifically said inference, which does not include training and staffing costs.

    • But those aren't things you can really separate for proprietary models. Keeping inference running also requires staff, not just for the R&D.