Comment by mirzap

4 days ago

They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.

Banning third-party tools has nothing to do with rate limits. They’re trying to position themselves as the Apple of AI companies - a walled garden. They may soon discover that screwing developers is not a good strategy.

They are not 10× better than Codex; on the contrary, in my opinion Codex produces much better code. Even Kimi K2.5 is a very capable model that I find at least on par with Sonnet, and very close to Opus. Forcing people to use ONLY a broken Claude Code UX with a subscription only ensures they lose whatever advantage they had.

> "just a few dollars per million tokens"

Google AI Pro is like $15/month for practically unlimited Pro requests, each of which can take a million tokens of context (and also perform thinking, free Google Search for grounding, and inline image generation if needed). This includes Gemini CLI, Gemini Code Assist (VS Code), the main chatbot, and a bunch of other vibe-coding projects which have their own rate limits or no rate limits at all.

It's crazy to think this is sustainable (rough math below). It'll be like Xbox Game Pass - start at £5/month to hook people in, and before you know it it's £20/month and has nowhere near as many games.
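To make the scale concrete, here is a rough sketch of the math; the per-token rate, request volume, and usage pattern below are all assumptions for illustration, not Google's actual pricing or anyone's measured usage:

```python
# Rough illustration of why a ~$15/mo plan with million-token requests looks
# hard to sustain. The per-million-token rate is a hypothetical placeholder,
# not Google's actual Gemini API pricing.
assumed_rate_per_million_tokens = 2.0   # USD, illustrative only
requests_per_day = 50                   # hypothetical heavy user
tokens_per_request_millions = 1         # ~1M-token context per request
days_per_month = 30

api_equivalent_cost = (assumed_rate_per_million_tokens * requests_per_day
                       * tokens_per_request_millions * days_per_month)
print(f"API-equivalent cost: ${api_equivalent_cost:,.0f}/mo vs a ~$15/mo plan")
# => $3,000/mo under these assumptions, i.e. heavy users are heavily subsidised.
```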

  • OpenAI only released ChatGPT 4 years ago but…

    Google has made custom AI chips for 11 years — since 2015 — and inference costs them 2-5x less than it does for every other competitor.

    The landmark paper that invented the techniques behind ChatGPT, Claude and modern AI was also published by Google scientists 9 years ago.

    That’s probably how they can afford it.

    • I agree that the TPUs are one of the things that are underestimated (based on my personal reading of HN).

      Google already has a huge competitive advantage: they have more data than anyone else, they bundle Gemini into every Android phone to siphon even more data, and they control the Android platform itself. The TPUs truly make me believe there actually could be a sort of monopoly on LLMs in the end, even though there are so many good models with open weights, so few (technical) reasons to create software that only integrates with Gemini, etc.

      Google will have the lion’s share of inference, I believe. OpenAI and Anthropic will have a very hard time fighting this.

  • I can see it's £18.95 from the UK, which is almost double that. I guess this is an oversight on your part, or maybe you're quoting from memory.

I’m not familiar with the Claude Code subscription, but with Codex I’m able to use millions of tokens per day on the $200/mo plan. My rough estimate was that if I were paying API rates, it would cost about $50/day, or $1,200/mo. So either the API has a 6x profit margin on inference, the subscription is a loss leader, or they just rely on most people not going anywhere near the usage caps.
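For what it's worth, here is the back-of-envelope math behind that 6x figure, using only the rough estimates above (the billable-days assumption is mine):

```python
# Back-of-envelope check of the ~6x figure, using the commenter's own rough
# estimates; the 24 billable days per month is an assumption for illustration.
api_cost_per_day = 50            # estimated API-metered cost, USD
billable_days_per_month = 24     # assumption: roughly weekdays only
subscription_price = 200         # Codex $200/mo plan

api_equivalent_per_month = api_cost_per_day * billable_days_per_month   # ~$1,200
implied_ratio = api_equivalent_per_month / subscription_price           # ~6x

print(f"API-equivalent monthly cost: ${api_equivalent_per_month:,}")
print(f"Implied ratio vs subscription price: {implied_ratio:.0f}x")
```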

  • I use the GLM Lite subscription for personal use. It is advertised as 3x Claude Code Pro (the $20 one).

    The 5-hour allowance is somewhere between 50M and 100M tokens from what I can tell.

    On the $200 Claude Code plan you would have to burn hundreds of millions of tokens per day to make Anthropic hurt (rough sketch below).

    IMHO subscription plans are totally banking on many users underusing them. Also, LLM providers don't like to publish exact numbers (how much you actually get, etc.).
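    A rough sketch of that break-even point; the serving-cost number below is a pure assumption for illustration, since Anthropic doesn't publish its real per-token inference cost:

    ```python
    # Hypothetical break-even usage for a $200/mo plan. The serving cost per
    # million tokens is an assumed figure, not a published Anthropic number.
    assumed_serving_cost_per_million = 0.25   # USD, hypothetical true inference cost
    plan_price = 200                          # $200/mo plan
    days_per_month = 30

    breakeven_millions_per_day = plan_price / days_per_month / assumed_serving_cost_per_million
    print(f"Break-even usage: ~{breakeven_millions_per_day:.0f}M tokens/day")
    # ~27M tokens/day at this assumed cost, so a subscriber really would have to
    # burn hundreds of millions of tokens per day to hurt them by a wide margin.
    ```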

  • It's the latter. It's the average use that matters. Though I suspect API margins are also probably higher than people think.

Inference might be cheap, but I'm 100% sure Anthropic has been losing quite a lot of money on power users with their subscription pricing. I can literally see the comparison between what my colleagues' Claude usage costs with an API key vs with a personal subscription, and the delta is just massive.

I wonder how many people have a subscription and don’t fully utilize it. That’s free money for them, too.

  • The trick is that the jump from the Pro to the Max subscription goes from $20 to $100. Pro is not enough for me, Max is too much. $60 would be ideal, but currently, at $100, it's worth the cost.

    But this is how every subscription works. Most people lose money on their gym subscription, but the convenience wins us over.

    • What can bite them in this case, though, is alternative providers at the same price point that can bridge the gap. E.g. you currently get a lot more bang for your buck with the $20 OpenAI Codex subscription than with the $20 Claude Code subscription.

Of course they bundle R&D with inference pricing; how else could they recoup that investment?

The interesting question is: in what scenario do you see any of the players being able to stop spending ungodly amounts on R&D and hardware without losing out to the competition?

  • In the scenario where that market collapses, i.e. when we stop making significant gains with new models. It might be a while, though, who knows.

> They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.

You've described every R&D company ever.

"Synthesizing drugs is cheap - just a few dollars per million pills. They're trying to bundle pharmaceutical research costs... etc."

There are plenty of legit criticisms of this business model and of Anthropic, but pointing out that R&D companies sink money into research and then charge more than marginal cost for the final product isn't one of them.

  • I’m not saying charging above marginal cost to fund R&D is weird. That’s how every R&D company works.

    My point was simpler: they’re almost certainly not losing money on subscriptions because of inference. Inference is relatively cheap. And of course the big cost is training and ongoing R&D.

    The real issue is the market they’re in. They’re competing with companies like Kimi and DeepSeek that also spend heavily on R&D but release strong models openly. That means anyone can run inference and customers can use it without paying for bundled research costs.

    Training frontier models takes months, costs billions, and the model is outdated in six months. I just don’t see how a closed, subscription-only model reliably covers that in the long run, especially if you’re tightening ecosystem access at the same time.

    • Yes, and my point is that thinking the cost of subscriptions is only inference, and not the research, is mistaken.

      They can totally lose money on subscriptions despite the costs of inference, because research costs have to be counted too.


Didn't OpenAI spend like 10 billion on inference in 2025? Which is around the same as their total revenue?

Why do people keep saying inference is cheap if they're losing so much money from it?

  • When you have 800–900 million active users, no matter how cheap it is, your costs will be in the billions.

    • They spent about $10B on inference and had about $10B in revenue in 2025. The user count and the number of zeroes are not what matters; what matters is the ratio of those numbers. They apparently are not even profitable on inference, which is the cheap part of the whole business.

      And the cost of inference roughly tripled, from $3B in 2024 to $10B in 2025, so the cost of revenue grows linearly with the number of users, i.e. it does not get cheaper (quick ratios below).

      https://www.wheresyoured.at/oai_docs/
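      For what it's worth, the ratios implied by those (rounded) figures from the linked write-up:

      ```python
      # Ratios from the rounded figures cited above (see the linked write-up).
      inference_2024 = 3e9    # USD spent on inference in 2024
      inference_2025 = 10e9   # USD spent on inference in 2025
      revenue_2025 = 10e9     # USD revenue in 2025

      print(f"Inference cost / revenue (2025): {inference_2025 / revenue_2025:.1f}")    # ~1.0
      print(f"Inference growth, 2024 -> 2025: {inference_2025 / inference_2024:.1f}x")  # ~3.3x
      ```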

What walled garden man? There’s like four major API providers for Anthropic.

  • For example, OpenAI’s agent (Codex) is open source, and you can use any harness you want with your OpenAI subscription. Anthropic keeps its tooling closed source and forbids using third-party tooling with a Claude subscription.

"They're not losing money on subscriptions, it's just their revenue is smaller than their costs". Weird take.

  • It means the marginal cost to sell another subscription is lower than what they sell it for. I don't know if that's true, but it seems plausible.