Comment by lemoncookiechip
12 hours ago
This makes no sense when you zoom out. None of these companies, be it Anthropic, OpenAI, xAI, Google, Meta, or Microsoft, is profitable in the AI department; they're all bleeding money and running on funds from their parent companies and/or investors, primarily investors. The Chinese models are keeping up with them while being offered for free, able to run on consumer-grade hardware, and, more importantly, cheap to train. AI models are an extremely volatile product that can be outdated in a matter of weeks, meaning you have to keep dumping resources into developing better models, with no end-goal besides infinite scaling. Let's look at how users behave in the real world: "I don't use Gemini because it's worse than Claude at XYZ." That's it. Now Gemini has a worse model and people are going to Anthropic... what happens when Anthropic's model is arguably worse than everyone else's? What does it matter if they can commercialize if their product is objectively worse?
I understand that America dominates in distribution, integration, enterprise contracts, ecosystems, infra... The article isn't wrong; it's just that this dominance is fragile and requires constant upgrading.
But what is the point of that if you have to scale infinitely because the opposition is right behind you at all times, ready to usurp you? You CANNOT scale infinitely. The VC money will run out at some point, and then everyone will have to downscale everything to meet the real costs associated with SOTA models; they'll have to rely on subscriptions and other monetization to cover those insane costs. We just saw Sora shut down because it was bleeding money far too fast, while the Chinese released video models that far surpassed it back to back to back...
EDIT: Hell, one of the most critical aspects is integration of the models into other products, and even on this front open-source is keeping up with these big companies (and will eventually outpace them when the VC money dries up).
> None of these companies, be it Anthropic, OpenAI, xAI, Google, Meta, Microsoft, are profitable in the AI department,
Citation needed.
All reporting is that they are profitable on the inference side, and all the VC money is going to building more data centers to run more inference. (Note that the coding subscription plans are probably only break-even on average; the money is in the API.)
> The Chinese models are keeping up with them, while offering the models for free and able to run on consumer grade hardware, and more importantly they train them for cheap.
No one is running DeepSeek v4 (a 1.6T-parameter model) on consumer hardware.
They aren't much cheaper to train than the US models. Training is subsidized by the big Chinese tech companies. They are slightly cheaper because they are smaller (and weaker) than the 5T- and 10T-parameter models the US frontier labs are training, and the US labs are paying for a more diverse set of RL data (which shows up in broader benchmark performance).
> we just saw SORA shut down because it was bleeding money far too fast while the Chinese released video models that far surpassed it back to back to back...
Ironically this proves the point.
OpenAI didn't shut down Sora, just the subscription version and the weird social-network thing. You can still access it via the API.
The Chinese video models are API models too, and probably just as profitable for them as the LLMs are for the US frontier labs.
[1] has prices for video models. There is a big range, but Google's Veo model and OpenAI's Sora are around the same price as the Chinese models.
[1] https://openrouter.ai/models?output_modalities=video
What does "profitable on inference" mean? As far as I can tell, none of these companies has rigidly defined it, let alone reported it as a GAAP number. And yeah, if you subtract out all your R&D, payroll, sales, marketing, and other overhead, and get someone else to take on the debt or dig into their free cash flow to build the hugely expensive infrastructure you depend on, it'd be pretty hard not to be "profitable". It's almost humorous how dumb a metric "profitable on inference" is.
Ask yourself: if AI were so profitable, why don't any of the big hyperscalers break out AI revenue in their earnings? OpenAI and Anthropic both project huge losses for the next couple of years; it's not hard to find.
The real problem, as the GP comment pointed out, is that they can never stop training. As long as they're committed to building these behemoth models, the second they stop training someone else will catch up, and everybody will switch over because it's trivial to do so.
> OpenAI and Anthropic both project huge losses for the next couple years, it's not hard to find.
No. Anthropic, at least, expects positive gross margins this year:
> Anthropic expects its gross profit margin, which measures how much revenue it makes compared to the cost of producing that revenue—largely from running servers—to swing from negative 94% last year to as much as 50% this year and 77% in 2028.
https://archive.is/GdLGD
> And yeah, if you subtract out all your R&D, payroll, sales, marketing, and other overhead, and get someone else to take on the debt or dig into their free cash flow to build the hugely expensive infrastructure on which you depend, it'd be pretty hard to not be "profitable".
I think excluding capital expense on infrastructure isn't unreasonable and is done in most industries.
It's worth noting that AI infrastructure has turned out to be an unbelievably good investment. Inference on a four-year-old H100 costs more now than it did when the chip was brand new! That makes the hyperscalers' depreciation schedules look very (and unexpectedly!) conservative.
If the Chinese models couldn't distill from the larger models, they'd be at GPT-2 or GPT-3 levels.
Even if that is true, it doesn't change the reality that they can compete. And if we start going down that route, American models wouldn't have any quality data to train on if they respected copyright themselves. Their whole product was built on the work of others, on our work, our art... without compensation, without acknowledgement.
Literally not a single one of these AI companies, regardless of where they are in the world, has any right to complain about someone copying their work.
To quote Elon Musk in court:
> OpenAI’s counsel asked Musk whether xAI has ever “distilled” technology from OpenAI.
> Musk: “Generally AI companies distill other AI companies.”
> “Is that a yes?” Savitt asked.
> Musk: “Partly.”
From https://www.interconnects.ai/p/the-distillation-panic which is worth reading in full.
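For anyone unfamiliar with the term: distillation just means training a smaller "student" model on a bigger "teacher" model's outputs instead of (or in addition to) the original training data. A toy sketch in Python; everything here is illustrative and made up for the example, not any lab's actual pipeline:

```python
# Toy "distillation": a small student model is fit to a larger
# teacher model's outputs, never seeing the teacher's weights or
# the data the teacher was originally trained on.
import math
import random

random.seed(0)

def teacher(x):
    # Stand-in for an expensive frontier model: some smooth
    # nonlinear function the student wants to imitate.
    return math.tanh(2.0 * x) + 0.1 * x

# Step 1: query the teacher to build a synthetic training set.
xs = [random.uniform(-1, 1) for _ in range(1000)]
ys = [teacher(x) for x in xs]

# Step 2: fit the student, here a 1-parameter linear model
# y = w * x, by least squares on the teacher's outputs
# (closed form: w = sum(x*y) / sum(x*x)).
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Step 3: measure how closely the student mimics the teacher.
mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"student slope: {w:.3f}, mse vs teacher: {mse:.4f}")
```

The point of the toy: the "student" gets surprisingly close to the "teacher" purely from query access, which is why API access alone is enough to make distillation possible.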