Comment by ACCount37
17 hours ago
LLMs get results, yes. They are getting adopted, and they are making money.
Frontier models are all profitable. Inference is sold at a damn good margin, and the amount of inference AI companies sell keeps rising. That necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, which drives even more spending.
A mistake I see people make over and over again is keeping track of the spending but overlooking the revenue altogether. Which sure is weird: you don't get from $0B in revenue to $12B in revenue in a few years by not having a product anyone wants to buy.
And I find all the talk of "non-deterministic hallucinatory nature" overblown. Humans suffer from all of that too, just less severely, plus a number of other issues that current AIs don't suffer from.
Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.
> Frontier models are all profitable.
This is an extraordinary claim and needs extraordinary proof.
LLMs are raising lots of investor money, but that's a completely different thing from being profitable.
You don't even need insider info: the claim lines up with external estimates.
We have estimates ranging from 30% to 70% gross margin on API LLM inference pricing at the major labs, with 50% as the middle road, and 10% to 80% gross margin on user-facing subscription services, with massively inflated error bars. We also have plenty of reports that inference compute has come to outmatch training-run compute for frontier models by a factor of 10x or more over the lifetime of a model.
The only real source of uncertainty is how much inference the free-tier users consume. And that's something the AI companies themselves control: they decide which models are available to free users, and what the exact usage caps are.
Adding that up? Frontier models are profitable.
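To make that concrete, here's a crude back-of-envelope sketch. Every absolute number in it is a made-up placeholder; the only inputs taken from the estimates above are the ~50% mid-range gross margin and the ~10x inference-to-training compute ratio, and the free-tier share is a pure assumption:

    # Crude lifetime economics of one frontier model.
    # Absolute figures are arbitrary units; only the ratios come from the
    # estimates above (50% gross margin, inference ~10x training compute).

    training_cost  = 1.0                      # one-time training run
    inference_cost = 10.0 * training_cost     # lifetime inference compute, ~10x training

    gross_margin    = 0.50                    # mid-range estimate for API inference
    free_tier_share = 0.20                    # assumed share of inference burned on free users

    paid_inference_cost = inference_cost * (1 - free_tier_share)
    revenue = paid_inference_cost / (1 - gross_margin)   # cost marked up to the target margin

    total_cost = training_cost + inference_cost
    print(f"revenue {revenue:.1f} vs total cost {total_cost:.1f} "
          f"-> lifetime profit {revenue - total_cost:+.1f}")
    # With these inputs: revenue 16.0 vs cost 11.0, so the model is lifetime-profitable.
    # The conclusion only flips if the margin or the free-tier share is much worse.

The point isn't the exact numbers; it's that with the margin and compute-ratio estimates above, the free-tier share has to get very large before the lifetime picture goes negative.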
This goes against the popular opinion, which is where the disbelief is coming from.
Note that I'm talking about LLMs here, rather than things like image or video generation models, which may have vastly different economics.
What about training?
Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue over its lifetime than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.
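A toy illustration of that accounting, with invented numbers, just to show how every individual model can be profitable while the company still posts an annual loss:

    # Toy cohort accounting: each model earns back more than it cost over its
    # lifetime, yet the company loses money in a given year because it is
    # simultaneously paying to train a bigger successor. All numbers invented.

    models = [
        # (name, training cost, lifetime revenue, lifetime inference cost)
        ("model-1", 100, 300, 120),
        ("model-2", 300, 900, 360),
        ("model-3", 900, 2700, 1080),
    ]

    for name, train, revenue, inference in models:
        print(f"{name}: lifetime profit {revenue - (train + inference):+d}")
    # model-1: +80, model-2: +240, model-3: +720 -- every generation is profitable.

    # Company view for the year model-2 is being trained: model-1's revenue
    # comes in, but model-2's training bill lands now and earns nothing yet.
    year_revenue = 300          # model-1's lifetime revenue, compressed into this year
    year_costs   = 120 + 300    # model-1's inference + model-2's training
    print(f"company result that year: {year_revenue - year_costs:+d}")   # -120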
Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But you do need to believe that either (a) what he is saying is roughly true, or (b) he is making the sort of fraudulent statements that could get him sent to prison.
[1] https://www.youtube.com/watch?v=GcqQ1ebBqkc&t=1014s
He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.
> Frontier models are all profitable.
They generate revenue, but most companies are in the hole for the research capital outlay.
If open source models from China become popular, then the only thing that matters is distribution / moat.
Can these companies build distribution advantage and moats?
In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure. I won't even get into the point of comparing LLMs to humans, because I choose not to engage with anyone who lacks the human decency, humanistic compass, or basic philosophical understanding to see that putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.
You should go and work in a call center for a year, on the first line.
Then come back and tell me how replacing human labor with AI is "deranged and morally bankrupt".
Red herring. Just because some jobs are bad (and maybe shouldn't exist in that form in the first place) doesn't make this movement humanistic.