Comment by etaioinshrdlu
6 months ago
This is wrong: LLMs have been cheap enough to run profitably on ads alone (search-style or banner-ad-style) for over two years now, and they keep getting cheaper for the same quality.
It is even cheaper to serve an LLM answer than to call a web search API!
Zero chance all the users evaporate unless something much better comes along, or the tech is banned, etc...
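To make "profitable on ads alone" concrete, here's a back-of-envelope sketch in Python. Every number in it is an illustrative assumption (token prices, answer length, ad RPM), not a real rate card:

    # Back-of-envelope: ad revenue vs. LLM serving cost per answer.
    # All numbers are illustrative assumptions, not quoted prices.
    INPUT_PRICE_PER_M = 0.15          # assumed $/1M input tokens (small-model tier)
    OUTPUT_PRICE_PER_M = 0.60         # assumed $/1M output tokens
    TOKENS_IN, TOKENS_OUT = 200, 500  # assumed typical Q&A exchange

    cost_per_answer = (TOKENS_IN * INPUT_PRICE_PER_M
                       + TOKENS_OUT * OUTPUT_PRICE_PER_M) / 1_000_000

    AD_RPM = 2.00                     # assumed ad revenue per 1,000 answer views
    revenue_per_answer = AD_RPM / 1000

    print(f"cost/answer:    ${cost_per_answer:.6f}")     # ~$0.000330
    print(f"revenue/answer: ${revenue_per_answer:.6f}")  # $0.002000
    print(f"margin:         {revenue_per_answer / cost_per_answer:.1f}x")

With those assumptions each answer earns roughly 6x what it costs to serve, and the conclusion survives even if the assumed prices are off by a factor of a few.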
> LLMs are cheap enough to run profitably on ads alone
> It is even cheaper to serve an LLM answer than call a web search API
These, uhhhh, these are some rather extraordinary claims. Got some extraordinary evidence to go along with them?
I've operated a top ~20 LLM service for over two years, very comfortably profitable on ads. As for the raw costs: you can measure the price of an LLM answer from, say, OpenAI, and the equivalent search query from Bing/Google/Exa will cost over 10x more...
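To spell out the arithmetic behind the 10x claim (a minimal sketch; the per-token and per-query prices below are my assumptions, not anyone's published rate card):

    # Per-request cost: LLM answer vs. web search API call.
    # Prices are illustrative assumptions; check current rate cards.
    llm_cost = (200 * 0.15 + 500 * 0.60) / 1_000_000  # assumed token prices
    search_cost = 10.00 / 1000                        # assumed $/1k API queries

    print(f"LLM answer:   ${llm_cost:.5f}")     # ~$0.00033
    print(f"search query: ${search_cost:.5f}")  # $0.01000
    print(f"ratio:        {search_cost / llm_cost:.0f}x")  # ~30x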
So you don't have any real info on the costs. The question is what OpenAI's profit margin is here, not yours. The theory is that these costs are subsidized by a flow of money from VCs and big tech as they race.
How cheap is inference, really? What about 'thinking' inference? What are the prices going to be once growth starts to slow and investors start demanding returns on their billions?
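For example, hidden "thinking" tokens are billed as output and can multiply per-answer cost; a quick sketch with assumed numbers:

    # How hidden 'thinking' tokens inflate per-answer cost.
    # Token counts and the price are illustrative assumptions.
    PRICE_PER_M_OUT = 0.60   # assumed $/1M output tokens
    visible = 500            # tokens the user actually sees
    thinking = 5_000         # hidden reasoning tokens, also billed

    plain = visible * PRICE_PER_M_OUT / 1_000_000
    reasoning = (visible + thinking) * PRICE_PER_M_OUT / 1_000_000

    print(f"plain answer:     ${plain:.6f}")      # $0.000300
    print(f"reasoning answer: ${reasoning:.6f}")  # $0.003300 (11x)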
Profitably covering R&D or profitably using the subsidized models?
So you're not running an LLM; you're running a service built on top of a subsidized API.
https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch..., also note the "objections" section
Anecdotally, the locally-run AI software I develop has gotten more than 100x faster in the past year thanks to hardware advancements (Moore's law).
What hardware advancement? There's hardly any these days... Especially not for this kind of computing.