Comment by TechDebtDevin
2 months ago
I think they are just getting better at the edges: MCP/tool calls, structured output. This definitely isn't increased intelligence, but it is an increase in the value add. I'm not sure the value added justifies the training costs or company valuations, though.
In all reality, I have zero clue how any of these companies remain sustainable. I've tried to host some inference on cloud GPUs, and it seems it would be extremely cost-prohibitive with any sort of free plan.
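To make the "cost-prohibitive" point concrete, here's a back-of-envelope sketch. All the numbers are illustrative assumptions (a ballpark ~$2/hr GPU rental and ~1,000 tokens/s of batched throughput), not any provider's actual figures:

```python
# Back-of-envelope inference cost, using assumed illustrative numbers.
gpu_cost_per_hour = 2.00     # assumed cloud GPU rental price, USD
tokens_per_second = 1_000    # assumed aggregate batched throughput

tokens_per_hour = tokens_per_second * 3600
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_hour * 1_000_000

# A hypothetical free-tier user consuming ~100k tokens/day:
daily_cost_per_free_user = 100_000 / 1_000_000 * cost_per_million_tokens

print(f"${cost_per_million_tokens:.2f} per 1M tokens")
print(f"${daily_cost_per_free_user:.4f} per free user per day")
```

Per user it looks tiny, but multiply by millions of free-tier users and it's tens of thousands of dollars a day with zero revenue against it.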
> how any of these companies remain sustainable
They don't; they have a big bag of money they are burning through, and they are working to raise more. Anthropic is in a better position because they don't have the majority of the public using their free tier. But, AFAICT, none of the big players are profitable. Some might get there, but likely through verticals rather than just model access.
If your house is on fire, the fact that the villagers are throwing firewood through the windows doesn't really mean the house will stay standing longer.
Doesn’t this mean that realistically even if “the bubble never pops”, at some point money will run dry?
Or do these people just bet on the post money world of AI?
The money won’t run dry. They’ll just stop providing a free plan when the marginal benefits of having one don’t outweigh the costs any more.
They will likely just charge a lot more for these services. E.g., the $200+ per month tier could become more of the entry level in 3-5 years. That said, smaller models are getting very good, so there could be low-margin direct model access alongside expensive verticals, IMO.
https://www.wheresyoured.at/reality-check/
This man (in the article) clearly hates AI. I also think he does not understand business and is not really able to predict the future.
But he did make good points. AI was perceived as more dangerous when only a select few mega-corps (usually backing each other) were pushing its capabilities.
But now, every $50B+ company seems to have its own model. Chinese companies have an edge in local models, and Big Tech seems to be fighting like cats and dogs over a technology that has failed to generate any profit, while the masses drain cash out of the companies with free usage and Ghiblis.
What is the concrete business model here? Someone at Google said "we have no moat", and I guess he was right; this is becoming more and more of a commodity.
If you read any of Ed Zitron's work [1], they likely cannot remain sustainable. With OpenAI failing to convert into a for-profit; Microsoft being more interested in being a multi-model provider and competing openly with OpenAI (e.g., open-sourcing Copilot vs. Windsurf, GitHub's agent shipping with Claude as the default vs. Codex); Google having their own SOTA models and not relying on their stake in Anthropic; tariffs complicating Stargate; the explosion in capital expenditure and compute; etc., I would not be surprised to see OpenAI and Anthropic go under in the next few years.
1: https://www.wheresyoured.at/oai-business/
I see this sentiment everywhere on Hacker News. I think it's generally the result of consuming the laziest journalism out there. But I could be wrong! Are you interested in making a long bet backing your prediction? I'm interested in taking the positive side on this.
While some critical journalism may be simplistic, I would not qualify it as lazy. Much of it is deeply nuanced and detail-oriented. To me, lazy would be publications regurgitating the statements of CEOs and company PR people who have a vested interest in making their product seem appealing. Since most of the hype is based on perceived futures, benchmarks, or the automation of the easier half of code development, I consider the simplistic voices asking "Where is the money?" to be important because most people seem to neglect the fundamental business aspects of this sector.
I am someone who works professionally in ML (though not LLM development itself) and deploys multiple RAG- and MCP-powered LLM apps in side businesses. I code with Copilot, Gemini, and Claude, and I read and listen to most AI-industry output, be it company events, papers, articles, MSM reports, the Dwarkesh podcast, MLST, etc. While I acknowledge some value, having closely followed the field and extensively used LLMs, I find these companies' projections and visions deeply unconvincing and cannot identify the trillion-dollar value.
While I never bet for money and don't think everything has to be transactional or competitive, I would bet on defining terms and recognizing if I'm wrong. What do you mean by taking the positive side? Do you think OpenAI's revenue projections are realistic and will be achieved or surpassed by competing in the open market (i.e., excluding purely political capture)?
Betting on the survival of the legal entity would likely not be the right endpoint because OpenAI could likely be profitable with a small team if it restricted itself to serving only GPT 4.1 mini and did not develop anything new. They could also be acquired by companies with deeper pockets that have alternative revenue streams.
But I am highly convinced that OpenAI will not have revenue of over $100 billion by 2029 while being profitable [1], and I am willing to take my chances.
1: https://www.reuters.com/technology/artificial-intelligence/o...
There's still the question of whether they will try to change the architecture before they die. Using RWKV (or something similar) would drop costs quite a bit, but it would require risky investment. On the other hand, some are already experimenting with text diffusion, so it's slowly happening.
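The cost argument behind RWKV-style architectures can be made concrete: a transformer's attention does work proportional to the context length for every new token, while an RNN-style model pays a roughly constant per-token cost. A toy FLOP-scaling sketch (the functions and numbers are illustrative, not real model configs):

```python
def attention_step_cost(context_len: int, d_model: int = 4096) -> int:
    # One new token attends over the whole context: O(n * d) work.
    return context_len * d_model

def recurrent_step_cost(context_len: int, d_model: int = 4096) -> int:
    # An RNN-style update (as in RWKV) touches only a fixed-size
    # state, independent of how long the context is: O(d) work.
    return d_model

for n in (1_000, 32_000, 128_000):
    ratio = attention_step_cost(n) / recurrent_step_cost(n)
    print(f"context {n:>7}: attention/recurrent per-token ratio = {ratio:,.0f}x")
```

This ignores KV-cache tricks, memory bandwidth, and model-quality differences, but it shows why a linear-cost architecture is tempting at long contexts, and also why switching is a risky bet: the savings only matter if quality holds up.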