Comment by this_user
2 days ago
IMO the key problem that OpenAI have is that they are all-in on AGI. Unlike a Google, they don't have anything else of any value. If AGI is not possible, or is at least not in reach within the next decade or so, OpenAI will have a product in the form of AI models that have basically zero moat. They will be Netscape in a world where Microsoft is giving away Internet Explorer for free.
Meanwhile, Google would be perfectly fine. They can just integrate whatever improvements the actually existing AI models offer into their other products.
I've thought about this too, and what's more, Google's platform gives them training data from YouTube, direct backend access to the Google Search index for grounding (an engine they've honed for decades), training data from their smartphones, smart home devices and TVs, Google Cloud... And as you say, it also works in reverse: that same AI empowers their services, too.
They can also run AI as a loss leader like with Antigravity.
Meanwhile, OpenAI looks like they're fumbling: that immediately controversial announcement about allowing NSFW content after adult verification, and that strange AI social network, which mostly produced Sora memes that spread outside of it.
I think they're going to need to do better. As for coding tools, Anthropic is an ever-stronger contender there, as if OpenAI weren't already under pressure from Google.
> they are all-in on AGI
What are you basing this on? None of their investor-oriented marketing says this.
https://openai.com/charter/
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
Note that it doesn't say: "Our mission is to maximize shareholder value, and we develop AI systems to do that".
In fairness, no company’s mission statement says “maximize shareholder value” because it doesn’t need to be said - it’s implicit. But I agree that AGI is at the forefront of OpenAI’s mission in a way it isn’t for Google - the nonprofit roots are not gone.
If your mission is to build AGI, and building and deploying it will take many years, an appropriate strategy to accomplish that goal is to find other revenue streams that will make the long haul possible.
The opening lines of their mission statement are direct about this:
"OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity."
and
"We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."
https://openai.com/about/
I don't know what the moneyed insiders think OpenAI is about, but Sam Altman's public-facing thoughts (which I consider to be marketing) are definitely oriented toward making it look like they are all-in on AGI:
See:
(1) https://blog.samaltman.com/the-gentle-singularity (June, 2025) - "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be."
- " It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."
(2) https://blog.samaltman.com/three-observations (Feb, 2025) - "Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity."
- "In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."
(3) https://blog.samaltman.com/reflections (Jan, 2025) - "We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history"
- "We are now confident we know how to build AGI as we have traditionally understood it."
(4) https://ia.samaltman.com/ (Sep, 2024) - "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
(5) https://blog.samaltman.com/the-merge (Dec, 2017) - "A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075."
(I omitted about as many essays. The hype is strong in this one.)
"don't have anything else of any value. " ?
OpenAI is still de facto the market leader in terms of selling tokens.
"zero moat" - it's a big enough moat that only maybe four companies in the world have that level of capability, they have the strongest global brand awareness and direct user base, they have some tooling and integrations which are relatively unique etc..
'Cloud' is a bigger business than AI, at least today, and what is the 'AWS moat'? When AWS started out, they had zero reach into the enterprise, while Google and Microsoft had effectively infinite capital and integration with business, and they still lost.
There's a lot of talk of this tech as though it's a commodity, it really isn't.
The evidence is in the context of the article: this is an extraordinarily expensive market to compete in. Their lack of deep pockets may be the problem, more so than anything else.
This should be an existential concern for the AI market as a whole, much like oil companies before the highway buildout, when they were the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?
Even more than chips, the costs are energy and other inputs, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs, and some right at the frontier, well, that would be bad in the geopolitical sense.
The AWS moat is a web of bespoke product lock-in and exorbitant egress fees. Switching cloud providers can be a huge hassle if you didn't architect your whole system to be as vendor-agnostic as possible.
If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.
That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the exact same product (albeit at a slightly lower level of quality) and switching is effortless.
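To make "effortless" concrete, here's a minimal sketch (Python) of what swapping vendors can look like when both expose OpenAI-compatible chat endpoints. The Gemini base URL and both model names are illustrative assumptions, not verified configuration:

    # Minimal sketch: swapping "frontier" vendors behind an
    # OpenAI-compatible API is roughly a one-line config change.
    # The base_url values and model names are assumptions for
    # illustration - check each provider's current docs.
    from openai import OpenAI

    PROVIDERS = {
        "openai": ("https://api.openai.com/v1", "gpt-4o-mini"),
        # Gemini's OpenAI-compatibility endpoint (assumed)
        "google": ("https://generativelanguage.googleapis.com/v1beta/openai/",
                   "gemini-2.0-flash"),
    }

    def ask(provider: str, api_key: str, prompt: str) -> str:
        base_url, model = PROVIDERS[provider]
        client = OpenAI(api_key=api_key, base_url=base_url)
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Same call, different vendor:
    # ask("openai", OPENAI_KEY, "hello")
    # ask("google", GEMINI_KEY, "hello")

When the wire format is identical, the only "moat" left is whichever quality gap exists that week.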
There are quite large 'switching costs' in moving a solution that depends on one model and ecosystem to another.
Models have to significantly outperform on some metric in order to even justify looking at them.
Even for smaller 'entrenchments' like individual developers - Gemini 3 had our attention for all of 7 days; now that Opus 4.5 is out, none of my colleagues are talking about G3 anymore. I mean, it's a great model, but not 'good enough' yet.
I use that as an example to illustrate broader dynamics.
OpenAI, Anthropic and Google are the primary participants here, with Grok possibly playing a role, and of course all of the Chinese models being an unknown quantity because they're exceptional in different ways.
Selling tokens at a massive loss, burning billions a quarter, isn't the win you think it is. They don't have a moat because they literally just lost the lead. You can only have a moat when you are the dominant market leader, which they never were in the first place.
All indications are that selling tokens is a profitable activity for all of the AI companies - at least in terms of compute.
OpenAI loses money on free users and on the absurdly high salaries they've chosen to offer.
Gemini does not have 'the lead' in anything but a benchmark.
The most applicable benchmarks right now are in software, and devs will not switch from Claude Code or Codex to Antigravity; it's not even a complete product.
This again highlights quite well the arbitrary nature of supposed 'leads' and what that actually means in terms of product penetration.
And it's not easy to 'copy' these models or integrations.
I think you're measuring the moat around developing the first LLMs, but the moat to care about is what it'll take to clone the final profit-generating product. Sometimes the OG tech leader is also the long-term winner; many times they are not. Until you know what the actual giant profit generator is (e.g. for Google it was ads), it's not really possible to say how much of a moat will be kept around it. Right now, the giant profit generator does not seem to be token generation itself - that is coming at a massive loss.
I mean, on your cloud point, I think AWS's moat might arguably be a set of deep integrations between services, and friendly APIs that allow developers to quickly integrate and iterate.
If AWS were still just EC2 and S3, then I would argue they had very little moat indeed.
Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.
Training AI is absolutely a task that needs deep pockets, and heavy scale. If we settle into a world where improvements are iterative, the tooling is largely interoperable... Then OpenAI are going to have to start finding ways of making money that are not providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations, and an ecosystem that makes moving away hard, as it arguably is with the cloud.
The EC2 and S3 moat comes from extreme economies of scale. Only Google and Microsoft can compete. You would never be able to achieve S3 profitability because you are not going to get the same hardware deals, the same peering agreements, the same data center optimization advantages. On top of that there is an extremely optimized software stack (S3 runs at ~98% utilization, with capacity deployed just a couple weeks in advance, i.e. if they don't install new storage, they will run out of capacity in a month).
> IMO the key problem that OpenAI have is that they are all-in on AGI
I think this needs to be said again.
Also, not only do we not know if AGI is possible; generally speaking, it wouldn't bring much value even if it is.
At that point we're talking about up-ending 10,000 years of human society and economics, assuming that the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.
If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.
The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.
Why would AGI not be possible?
It might be hard, it might take a long time, but it is definitely possible. We humans are the evidence for that.
Theoretically possible doesn't mean we're capable of doing it. Like, it's one thing to say "I'm gonna boil the ocean" and another thing for you personally to succeed at it while standing on a specific beach with the contents of several Home Depots.
Humans tend to vastly underestimate scale and complexity.
Because human brains are giant three-dimensional processors containing billions of neurons (each with computationally complex behaviors), with each one performing computations >3 orders of magnitude more efficiently than transistors do, training an intelligence with trillions of connections in real time, all while attached to incredibly sophisticated sensors and manipulators.
And despite all that, humans are still just made of dirt.
Even if we can get silicon to do some of these tricks, that'd require multiple breakthroughs, and it wouldn't be cost-competitive with humans for quite a while.
I'd even say it's possible that building brain-equivalent structures that consume the same power and can do all the same things with the same amount of resources is such a far-out science fiction proposition that we can't even predict when it will happen. For practical purposes, biological intelligences may hold an insurmountable advantage for even the furthest foreseeable future once you consider the economics of humans vs machines.
That’s rather presupposing materialism (in the philosophy of mind sense) is correct. That seems to be the consensus theory, but it hasn’t been shown ‘definitely’ true.
So, you're a business owner and you've decided we need AGI because you're fine. You'll have no one to blame when the Revolution comes.
You clearly do not understand AGI. It's a gamble that is most easily explained as creating a god. That thing won't hate us. We create its oxygen - data. If anything, it would empower us to make more of it.
It can make its own data. It's god-like, after all.
The moat for any frontier LLM developer will be access to proprietary training data. OpenAI is spending some of their cash to license exclusive rights to third party data, and also hiring human experts in certain fields just to create more internal training data. Of course their competitors are also doing the same. We may end up in a situation where each LLM ends up superior in some domains and inferior in others depending on access to high quality training data.
"Needs cash" is not a moat.
Not only this, but there is a compounded bet that it'll be OpenAI that cracks AGI and not another lab, particularly Google, from which LLMs came in the first place. What makes OpenAI researchers so special at this point?
What's more -- how long can they keep the lid on AGI? If anyone actually cracks it... surely competitors are only a couple months behind. At least that seems to be the case with every new model thus far.
Also, they'll end up with garbage, because the curve is sigmoidal and not anything else. Regardless of the moat, the models won't be powerful enough to do a significant amount of work.
> They can just integrate whatever improvements the actually existing AI models offer into their other products.
If this is what users actually want.
Yes, as is implied by the word "improvements"
Which, as practice shows, tend to be understood differently by customers and PMs.
This is how I look at Meta as well. Despite how much it’s hated on here, FB/IG/WhatsApp aren’t dying.
AI not getting much better from here is probably even in their best interest.
It’s just good enough to create the slop their users love to post and engage with. The tools for advertisers are pretty good and just need better products built around current models.
And without new training costs, “everyone” says inference is profitable now, so they can keep all the slopgen tools around for users after the bubble.
Right now the media is riding the wave of TPUs that, for some reason, they didn’t know existed last week. But Google and Meta have the most to gain from AI not making any more massive leaps toward AGI.
They're both all-in on being a starting point to the Internet. Painting with a broad brush, that used to be Facebook or Google Search. Now it's Facebook, Google Search, and ChatGPT.
There is absolutely a moat. OpenAI is going to have a staggering amount of data on its users. People tell ChatGPT everything and it probably won't be limited to what people directly tell ChatGPT.
I think the future is something like how everyone built their website with Google Analytics. Everyone will use OpenAI because they will have a ton of context on their users that will make your chatbot better. It's a self perpetuating cycle because OpenAI will have the users to refine their product against.
yeah but your argument is true for every llm provider. so i don't see how it's a moat, since everyone who can raise money to offer an llm can do the same thing. and google and microsoft don't need to find llm revenue; they can always offer it at a loss if they choose, unless their other revenue streams suddenly evaporate. and tbh i kind of doubt personalization is as deep a moat as you think it is.
Everyone could raise and build a search engine or social network. Many did and none of them dethroned Google or Facebook.