
Comment by sholain

2 days ago

"don't have anything else of any value. " ?

OpenAI is still de facto the market leader in terms of selling tokens.

"zero moat" - it's a big enough moat that only maybe four companies in the world have that level of capability, they have the strongest global brand awareness and direct user base, they have some tooling and integrations which are relatively unique etc..

'Cloud' is a bigger business than AI, at least today, and what is the 'AWS moat'? When AWS started out, they had zero reach into the enterprise, while Google and Microsoft had effectively infinite capital and deep integration with business, and they still lost.

There's a lot of talk about this tech as though it's a commodity; it really isn't.

The evidence is right there in the context of the article: this is an extraordinarily expensive market to compete in. Their lack of deep pockets may be the problem, more so than anything else.

This should be an existential concern for the AI market as a whole, much like oil companies before the highway buildout being the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?

Even more than chips, the costs are energy and other inputs, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs and some right at the frontier, well, that would be bad in the geopolitical sense.

The AWS moat is a web of bespoke product lock-in and exorbitant egress fees. Switching cloud providers can be a huge hassle if you didn't architect your whole system to be as vendor-agnostic as possible.

If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.

That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the same exact product (albeit at a slightly lower level of quality) and switching is effortless.
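
To make the "easy to swap out a model" point concrete: when two vendors expose OpenAI-compatible chat endpoints, switching is essentially a config change. Here's a minimal sketch assuming the openai Python client; the base URLs, model names, and env-var names are illustrative placeholders, not endorsements, and would need checking against each provider's docs.

    # Sketch: "switching models" means picking a different entry in a config dict,
    # assuming both providers speak the OpenAI-compatible chat API.
    import os
    from openai import OpenAI

    PROVIDERS = {
        "openai": {
            "base_url": "https://api.openai.com/v1",
            "model": "gpt-4o",                       # placeholder model name
            "key_env": "OPENAI_API_KEY",
        },
        "google": {
            # Assumed OpenAI-compatibility endpoint; verify against Google's docs.
            "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
            "model": "gemini-2.0-flash",             # placeholder model name
            "key_env": "GEMINI_API_KEY",
        },
    }

    def ask(provider: str, prompt: str) -> str:
        cfg = PROVIDERS[provider]
        client = OpenAI(base_url=cfg["base_url"], api_key=os.environ[cfg["key_env"]])
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Swapping vendors is a one-word change at the call site:
    print(ask("openai", "Summarise this bug report ..."))
    print(ask("google", "Summarise this bug report ..."))

That's roughly the whole "migration". Compare that with untangling an app from something like DynamoDB or IAM.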

  • There are quite large 'switching costs' in moving a solution that's dependent on one model and ecosystem to another.

    A model has to significantly outperform on some metric in order to even justify looking at it.

    Even for smaller 'entrenchments' like individual developers - Gemini 3 had our attention for all of 7 days; now that Opus 4.5 is out, well, none of my colleagues are talking about G3 anymore. I mean, it's a great model, but not 'good enough' yet.

    I use that as an example to illustrate broader dynamics.

    OpenAI, Anthropic and Google are the primary participants here, with Grok possibly playing a role, and of course all of the Chinese models being an unknown quantity because they're exceptional in different ways.

    • Switching a complex cloud deployment from AWS to GCP might take a dedicated team of engineers several months. Switching between models can be done by a single person in an afternoon (often just 5 minutes). That's what we're talking about.

      That means that none of these products can ever have a high profit margin. They have to keep margins razor thin at best (deeply negative at present) to stay relevant. In order to achieve the kinds of margins that real moats provide, these labs need major research breakthroughs. And we haven't had any of those since Attention is All You Need.


    • Aren't you contradicting yourself? To even be considering all the various models, the switching cost can't be that large.

      I think the issue here isn't really that it's "hard to switch"; it's that it's easier still to wait one more week to see what your current provider is cooking up.

      But if any of them start lagging for a few months I'm sure a lot of folks will jump ship.

Selling tokens at a massive loss, burning billions a quarter, isn't the win you think it is. They don't have a moat because they literally just lost the lead; you can only have a moat when you are the dominant market leader, which they never were in the first place.

  • All indications are that selling tokens is a profitable activity for all of the AI companies - at least in terms of compute.

    OpenAI loses money on free users and paying the absurdly high salaries that they've chosen to offer.

  • Gemini does not have 'the lead' in anything but a benchmark.

    The most applicable benchmarks right now are in software, and devs will not switch from Claude Code or Codex to Antigravity; it's not even a complete product.

    This again highlights quite well the arbitrary nature of supposed 'leads' and what that actually means in terms of product penetration.

    And it's not easy to 'copy' these models or integrations.

    • Gemini CLI existed long before Antigravity. It took Google very little effort.

      And the Gemini app will come preloaded on any Android phone; who else can say the same?


    • Speak for yourself - I cancelled the Claude Code subscription after testing Antigravity.

      It works quite well here, and my phone came with a year of free Gemini Pro, so I don't currently see a reason to pay extra.

I think you're measuring the moat around developing the first LLMs, but the moat to care about is what it'll take to clone the final profit-generating product. Sometimes the OG tech leader is also the long-term winner; many times they are not. Until you know what the actual giant profit generator is (e.g. for Google it was ads), it's not really possible to say how much of a moat will be kept around it. Right now, the giant profit generator doesn't seem to be the number of tokens generated itself - that is coming at a massive loss.

I mean, on your 'Cloud' point, I think AWS's moat might arguably be a set of deep integrations between services and friendly APIs that allow developers to quickly integrate and iterate.

If AWS was still just EC2 and S3, then I would argue they had very little moat indeed.

Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.

Training AI is absolutely a task that needs deep pockets and heavy scale. But if we settle into a world where improvements are iterative and the tooling is largely interoperable, then OpenAI are going to have to start finding ways of making money beyond providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations and an ecosystem that makes moving away hard, as it arguably is with the cloud.

  • The EC2 and S3 moat comes from extreme economies of scale. Only Google and Microsoft can compete. You would never be able to achieve S3's profitability because you are not going to get the same hardware deals, the same peering agreements, or the same data-center optimization advantages. On top of that there is an extremely optimized software stack (S3 runs at ~98% utilization, with capacity deployed just a couple of weeks in advance, i.e. if they don't install new storage, they will run out of capacity within a month).

    • I wouldn't call it a moat. A moat is more about switching costs than quality differentiation. You have a moat when your customers don't want to switch to a competitor despite that competitor having a superior product at a better price.