Comment by fc417fc802

14 hours ago

In another context I might see it as vendor financing. However, given that Google and Anthropic are competitors in this segment, and given that Google has previously invested in them, I'd rather see this as a sort of bartered stock purchase, presumably for the purpose of hedging against failure. If Anthropic wins the race, it turns out to be winner-takes-all, and you happen to own half of Anthropic, then you still win half of the immediate spoils even though your internal team lost. If you view losing the race as an existential threat, then having all your eggs in one basket is a terrible proposition.

Sure, since Google is both a supplier and a competitor, it’s both vendor finance and hedging. Also, it increases their investment in AI, in general.

Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?

  • Are we stopping too early in this analysis, though?

    Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic also puts MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).

    Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.

    • MS is not so deep with OpenAI; it's not all black and white. They have signed several distribution deals where Claude drives Copilot [1], and since Anthropic and MS are better aligned in the enterprise market, it makes sense. It also makes sense for MS not to lose ground anywhere at this point and to play with the best. Actually, any cash-rich company that is not OpenAI or Anthropic wants to be close by when either of the two needs money. That's the ultimate win they can aspire to right now: get a financial slice of frontier models on one hand while not losing revenue on the other, given the existential ordeal AI represents for them.

      1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...

    • You make some good points, but this part feels like a wild overreach:

          > torpedoing confidence in LLMs with their search AI summaries
      

      That is some real tin foil hat thinking.


  • > Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?

    By the time it is a problem, it will be too late.

How can there be a "winner takes all" situation with AI?

OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini(?) ... and the open-weight models are 2 years behind.

Any win here seems only temporary, even if a new breakthrough to a strong AI somehow happens.

  • Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.

  • Recursive self-improvement is one argument. Otherwise, winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly for the best models. Other providers will obviously have plenty of uses, but even looking at the revenue right now, it's pretty concentrated at the top.

    So if I'm Google I'd want a decent chunk of at least one of them.

  • The first to AGI, or a close approximation, is the winner. That’s what the investors in Anthropic and OpenAI are betting on.

    I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.

    • "The first to AGI, or a close approximation, is the winner. "

      But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?

    • So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.


    • This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.

      It is just cargo cult financing at this point.

  • 2 years? 2 years ago, GPT-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years.

  • I guess if you build the first AI that can autonomously self-improve, then nobody can catch up anymore.

    • This is a common canard. AI already autonomously self-improves. The training pipelines for modern frontier models are filled with AI: AI generates synthetic data, cleans data, judges output quality and feeds back via RL, does hyperparameter tuning, rewrites kernels for speed, and a thousand other things.

      But: no singularity. At least not yet.

      The flaw in this thinking seems to be the idea that AI is a singular thing. You point the model back at its own source code, sit back and watch as it does everything at once. Right now it's more like AI being an army of assistants organized by human researchers. You often need specialized models for this stuff, you can't just use GPT for everything.

    • That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.


    • But what if a second AI that can self-improve comes up?

      Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.

    • If that happens, catching up will be meaningless; everything we know and care about will change. You don’t even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (and so will most others), and everyone will have an incredible AI resource for the price of a hotdog. There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.


I wonder if Google is that much a competitor. Sure, they tried to make an AI of their own.

But they also have access to an unimaginably large data set plus reach into people’s daily lives.

Seems more like partners for world domination.

$40B is not anywhere near half of Anthropic at this point. You do get the same access as Nvidia, AWS, and other investors, which has value.

I look at this as: Google needs a competitor. While Anthropic seems to be the flavor of the quarter, OAI looks like such a dumpster fire right now that it's in Google's best interest to help keep Anthropic moving toward the #2 spot. I say the #2 spot because it doesn't matter how good this week's LLM is: until someone else owns the infra and has an actually profitable business model, they're all just lighting money, and the world around us, on fire.

I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tip its hat toward Anthropic soon, so as to put a little more heat on OAI.