
Comment by TrackerFF

7 months ago

I don't think AI will be a winner-take-all scenario. If that is to happen, I think the following assumptions must hold:

1) The winner immediately becomes a monopoly

2) All investment is redirected from competitors to the winner

3) Research on AGI/ASI ceases

I don't see how any of these would be viable. Right now there's an incremental model arms race, with no companies holding a secret sauce so powerful that they're miles above the rest.

I think it will continue like it does today. Some company will break through with some sort of AGI model, and the competitors will follow. Then open source models will be released. Same with ASI.

The things that will be important and guarded are: data and compute.

Yeah, this is why I said "(most)". But regardless, I think it's pretty uncontroversial that not all companies currently pursuing AI will ultimately succeed. Some will give up because they aren't in the top few contenders, who will be the only ones that survive in the long run.

So maybe the issue is more about staying in the top N, and being willing to pay tons to make sure that happens.

  • >I think it's pretty uncontroversial that not all companies currently pursuing AI will ultimately succeed.

    That's probably true, but at the moment the only thing that creates something resembling a moat is the fact that progress is rapid (i.e. the top players are ~6-12 months ahead of the already commoditized options, and that gap in capabilities is quite large): if progress plateaus at all, the barrier to competing with the top dogs is going to drop a lot, and anyone trying to extract value out of their position is going to attract a ton of competition, even from new players.

I agree with this comment.

Maybe it's just me, but I haven't been model-hopping one bit. For my daily chatbot usage, I just don't feel inclined to model-hop to squeeze out some tiny improvement. All the models are way beyond "good enough" at this point, so I just keep using ChatGPT, switching back and forth between o3 and 4o. I would love to hear if others are different.

Maybe others are doing some hyper-advanced stuff where that edge makes a difference, but I just don't buy it.

A good example is search engines. Google is a pseudo-monopoly because Google Search gives obviously better results than Bing or DuckDuckGo. In my experience this just isn't the case for LLMs. It's more nuanced than better or worse. LLMs are more like car models, where everyone makes a personal choice about which they like best.

I agree with you, and think we are in the heady days where moat building hasn't quite begun. Regarding 1) and 3), most models have API access to facilitate quick switching and agentic AI middleware reaps the benefits of new models being better at some specific use-case than a competitor. In the not-so-distant future, I can see the walls coming up, with some version of white-listed user-agent access only. At the moment, model improvement hype and priority access are the product, but at some point capability and general access will be the product.
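The "agentic AI middleware" pattern mentioned above can be sketched as a thin routing layer that dispatches each task to whichever model currently does it best. Everything below is hypothetical for illustration (the backend names, task categories, and `route` helper are made up, not any real vendor's API):

```python
# Hypothetical sketch of agentic middleware that routes each task to
# whichever model currently performs best for that use-case.
# Backend names and the routing table are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str
    call: Callable[[str], str]  # prompt -> completion

# Routing table: updated whenever a new model pulls ahead on a use-case.
ROUTES: dict[str, Backend] = {
    "code": Backend("model-a", lambda p: f"[model-a] {p}"),
    "summarize": Backend("model-b", lambda p: f"[model-b] {p}"),
}
DEFAULT = Backend("model-a", lambda p: f"[model-a] {p}")

def route(task: str, prompt: str) -> str:
    """Dispatch the prompt to the backend registered for this task."""
    backend = ROUTES.get(task, DEFAULT)
    return backend.call(prompt)

print(route("code", "write a sort"))        # handled by model-a
print(route("summarize", "condense this"))  # handled by model-b
```

Because the routing table is the only vendor-specific piece, swapping the leader for a given use-case is a one-line change, which is exactly why "walls coming up" (restricted API access) would hurt this kind of middleware.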

We are already seeing diminishing returns from compute and training costs going up, but as more and more AI is used in the wild and pollutes training data, having validated data becomes the moat.

> Right now there's an incremental model arms race

Yes, but just like in an actual arms race, we don't know whether this could evolve into a winner-takes-all scenario very quickly, and quite literally.

  • > just like in an actual arms race

    In an actual arms race you use your arms to actually physically wipe out your enemy.

    It's not just like an arms race.

The problem is that models depreciate at incredible speed, and being the leader today is limited guarantee you’ll still be the leader tomorrow.

OpenAI has a limited protective moat because ChatGPT is synonymous with generative AI at the moment, but that association isn’t any more baked in than MySpace’s was (certainly not in the league of Twitter or Facebook).

> I don't think AI will be a winner-take-all scenario.

AI? Do you mean LLMs, GPTs, both, or other?

Why won't AI follow the technology life cycle?

It'll always be stuck in the R&D phase, never reach maturity?

It's on a different life cycle?

Once AI matures, something prevents consolidation? (e.g. every nation protects its champions)