Comment by lukan

12 hours ago

How can there be a "winner takes all" situation with AI?

OpenAI led the game while they were the best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini(?) ... and the open-weight models are 2 years behind.

Any win here seems only temporary, even if a new breakthrough to strong AI happens somehow.

Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.

Recursive self-improvement is one argument. Otherwise, winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly. Outside the best models, other providers will obviously have plenty of uses, but even looking at the revenue right now, it's pretty concentrated at the top.

So if I were Google, I'd want a decent chunk of at least one of them.

  • What is the argument for a duopoly when Kimi and DeepSeek models are only months behind?

    It’s a commodity in the making.

    • The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.

      2 replies →

    • That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the potential outcomes of the space race (i.e. the impacts, limits, and timeline of ASI).

      11 replies →

    • They're months behind now and have very low market share, so as long as they stay months behind, the duopoly/triopoly can hold.

The first to AGI, or a close approximation, is the winner. That’s what the investors in Anthropic and OpenAI are betting on.

I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.

  • "The first to AGI, or a close approximation, is the winner. "

    But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?

  • So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.

    • The bet is that they perfect a new kind of neural network that is roughly as good at "training" as the human mind, in terms of amount learned/experience gained per bit of information input.

      Current LLMs are absolutely, stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth). A rough back-of-the-envelope version of this gap is sketched at the end of this comment.

      That way, instead of running millions of TPUs over petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) far exceed the depth of judgment, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).

      It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the target is a narrow band in an exponentially large range. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete directly against.

      Chess and Go are fine examples here: algorithms spent only a very short period at a level where they could compete reasonably well against human grandmasters. It was decades of falling short, followed quite suddenly by leaving humans completely in the dust, with no delusions of ever catching up.

      That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
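
      As a back-of-the-envelope illustration of that gap, in Python, with both numbers being loose order-of-magnitude estimates I'm assuming rather than measured figures:

        # Back-of-the-envelope comparison of training-data efficiency.
        # Both figures are assumed order-of-magnitude estimates.
        human_words_by_20 = 600_000_000           # rough total words a person hears/reads by ~age 20
        llm_training_tokens = 15_000_000_000_000  # ~15T tokens, a frontier-scale training corpus
        ratio = llm_training_tokens / human_words_by_20
        print(f"the LLM trains on ~{ratio:,.0f}x more text than a human ever sees")
        # -> the LLM trains on ~25,000x more text than a human ever sees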

  • Are these investors high? Or just insane?

    • Finance professor Aswath Damodaran, among others, has offered many useful insights into how AI as an investment is likely to pay out.

      One technique is, instead of trying to pick individual winners, to look at the total addressable market, then compare that market size with the capital being pumped in. On this basis, Damodaran concluded that AI investment collectively is likely to provide unsatisfactory returns. (A toy version of this arithmetic is sketched at the end of this comment.)

      Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"

      There are two parts to this. First, a staggering $1t is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of companies like Apple over their whole existence. IOW, $1t is a lot of dough. A LOT.

      Second, this whole notion that AI is such a sure thing that half the salary will be in tokens should ring alarm bells: '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'

      I recall from the dotcom fiasco that firms like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than the cold hard cash you'd normally expect.

      Very dangerous.

      As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll see whether I'm early or just plain wrong.
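
      To make the TAM technique concrete, here's a toy version of the check in Python; every input is an illustrative assumption of mine, not Damodaran's actual numbers:

        # Toy TAM-vs-invested-capital sanity check (all inputs assumed).
        invested_capital = 1_000e9   # ~$1T of AI capex, per the headline above
        tam_revenue      = 400e9     # assumed annual AI revenue at maturity
        operating_margin = 0.25      # assumed operating margin
        tax_rate         = 0.25      # assumed tax rate

        after_tax_profit = tam_revenue * operating_margin * (1 - tax_rate)
        roic = after_tax_profit / invested_capital
        print(f"implied return on invested capital: {roic:.1%}")
        # -> 7.5%, underwhelming for venture-level risk

      If the implied return on the whole pot looks bad, picking individual winners inside it means betting against the base rate.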

    • Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario.

  • This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.

    It is just cargo cult financing at this point.

2 years? 2 years ago, GPT-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years.

I guess if you build the first AI that can autonomously self-improve, then nobody can catch up anymore.

  • This is a common canard. AI already autonomously self-improves. All the training pipelines for modern frontier models are filled with AI. AI generates synthetic data, it cleans data, it judges output quality and feeds back via RL, it does hyperparameter tuning, it rewrites kernels for speed, and a thousand other things.

    But: no singularity. At least not yet.

    The flaw in this thinking seems to be the idea that AI is a singular thing: you point the model back at its own source code, sit back, and watch as it does everything at once. Right now it's more like an army of AI assistants organized by human researchers. You often need specialized models for this stuff; you can't just use GPT for everything. A minimal sketch of what one of these loops looks like is below.
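
    Here's one such loop (an LLM-as-judge feeding an RL reward) in Python; every function is a hypothetical stub, not a real API:

      # Sketch of "AI improving AI" as it happens today: one model
      # generates, a separate specialized model judges, and the score
      # becomes an RL reward. All functions are hypothetical stubs.
      def generate(model, prompt): ...             # candidate model answers
      def judge(judge_model, prompt, answer): ...  # judge scores the answer 0..1
      def rl_update(model, prompt, answer, reward): ...  # e.g. a PPO-style step

      def improvement_step(model, judge_model, prompts):
          for prompt in prompts:
              answer = generate(model, prompt)
              reward = judge(judge_model, prompt, answer)
              rl_update(model, prompt, answer, reward)
          return model

    Note the design point: the judge is a different, specialized model. Making one model both author and judge of "improvement" is exactly where the model-collapse worry in the next comment comes in.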

  • That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.

    • If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.

      5 replies →

  • But what if a second AI that can self-improve comes along?

    Then it all comes down to who has the most compute power, as self-improvement seems compute-heavy with the current approach.

  • If that happens, catching up will be meaningless; everything we know and care about will change. You don’t even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog. There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

    • > There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

      It seems pretty wild to bet the future on such an assumption. What are you even basing it on?

      1 reply →