
Comment by ngruhn

13 hours ago

> I guess if you build the first AI that can autonomously self improve, then nobody can catch up anymore.

This is a common canard. AI already autonomously self-improves. The training pipelines for modern frontier models are filled with AI: it generates synthetic data, cleans data, judges output quality and feeds the results back via RL, tunes hyperparameters, rewrites kernels for speed, and does a thousand other things.
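To make that concrete, here's a toy sketch of one of those steps: a generator model producing synthetic data and a judge model filtering it. Both model calls are stubs with made-up names (`generator_model`, `judge_model`, `build_synthetic_dataset` are all illustrative, not any lab's actual code); the point is only the shape of the pipeline.

```python
import random

random.seed(0)

def generator_model(prompt: str) -> str:
    """Stand-in for a model that writes synthetic training examples."""
    return f"synthetic answer #{random.randint(0, 999)} for: {prompt}"

def judge_model(example: str) -> float:
    """Stand-in for an LLM-as-judge quality score in [0, 1]."""
    return random.random()

def build_synthetic_dataset(prompts, samples_per_prompt=4, threshold=0.7):
    """Generate candidates, keep only those the judge scores highly.

    The survivors would feed fine-tuning or RL. Humans design the
    pipeline, but no human grades individual examples."""
    kept = []
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            candidate = generator_model(prompt)
            if judge_model(candidate) >= threshold:
                kept.append(candidate)
    return kept

print(build_synthetic_dataset(["explain RL", "sort a list"]))
```

Note that the humans sit outside this loop: they pick the prompts, the threshold, and the judge, which is exactly the "army of assistants organized by researchers" point below.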

But: no singularity. At least not yet.

The flaw in this thinking seems to be the idea that AI is a singular thing: you point the model back at its own source code, sit back, and watch as it improves everything at once. Right now it's more like an army of AI assistants organized by human researchers. You often need specialized models for this stuff; you can't just use GPT for everything.

That seems really paradoxical, and I think it would just burn up compute. The AI doesn't really have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.
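That collapse intuition is easy to demonstrate with a toy simulation: fit a distribution to data, sample from the fit, refit on those samples, repeat. With a finite sample at each generation, the estimated variance drifts toward zero and the "model" forgets the tails of the original distribution. This is only a cartoon of the model-collapse argument (a single Gaussian, not an LLM), not a claim about any real training run.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # the "real" data distribution
n = 50                 # finite training set per generation

for generation in range(501):
    # Each generation trains only on the previous model's own outputs...
    samples = rng.normal(mu, sigma, size=n)
    # ...then refits itself to those samples.
    mu, sigma = samples.mean(), samples.std()
    if generation % 100 == 0:
        print(f"gen {generation:3d}: mu={mu:+.4f} sigma={sigma:.4f}")
```

Run it and sigma shrinks generation after generation: the distribution narrows around whatever it happened to sample, with no external signal to pull it back.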

  • If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.

    • Humans ultimately judge their output by comparison and competition. When we get to the point that an AI is capable of participating in the market directly, it'll no longer make sense to proxy judgement through humans (a toy sketch of comparison-based judging follows below).

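A toy sketch of that comparison-based judging idea: no absolute quality score exists anywhere in the system, yet a ranking emerges purely from pairwise outcomes via Elo updates. Everything here is hypothetical for illustration; in particular, the `prefer` oracle is a stand-in for whatever settles the comparison in reality (users, the market, another model).

```python
import random

random.seed(0)

def prefer(qa: float, qb: float) -> bool:
    """Stand-in preference oracle: a noisy pairwise comparison of hidden
    quality. In the comment's framing this is the market, not a function."""
    return random.random() < qa / (qa + qb)

def elo_update(ra: float, rb: float, a_won: bool, k: float = 16.0):
    """Standard Elo update from a single pairwise outcome."""
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    delta = k * ((1.0 if a_won else 0.0) - expected_a)
    return ra + delta, rb - delta

# Hidden "true" quality of three competitors; ratings all start equal,
# so any ranking that appears comes entirely from the comparisons.
quality = {"model_a": 1.0, "model_b": 2.0, "model_c": 4.0}
rating = {name: 1000.0 for name in quality}

for _ in range(2000):
    a, b = random.sample(sorted(quality), 2)
    rating[a], rating[b] = elo_update(rating[a], rating[b],
                                      prefer(quality[a], quality[b]))

for name, r in sorted(rating.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r:4.0f}")
```

The ratings converge to the true quality ordering without any judge ever assigning an absolute score, which is the sense in which competition can replace a human grader.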

But what if a second AI that can self-improve comes up?

Then it all remains a question of who has the most compute power, since self-improvement seems compute-heavy with the current approach.

If that happens, catching up will be meaningless; everything we know and care about will change. You don’t even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will become useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog. There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

  • > There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

    It seems pretty wild to bet the future on such an assumption. What are you even basing it on?

    • Because any goal can be better achieved if you're under fewer constraints. We're building super-powerful, agentic problem-solving machines; give them literally any complex goal, and breaking out of the sandbox is a useful subtask that increases their options.