Comment by doubleunplussed
5 days ago
On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.
We have an existence proof for intelligence that can improve AI: humans.
If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.
Are people really that sceptical that AI will get to human level intelligence?
Is that an insane belief worthy of being a primary example of a community not thinking clearly?
Come on! There is a good chance AI will recursively self-improve! Those pooh-poohing this idea are the ones not thinking clearly.
Consider that even the name of the phenomenon is sloppy: "recursive self improvement" does not imply "self improvement without bounds". This is the "what if you hit diminishing returns and never get past them" objection, and AI boosters never justify that jump.
> If AI ever gets to human-level intelligence
This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very shaky. Discussion of AI is especially vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if a capability is well defined we may even be able to reason about how stable it is relative to how LLMs work.
For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more opportunity processing time, and this reduced performance. Same with adding extraneous details, sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.
> Is that an insane belief worthy of being a primary example of a community not thinking clearly?
I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.
>"recursive self improvement" does not imply "self improvement without bounds"
I was thinking that too. If you look at something like AlphaGo, it was based on human training data; then they made a successor, AlphaZero I think it was called, which learned by playing against itself and got very good, but not infinitely good, as it was still constrained by hardware. In chess the best human is about 2800 on the Elo scale and computers are about 3500. I imagine self-improving AI would be like that: smarter than humans, but not infinitely so, and constrained by hardware.
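To make that gap concrete, here's a minimal sketch of the Elo model's expected-score formula (the 2800 and 3500 figures above are approximate, from memory, and the Python below is purely illustrative):

```python
# Expected score under the Elo model: E = 1 / (1 + 10**((R_opponent - R_player) / 400)).
# The ratings below are the rough figures mentioned above, not exact values.

def elo_expected_score(rating: float, opponent_rating: float) -> float:
    """Expected score (win = 1, draw = 0.5, loss = 0) against a given opponent."""
    return 1.0 / (1.0 + 10.0 ** ((opponent_rating - rating) / 400.0))

best_human = 2800   # roughly the peak human chess rating
top_engine = 3500   # roughly where the strongest engines sit

print(elo_expected_score(best_human, top_engine))  # ~0.017: vastly stronger, but bounded
```

A 700-point gap means the human scores under 2% of the points: dramatically outclassed, yet the engine is still a finite, hardware-bound player rather than something unboundedly smart.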
Also, just as humans still play chess even though computers are better, I imagine humans will still do the usual kinds of things even if computers get smarter.
Also: individual ants might be quite dumb, but ant colonies do seem to be among the smartest entities we know of.
> "recursive self improvement" does not imply "self improvement without bounds"
Obviously not, but thinking that the bounds will happen to lie between where AI intelligence is now and human intelligence is, I think, unwarranted - as mentioned, humans are unlikely to be the peak of what's possible, since evolution did not optimise us for intelligence alone.
If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good faith effort to understand their view.
AI only needs to be somewhat smarter than humans to be very powerful, so the only arguments worth having, IMHO, are over whether recursive self-improvement will leave AI a head above humans or not. Diminishing returns will happen at some point (in the extreme due to fundamental physics, if nothing sooner), but whether they set in soon enough to prevent AI from becoming meaningfully more powerful than humans is the relevant question.
> we do not have a definition of intelligence
This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like: there is such a thing as human-level performance, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being made worse by various factors unless you acknowledge that such performance measures something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable to be improved. I think you should retract this point, as it prevents you from making your own arguments.
> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.
I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not the definition. Do you not believe there are tasks on which performance can be objectively graded, beyond similarity to humans? I think it's quite easy to define tasks whose success we can judge without being able to do them ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved every issue with the definition of intelligence to be impressed.
Anyway, is there really no evidence? AI having improved so far is not any evidence that it might continue, even a little bit? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?
I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends and treat them as literally zero evidence, then that is simply faulty reasoning. Past trends are not a guarantee of future trends, but neither are they zero evidence. They are a nonzero, moderate amount of evidence, the strength of which depends on how long the trends have been going on and how well we understand the fundamentals driving them.
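To spell out what "nonzero, moderate amount of evidence" means, here's a minimal odds-form Bayesian update; the prior and likelihood ratio are made-up placeholders, not estimates anyone in this thread has committed to:

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# Every number here is an illustrative placeholder.

def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Posterior probability after observing one piece of evidence."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.2             # hypothetical prior that progress continues
trend_evidence = 3.0    # "medium" evidence: the observed trend is 3x likelier if progress continues
print(update(prior, trend_evidence))  # ~0.43: a real shift, but nowhere near certainty
```

The point is only that a sustained trend moves the needle somewhat; how far it moves depends on how strong you judge the likelihood ratio to be.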
> thinking clearly is about the reasoning, not the conclusion.
And I think we have good arguments! You seem to have strong priors that the default is that machines can't reach human intelligence/performance or beyond, and that you really need convincing otherwise. I think the fact that we have an existence proof in humans of human intelligence, and an algorithm to get there, proves it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metric that is possible, given it's not what we were optimised for specifically.
All your arguments about why progress might slow or stop short of superhuman levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far, even though the same arguments would have been equally valid at any time in the past few years.
> no legitimate argument has been presented that implies the conclusion
I mean, it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and at the proof nature gives us (in the form of the human brain) that it's not impossible in any fundamental way, I think that's reasonable. Reasonability arguments and extrapolations are all we have; we can't imply anything definitively.
You think what probability?
Interested in a bet?
> We have an existence proof for intelligence that can improve AI: humans.
I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.
We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.
Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.
----
And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.
I just mean that the existence of the human brain is proof that human-level intelligence is possible.
Yes, it took billions of years all said and done, but it shows that there are no fundamental limits that prevent this level of intelligence. It even proves it can in principle be done with a few tens of watts and a certain approximate amount of computational power.
Some used to think the first AIs would be brain uploads, for this reason. They thought we'd have the computing power and scanning techniques to scan and simulate all the neurons of a human brain before inventing any other architecture capable of coming close to the same level of intelligence. That now looks to be less likely.
Current state-of-the-art AI still operates with less computational power than the human brain, and it is far less efficient at learning than humans are (there is a sense in which a human intelligence takes merely years to develop, i.e. a childhood, rather than billions of years; that is also a relevant comparison to make). Humans can learn from far fewer examples than current AI can.
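As a back-of-the-envelope illustration of that gap (the brain figures are widely cited but heavily contested order-of-magnitude estimates, and the GPU numbers are rough public specs, so treat everything here as an assumption):

```python
# Very rough orders of magnitude; every number is an assumption, not a measurement.
brain_power_watts = 20       # commonly cited power draw of the human brain
brain_ops_per_sec = 1e15     # one popular (and heavily contested) estimate of brain "compute"

gpu_power_watts = 700        # roughly a current datacenter GPU
gpu_flops_per_sec = 1e15     # roughly its low-precision throughput

print(brain_ops_per_sec / brain_power_watts)   # ~5e13 ops per joule
print(gpu_flops_per_sec / gpu_power_watts)     # ~1.4e12 FLOP per joule
```

On these (very shaky) numbers the brain gets roughly an order of magnitude or two more useful work per watt, which is all the "few tens of watts" point above needs.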
So we've got some catching up to do - but humans prove it's possible.
Culture is certainly one aspect of recursive self-improvement.
Somewhat akin to 'software' if you will.