
Comment by doubleunplussed

3 days ago

> "recursive self improvement" does not imply "self improvement without bounds"

Obviously not, but thinking that the bounds happen to lie between where AI intelligence is now and human intelligence strikes me as unwarranted - as mentioned, humans are unlikely to be the peak of what's possible, since evolution did not optimise us for intelligence alone.

If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good-faith effort to understand their view.

AI only needs to be somewhat smarter than humans to be very powerful; IMHO the only arguments worth having are over whether recursive self-improvement will carry AI meaningfully above human level or not. Diminishing returns will set in at some point (in the extreme, due to fundamental physics, if nothing sooner), but whether they set in soon enough to prevent AI from becoming meaningfully more powerful than humans is the relevant question.

> we do not have a definition of intelligence

This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like: there is such a thing as human-level performance on it, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being degraded by various factors unless you acknowledge that such performance measures something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable to be improved. I think you should retract this point, as it prevents you from making your own arguments.

> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.

I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not the criterion. Do you not believe there are tasks on which performance can be objectively graded, beyond similarity to humans? I think it's quite easy to define tasks whose success we can judge without being able to accomplish them ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved every issue with the definition of intelligence to be impressed.

Anyway, is there really no evidence? AI having improved so far is not any evidence that it might continue, even a little bit? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?

I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends, and treat them as literally zero evidence, that is simply faulty reasoning. Past trends are not a guarantee of future trends, but neither are they zero evidence. They are a moderate amount of evidence, whose strength depends on how long the trend has been running and how well we understand the fundamentals driving it.
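To put rough numbers on that intuition - and to be clear, every figure below (the prior, both likelihoods) is an illustrative assumption I've made up, not a claim about AI specifically - a sustained trend counts as evidence exactly to the extent that "progress continues" predicts the observation better than "progress stalls" does:

```python
# Toy Bayesian update: how much should an observed multi-year trend
# shift your credence that progress will continue?
# All numbers are illustrative assumptions, not measurements.

def posterior(prior, p_trend_if_continues, p_trend_if_stalls):
    """Update P(progress continues) after observing the trend so far."""
    numerator = prior * p_trend_if_continues
    denominator = numerator + (1 - prior) * p_trend_if_stalls
    return numerator / denominator

prior = 0.5                 # agnostic starting credence
p_trend_if_continues = 0.9  # a sustained trend is very likely if the drivers are real
p_trend_if_stalls = 0.4     # but could also have happened by luck before a plateau

print(posterior(prior, p_trend_if_continues, p_trend_if_stalls))
# ~0.69: the trend is real but modest evidence - far from proof, far from zero
```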

> thinking clearly is about the reasoning, not the conclusion.

And I think we have good arguments! You seem to have strong priors that by default machines can't reach human intelligence/performance or beyond, and you really need convincing otherwise. I think the fact that we have an existence proof of human intelligence in humans, along with an algorithm (evolution) that got there, shows it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metric that is possible, given it's not what we were optimised for specifically.

All your arguments about why progress might slow or stop short of superhuman levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far, even though the same arguments would have been equally valid at any point in the past few years.

> no legitimate argument has been presented that implies the conclusion

I mean, it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and at nature's proof (in the form of the human brain) that it's not impossible in any fundamental way, I think that's reasonable. Reasonability arguments and extrapolations are all we have; nothing can be implied definitively.

You think what probability?

Interested in a bet?
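
For concreteness, here's a rough sketch of how such a bet could be priced. The sceptic's 50% credence, the 70% implied odds, and the dollar amounts are all hypothetical placeholders, not an actual offer - the point is just that two people with different credences can both expect to profit by their own lights:

```python
# Toy illustration of why differing credences make a bet mutually agreeable.
# All numbers (credences, stakes, pricing) are hypothetical assumptions.

def expected_profit(p_win, amount_won, amount_risked):
    """Expected profit for one side of a simple yes/no bet."""
    return p_win * amount_won - (1 - p_win) * amount_risked

# Price the bet at an implied 70% chance of AGI by 2040:
# the "yes" side risks $70 to win $30; the "no" side risks $30 to win $70.
print(expected_profit(0.85, 30, 70))  # my EV at 85% credence:        +$15.00
print(expected_profit(0.50, 70, 30))  # a 50%-credence sceptic's EV:  +$20.00
# Both sides expect a profit given their own probabilities, so both should accept.
```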