Comment by tim333

5 days ago

I don't think it's vibes; it's my thinking about the problem.

If you look at the "legitimate concerns," none are really deal breakers:

>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

I'm willing to believe it will be slow, though maybe it won't.

>LLMs already seem to have hit a wall of diminishing returns

Who cares? There will be other algorithms.

>What if there are several paths to different kinds of intelligence with their own local maxima

Well, maybe, maybe not.

>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

Well, you can make another one if the first one does that.

Those are all potential difficulties with self-improvement, not reasons it will never happen. I'm happy to say it's not happening right now, but do you have any solid arguments that it won't happen in the next century?

To me, the arguments against it sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.