Comment by Vegenoid

3 days ago

> If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?

Humans have a lot more going on than just an intelligent brain. The two big ones are: bodies, with which to richly interact with reality, and emotions/desire, which drive our choices. The one that I don't think gets enough attention in this discussion is the body. The body is critical to our ability to interact with the environment, and therefore to learn about it. How does an AI do this without a body? We don't have any kind of machine that comes close to the level of control, feedback, and adaptability that a human body offers. That seems very far away. I don't think that an AI can just "improve itself" without being able to interact with the world in many ways and experiment. How does it find new ideas? How does it test its ideas? How does it test its abilities? It needs an extremely rich interface with the physical world; that external feedback is necessary for improvement. That requirement would put the prospect of a recursively self-improving AI much further into the future than many rationalists believe.

And of course, the "singularity" scenario does not rest on "recursive self-improvement" alone; it assumes exponential recursive self-improvement all the way to superintelligence. This is highly speculative. It's just as possible that the curve is logarithmic, sinusoidal, or linear. The case for fully exponential self-improvement rests on the curve of some metric that hasn't existed for very long, and that does not seem solid enough to justify a strong belief. It is just as easy to imagine that intelligence gains get harder and harder as intelligence increases. We see many things that are exponential for a time, and then they aren't anymore; basing big decisions on "this curve will be exponential all the way" because we're seeing exponential progress now, at the very early stages, does not seem sound.
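To make the curve-extrapolation worry concrete, here's a minimal sketch (illustrative Python, all constants made up) showing that an exponential curve and a logistic curve that eventually saturates can be nearly indistinguishable in their early stages:

```python
import numpy as np

# Illustrative sketch, not data: growth_rate and capacity are assumptions.
# An exponential curve and a logistic curve (which saturates) look nearly
# identical early on, which is exactly when we'd be extrapolating from them.
t = np.linspace(0, 20, 21)

growth_rate = 0.5
exponential = np.exp(growth_rate * t)  # unbounded growth

capacity = 1000.0  # assumed ceiling for the logistic curve
logistic = capacity / (1 + (capacity - 1) * np.exp(-growth_rate * t))

for ti, e, s in zip(t, exponential, logistic):
    print(f"t={ti:4.1f}  exponential={e:10.1f}  logistic={s:8.1f}")
# Up to about t=5 the two columns are almost identical; by t=20 the
# exponential is ~22,000 while the logistic has flattened near 1,000.
```

Looking only at the early points gives you no way to tell which regime you're in, which is the whole problem with betting on "exponential all the way."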

Humans have human-level intelligence, but we are very far from understanding our own brain well enough to modify it to increase our capacity for intelligence (to any degree significant enough to be comparable to recursive self-improvement). We have to improve the intelligence of humanity the hard way: spend time in the world, see what works, and have the smart humans make more smart humans (as do the dumb humans, which often slows the progress of the smart humans). The time spent in the world, observing and interacting with it, is crucial to this process. I don't doubt that machines could do this process faster than humans, but I don't think it's at all clear that they could do it, say, 10,000x faster. A design needs time in the world to see how it fares. You don't get to escape this until you have a perfect simulation of reality, which, if it is possible at all, is likely not possible until the AI is already superintelligent.

Presumably a superintelligent AI has a complete understanding of biology. How does it get that without spending time observing the results of biological experiments and iterating on them? Extrapolate that to the many other complex phenomena that exist in the physical world. This is one of the reasons our understanding of computers has advanced so much faster than our understanding of many physical sciences: to understand a complex system that we didn't create and don't have a perfect model of, we must do lots of physical experiments, and those experiments take time.

The crucial assumption the AI singularity scenario relies on is that once intelligence hits a certain threshold, it can gaze at itself and self-improve to the top very quickly. I think this is fundamentally flawed, because we exist in a physical reality that underlies everything and defines what intelligence is. Interaction and experimentation with reality are necessary for the feedback loop of increasing intelligence, and I think this both severely limits how short that feedback loop can be and raises the bar for an entity that can recursively self-improve, as it needs a physical embodiment far more complex and autonomous than any robot we've managed to build.