
Comment by sebastiennight

17 days ago

> Recursive self-improvement doesn't get around this problem. Where does it get the data for next iteration? From interactions with humans.

It wasn't true for AlphaGo, and I see no reason it should be true for a system grounded in math. It makes sense that a talented mathematician who is literally made of math could build a slightly better mathematician, and so on.

AlphaGo was able to recursively self-improve within the domain of the game of Go, which has an astonishingly small set of rules.

We're asking AIs to have data that covers the real physical world, plus pretty much all of human society and culture. Doing self-improvement on that without external input is a fundamentally different proposition than doing it for Go.

  • That is a valid argument. I do think that

    > the real physical world, plus pretty much all of human society and culture

    is only a tiny part of the problem (more data, plus understanding more rules), and that the main problem is "getting smarter".

    You can get smarter without learning more about the world or human society and culture. I mean, that's allegedly how Blaise Pascal worked out a lot of mathematics in his teenage years.

    My point is that the "getting smarter" part (not book-smart, which is your physical-world data, nor street-smart, which is your human-culture data, but better-at-processing-and-solving-problems smart) is made of math. And using math to make that part better is the kind of self-improvement that does not necessarily require human input.

    • Your math point is refuted by math itself. The argument goes like this:

      1. Any AI is a computer program.

      2. Some mathematical problems are provably unsolvable by any computer program (the halting problem is undecidable, and Busy Beaver values beyond a small number of states are uncomputable).

      3. Therefore, there are limits to what AI can do with math.

      I recommend reading this lovely paper about Busy Beaver numbers by Scott Aaronson. [1]

      [1]: https://www.scottaaronson.com/papers/bb.pdf
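
      Aaronson's point can be made concrete with a toy computation. The sketch below (my own illustration, not from the paper) brute-forces the 2-state, 2-symbol Busy Beaver shift number by enumerating every such Turing machine (under the standard convention: all-zero tape, start in state A, a halting transition still writes and moves) and recording the longest finite run, recovering the known value S(2) = 6. The same enumeration becomes astronomically infeasible after only a few more states, which is exactly where uncomputability bites.

```python
from itertools import product

# All 2-state, 2-symbol Turing machines. For each (state, read-symbol)
# pair, an action is (symbol to write, head move, next state), where the
# next state may be the halting state "H".
states = ["A", "B"]
symbols = [0, 1]
moves = [-1, 1]  # left, right
actions = [(w, m, s) for w in symbols for m in moves for s in states + ["H"]]

def run(machine, limit):
    """Run `machine` on an all-zero tape from state A. Return the number
    of steps until halting, or None if it exceeds `limit` steps (treated
    here as non-halting -- safe for 2 states since S(2) = 6 << limit)."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < limit:
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps if state == "H" else None

keys = [(s, r) for s in states for r in symbols]
best = 0
for choice in product(actions, repeat=len(keys)):  # 12^4 = 20,736 machines
    steps = run(dict(zip(keys, choice)), limit=100)
    if steps is not None:
        best = max(best, steps)

print(best)  # → 6
```

      Already at 5 states the answer (S(5) = 47,176,870) took decades to pin down, and beyond that the values are provably independent of our axioms, which is the sense in which no program, AI or otherwise, can compute them all.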
