Comment by AnimalMuppet
17 days ago
AlphaGo was able to recursively self-improve within the domain of the game of go, which has an astonishingly small set of rules.
We're asking AIs to have data that covers the real physical world, plus pretty much all of human society and culture. Doing self-improvement on that without external input is a fundamentally different proposition than doing it for go.
That is a valid argument. I do think that
> the real physical world, plus pretty much all of human society and culture
is only a tiny part of the problem (that part is just more data plus more rules to understand), and the main problem is "getting smarter".
You can get smarter without learning more about the world or human society and culture. I mean, that's allegedly how Blaise Pascal worked out a lot of mathematics in his teenage years.
My point is that the "getting smarter" part (not book-smart which is your physical world data, not street-smart which is your human culture data, but better-at-processing-and-solving-problems smart) is made of math. And using math to make that part better is the self-improvement that does not necessarily require human input.
Your math point is proven wrong, with math. The argument goes like this:
1. AI is a computer program.
2. Some math is not solvable with any computer program.
3. Therefore, there are limits to what AI can do with math.
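Premise #2 can be made precise with the Busy Beaver function itself; here is a sketch of the standard result (the halting-problem reduction, as in Aaronson's survey), not a new claim:

```latex
% BB(n): the maximum number of steps any halting n-state, 2-symbol
% Turing machine takes when started on a blank tape.
% Claim: no computable function dominates BB.
\[
\nexists \text{ computable } f : \mathbb{N} \to \mathbb{N}
\ \text{ with } \ f(n) \ge BB(n) \ \text{ for all } n .
\]
% Sketch: if such an f existed, we could decide halting. Given an
% n-state machine M on a blank tape, run M for f(n) steps:
\[
M \text{ halts} \iff M \text{ halts within } BB(n) \le f(n) \text{ steps},
\]
% so non-halting by step f(n) means never halting, contradicting the
% undecidability of the halting problem.
```

So the limit in item #3 is real; the dispute below is about whether it is relevant.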
I recommend reading this lovely paper about Busy Beaver numbers by Scott Aaronson. [1]
[1]: https://www.scottaaronson.com/papers/bb.pdf
I think you're strawmanning my math point from "if you're made of math and can make a trivial improvement in the math, you get a smarter n+1 program that can likely make another trivial improvement to n+2"... to "AI can solve all math" (which is not my point at all).
You seem to be generalizing item #3 from "there are limits to what AI can do with math", to "therefore, AI can't improve any math, and definitely not the very specific kind of math that is relevant to improving AI". That is a huge unjustified logical jump.
Has it ever happened, on the path from Enigma to Claude Opus 4.6, that the necessary next step was to compute a new Busy Beaver number? Is Opus 4.6 a better Busy Beaver solver than Sonnet 3.5?
Or is that a mostly unrelated piece of math that is mostly irrelevant to making a "smarter" AI program from where we are today?
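The n → n+1 claim upthread can be sketched as a toy loop. Everything here is hypothetical: the "program" is reduced to a single score, and `improve` stands in for the (entirely unproven) ability to find a trivial improvement each generation; the sketch only shows how small gains would compound if such a step existed:

```python
# Toy sketch of the "n -> n+1" self-improvement loop from the thread.
# Assumption being illustrated: each generation can find a small fixed
# relative improvement. Whether that assumption holds is the whole debate.

def improve(score: float) -> float:
    """One 'trivial' self-improvement step: a hypothetical 1% gain."""
    return score * 1.01

def self_improve(score: float, generations: int) -> float:
    """Apply the trivial step repeatedly: program n yields program n+1."""
    for _ in range(generations):
        score = improve(score)
    return score

# Small per-step gains compound: 1% per generation roughly doubles
# the score after ~70 generations (rule of 72).
final = self_improve(1.0, 70)
assert final > 2.0
```

The point of the toy is only that the compounding is arithmetic, not magic; the contested part is whether an `improve` step exists at all for real AI systems.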