Comment by gtech1
7 days ago
This may be a silly question, I'm no expert. But why not simply define as AGI any system that can answer a question that no human can? So for example, ask the AGI to figure out, from current knowledge, how to reconcile gravity and QED.
Computers can already do a lot of things that no human can, though. They can reliably find better chess or Go moves than any human.
It's conceivable (though not likely) that given enough training in symbolic mathematics and some experimental data, an LLM-style AI could figure out a neat reconciliation of the two theories. I wouldn't say that makes it AGI though. You could achieve that unification with an AI that was limited to mathematics, rather than something that can function in many domains like a human can.
Wouldn't this unification need to be backed by empirical data? Let's say the AI discovers the two theories can be unified using, say, some configuration of 8 spatial dimensions and 2 time dimensions. Neat trick, but how do we know the world actually has those dimensions?
Do we even have any other theory that does that already? It seems that even finding one would be a great achievement.
That would be ASI I think.
But consider: technically AlphaTensor found new algorithms that no human had found before (https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...). So isn't it AGI by your definition of answering a question no human could answer before: how to do 4x4 matrix multiplication in 47 steps?
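For scale, here's a quick sketch of the multiplication counts being compared. The naive algorithm uses n^3 scalar multiplications, Strassen's recursion uses 7 per level instead of 8, and AlphaTensor's reported 47-multiplication algorithm (which, per the DeepMind paper, applies in mod-2 arithmetic) beats both for 4x4:

```python
# Scalar-multiplication counts for n x n matrix multiply (n a power of two).

def naive_mults(n):
    # Schoolbook algorithm: one multiplication per (i, j, k) triple.
    return n ** 3

def strassen_mults(n):
    # Strassen replaces 8 block multiplications with 7 at each
    # recursion level, giving 7^(log2 n) total.
    return 7 ** (n.bit_length() - 1)

print(naive_mults(4))     # 64
print(strassen_mults(4))  # 49
# AlphaTensor's algorithm: 47 (in mod-2 arithmetic)
```

So the improvement over Strassen is two multiplications, and only under a restricted arithmetic; a narrow but genuinely novel result.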
Aside from the other objections already mentioned, your example would require feasible experiments for verification, and the process of finding a successful theory of quantum gravity would likely require a back-and-forth between experimenters and theorists.
"What is the meaning of life, the universe, and everything?"
42