Comment by Buttons840
6 months ago
I'll offer a definition of AGI:
An AI (a computer program) that is better at [almost] every intellectual task than 5% of the human specialists in that task's field has achieved AGI.
Or, stated another way: if 5% of humans cannot perform any intellectual job better than a given AI can, then that AI has achieved AGI.
Note that I am not saying an AI that is better than humans at one particular thing has achieved AGI, because it is not "general". I'm saying that if a single AI is better at all intellectual tasks than some humans are, the AI has achieved AGI.
The 5th percentile of humans deserves the label "intelligent", even if they are not the most intelligent (I'd say all humans deserve that label), and if an AI can perform all intellectual tasks better than such a person, the AI has achieved AGI.
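To make the test concrete, here's a rough sketch in Python. Everything in it is a stand-in: ai_score and the per-task specialist scores are hypothetical placeholders, not a real benchmark.

    from typing import Callable

    def passes_agi_bar(
        ai_score: Callable[[str], float],      # hypothetical: the AI's score on a named task
        human_scores: dict[str, list[float]],  # task -> scores of human specialists in that field
        percentile: float = 0.05,              # fraction of specialists the AI must beat
    ) -> bool:
        """True if the AI outscores at least `percentile` of the specialists
        on every task; failing any single task disqualifies it."""
        for task, scores in human_scores.items():
            beaten = sum(1 for s in scores if ai_score(task) > s)
            if beaten / len(scores) < percentile:
                return False  # a task where the AI can't clear the bar
        return True

    # Toy usage: two tasks, five specialists each, a flat AI score of 0.5.
    scores = {"translate": [0.3, 0.6, 0.7, 0.8, 0.9],
              "diagnose":  [0.4, 0.5, 0.6, 0.7, 0.8]}
    print(passes_agi_bar(lambda task: 0.5, scores))  # True

The strict all-tasks loop is what carries the "general" part: one task below the bar and the candidate fails.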
I think your definition is flawed.
Take the Artificial out of AGI. What is GI, and do the majority of humans have it? If so, then why is your definition of AGI far stricter than the definition of Human GI?
My definition is a high bar that is undeniably AGI. My personal opinion is that there are some lower bars that are also AGI; I actually think it's fair to call LLMs from GPT-3 onward AGI.
But when it comes to the lower bars, we can spend a lot of time arguing over the definition of a single term, which isn't especially helpful.
Okay, but then it's not so much a definition. It's more like a test.
I like where this is going.
However, it's not sufficient. The actual tasks have to be written down, tests constructed, and the specialists tested.
A subset of this has been done with some rigor, and AI/computers have surpassed this threshold on some tests. Some have then responded by saying that it isn't AGI, that the tasks don't sufficiently measure "intelligence" or some other word, and that more tests are warranted.
You're saying we need to write down all intellectual tasks? How would that help?
If an AI is better at some tasks (the ones that happen to be written down), that doesn't mean it is better at all tasks.
Actually, I'd lower my threshold even further. I originally said 50%, then 20%, then 5%, but now I'll say that if an AI is better than 0.1% of people at all intellectual tasks, then it is AGI, because it is "general" (able to do all intellectual tasks) and it is "intelligent" (a label we ascribe to all humans).
But the AGI has to be better at all (not just some) intellectual tasks.
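(In the sketch above, that lower bar is just percentile=0.001; the all-tasks loop stays the same.)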
> An AI (a computer program) that is better at [almost] every intellectual task than 5% of the human specialists in that task's field has achieved AGI.
Let's say you have a candidate AI and assert that it indeed has passed the above benchmark. How do you prove that? Don't you have to say which tasks?
I think any task-based assessment of intelligence is missing the mark. Highly intelligent people are not considered smart just because they can accomplish tasks.
I don't understand; you'll have to give an example.
What is the most non-task-like thing that highly intelligent people do as a sign of their intelligence?
Smart people originate their own work.
Einstein in the early 1900s was employed to evaluate patents, a job that undoubtedly came with a list of tasks to accomplish. He was good at it. But he also gave himself the work that resulted in his famous papers.
Or consider an intern and Elon Musk, given the task of multiplying a series of 6-digit numbers by 11-digit numbers. The intern will grab a calculator or spreadsheet and finish quickly and accurately. Elon Musk will say “this is a fucking waste of my time” and go do something way more valuable. Which is smarter?