Comment by Closi
10 hours ago
Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.
Surely the 'General Intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein-folding problems. My definition of general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are far better at learning how to run and catch a ball, and worse at programming in Python).
I do consider dogs to have "general intelligence"; despite that, I have always (my entire life) considered AGI to imply human-level intelligence. Not better, not worse, just human level.
It gets worse, though. While one could claim that scoring equivalently on some benchmark indicates performance at the same level - and I'd likely agree - that's not what I take AGI to mean. Rather, I take it to mean "equivalent to a human", so if it utterly fails at something we're good at, such as driving a car through a construction zone during rush hour, then I don't consider it to have met the bar of AGI, even if it meets or exceeds us at other unrelated tasks. You have to be at least as general as a stock human to qualify as AGI in my books.
Now I may be but a single data point, but I think there are a lot of people out there who feel similarly. You can see this a lot in popular culture, with AGI (or often just AI) being used to refer to autonomous humanoid robots portrayed as operating at or above a human level.
Related to all that, since you mention protein folding: I consider that to be a form of superintelligence, as it is more or less inconceivable that an unaided human would ever be able to accomplish such a feat. So I consider AlphaFold to be both superintelligent and decidedly _not_ AGI. Make of that what you will.
Pop culture has spent its entire existence conflating AGI and ‘Physical AI’, so much so that the collective realization that they’re entirely different is a relatively recent thing. Both of them were so far off in the future that the distinction wasn’t worth considering, until suddenly one of them is kinda maybe sorta roughly here now…ish.
Artificial General Intelligence says nothing about physical ability, but movies with the ‘intelligence’ part typically match it with equally futuristic biomechanics to make the movie more interesting. AGI = Skynet, Physical AI = Terminator. The latter will likely be the harder part, not only because it requires the former first, but because you can’t just throw more watts at a stepper motor and get a ballet dancer.
That said, I’m confident that if I could throw zero-noise, precise “human sensory”-level sensor data at any of the top LLM models, and their output was equally coupled to a human arm with the same sensory feedback, it would definitely outdo any current self-driving car implementation. The physical connection is the issue, and will be for a long time.
Agreed about the conflation. But that drives home that there isn't some historic, commonly and widely accepted definition of AGI whose goalposts are being moved. What there was doesn't match the new developments, and was also often quite flawed to begin with.
> LLM models, ... outdo any current self-driving car
How would an LLM handle computer vision? Are you implicitly including a second embedding model there? Even then, I think that's still the wrong sort of vision data for precise control, at least in general.
How do you propose to handle the model hallucinating? What about losing its train of thought?
I think your definition of it being 'human level' is sensible - definitely a lower bar to hit than 'as long as people can do work that a robot cannot do, we don't have AGI'.
There is certainly a lot of road between current technology and driving a car through a construction zone during rush hour, particularly with the same amount of driving practice a human gets.
Personally I think there could be an AGI which couldn't drive a car but has genuine sentience - an awareness of being alive, although not necessarily the exact human experience. Maybe this isn't AGI, which implies problem-solving and thinking more than sentience, but my gut says that if we got something sentient that couldn't drive a car, we would still have arrived, if that makes sense?
In theory I see what you're saying. There are physical things an octopus could conceivably do that I never could, on account of our physiology rather than our intelligence. So you can contrive an analogous scenario involving only the mind, where something that is clearly an AGI is incapable of some specific task and thus falls short of my definition. This makes it clear that my definition is a heuristic rather than a rigorous one.
Nonetheless, it's difficult to imagine a scenario where something that is genuinely human level can't adapt in the field to a novel task such as driving a car. That sort of broad adaptability is exactly what the "general" in AGI is attempting to capture (imo).