Comment by roenxi
7 days ago
By both definitions of intelligence in the presentation we should be saying "how we got to AGI" in the past tense. We're already there. AIs can deal with situations they weren't prepared for, in any sense that a human can. They might not do well, but they'll have a crack at it. We can trivially build systems that collect data and do a bit more offline training if that is what someone wants to see, but there doesn't really seem to be a commercial need for that right now. Similarly, AIs can whip most humans at most domains that require intelligence.
I think the debate has been flat-footed by the speed at which all this happened. We're not talking about AGI any more, we're talking about how to build superintelligences hitherto unseen in nature.
According to this presentation at least, ARC-AGI-2 shows that there is a big meaningful gap in fluid intelligence between normal non-genius humans and the best models currently, which seems to indicate we are not "already there".
If there is a gap in intelligence between two humans, does that mean to you that one of them is necessarily not a general intelligence? The current crop of AIs get some of the questions right by reasoning through them. That means they are already intelligent in the way ARC-AGI-2 measures intelligence. They just aren't very capable ones.
If AIs at least equal humans in all intellectual fields, then they are superintelligences, because there are already fields where they dominate humans so outrageously there isn't a competition (nearly all fields, these days). Before they become superintelligences there is a phase where they are just AGIs, and we've been in that phase for a while now. Artificial superintelligence is very exciting, but artificial non-super intelligence, or AGI, is here with us in the present.
You can define AGI however you want, I suppose, but I would consider it achieved when AI can reach at least roughly median human performance on all cognitive tasks. Obviously computers are useful well before this point, but it is a clearly meaningful line in the sand, useful enough to merit a dedicated name like "AGI". Constructed tasks like ARC-AGI simply quantify what everyone can already see, which is that current models cannot be used as a drop-in replacement for humans in most cases.
To me, superintelligence means specifically either dominating us in our highest intellectual accomplishments, i.e. math, science, philosophy, or literally dominating us by subordinating or eliminating humans. Neither of these things has happened at all.
There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?
I enjoy seeing people repeatedly move the goalposts for "intelligence" as AIs simply get smarter and smarter every week. Soon AI will have to beat Einstein in Physics, Usain Bolt in running, and Steve Jobs in marketing to be considered AGI...
> There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?
Where did I say there was nothing meaningful about current capabilities? I'm saying that what is novel about a claim of "AGI" (as opposed to a claim of "computer does something better than humans", which has been an obviously true statement since the ENIAC) is the ability to do, at some level, everything a normal human intelligence can do.
Well, there is also robotics, active inference, online learning, etc. Things animals can do well.
Current robots perform very badly on my patented and highly scientific ROACH-AGI benchmark - "is this thing smarter at navigating unfamiliar 3D spaces than a cockroach?"