Comment by tmvphil

7 days ago

According to this presentation at least, ARC-AGI-2 shows that there is a big meaningful gap in fluid intelligence between normal non-genius humans and the best models currently, which seems to indicate we are not "already there".

If there is a gap in intelligence between two humans, does that mean to you that one of them is necessarily not a general intelligence? The current crop of AIs get some of the questions right by reasoning through them. That means they are already intelligent in the way ARC-AGI-2 measures intelligence. They just aren't very capable ones.

If AIs at least equal humans in all intellectual fields, then they are superintelligences, because there are already fields where they dominate humans so outrageously there isn't a competition (nearly all fields, these days). Before they are superintelligences there is a phase where they are just AGIs, and we've been in that phase for a while now. Artificial superintelligence is very exciting, but artificial non-super intelligence, or AGI, is here with us in the present.

  • You can define AGI however you want, I suppose, but I would consider it achieved when AI can reach at least about median human performance on all cognitive tasks. Obviously computers are useful well before this point, but it is clearly a meaningful line in the sand, useful enough to merit having a dedicated name like "AGI". Constructed tasks like ARC-AGI simply quantify what everyone can already see, which is that current models cannot be used as a drop-in replacement for humans in most cases.

    To me, superintelligence means specifically either dominating us in our highest intellectual accomplishments, i.e. math, science, philosophy, or literally dominating us via subordinating or eliminating humans. Neither of these things has happened at all.

    • > but I would consider it achieved when AI can achieve at least about median human performance on all cognitive tasks

      What do you consider below-median humans? Are they meat-zombies? General intelligence is at least somewhere near the minimum of human performance, and it wouldn't surprise me if people performing at that level can't do the ARC-AGI test either.

There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?

I enjoy seeing people repeatedly move the goalposts for "intelligence" as AIs simply get smarter and smarter every week. Soon AI will have to beat Einstein in Physics, Usain Bolt in running, and Steve Jobs in marketing to be considered AGI...

  • > There's already a big meaningful gap between the things AIs can do which humans can't, so why do you only count as "meaningful" the things humans can do which AIs can't?

    Where did I say there was nothing meaningful about current capabilities? I'm saying that what is novel about a claim of "AGI" (as opposed to a claim of "computer does something better than humans", which has been an obviously true statement since the ENIAC) is the ability to do, at some level, everything a normal human intelligence can do.