Comment by daveguy
7 days ago
It is not a judge of whether we got to AGI. And literally no one except straw-manning critics is trying to claim it is. The point is, an AGI should easily be able to pass it. But it can obviously be passed without getting to AGI. It's a necessary but not sufficient criterion. If something can't pass a test as simple as ARC (which no AI currently can), then it's definitely not AGI. Anyone claiming AGI should be able to point their AI at the problem and get an 80+% solution rate. Current attempts on the second ARC score less than 10%, and zero-shot attempts do even worse. Even the better-performing LLMs on the first ARC couldn't do well without significant pre-training. In short, the G in AGI stands for general.
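(For concreteness, "point their AI at the problem" could look roughly like the sketch below. It assumes the public ARC JSON task format, where each file has "train" and "test" lists of input/output grids of small integers; `solve` here is just a hypothetical placeholder solver, not any real system.)

```python
import json
from pathlib import Path

# Minimal sketch of scoring a solver on ARC-style tasks.
# Assumes the public ARC JSON format: each file has "train" and "test"
# lists of {"input": grid, "output": grid}, where a grid is a 2D list
# of ints 0-9. `solve` is a hypothetical placeholder.

def solve(train_pairs, test_input):
    # Placeholder: just echoes the input grid (would score ~0 on real tasks).
    return test_input

def solution_rate(task_dir):
    tasks = sorted(Path(task_dir).glob("*.json"))
    solved = 0
    for path in tasks:
        task = json.loads(path.read_text())
        predictions = [
            solve(task["train"], pair["input"]) for pair in task["test"]
        ]
        # A task counts as solved only if every test output matches exactly.
        if all(pred == pair["output"]
               for pred, pair in zip(predictions, task["test"])):
            solved += 1
    return solved / len(tasks) if tasks else 0.0

# Example: print(f"{solution_rate('ARC/data/evaluation'):.1%}")
```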
So do you agree that a human that CANNOT solve ARC doesn't have general intelligence?
If we think humans have "GI", then I think we have AIs right now with "GI" too. Just like humans, AIs spike in various directions: they are amazing at some things and weak at visual/IQ-test-style problems like ARC.
It's a good question, but only complicated answers are possible. A puppy, a crow, and a raccoon all have intelligence but certainly can't pass the ARC challenge.
I think the charitable interpretation is that intelligence is made up of many skills, and AIs are superhuman at some of them, like image recognition, while significantly less skilled at others.
And that therefore future efforts need to focus on the areas where AIs are weakest. Also, since they are good at memorizing things, knowledge questions are the wrong direction; anything most humans can solve but AIs cannot, especially something as generic as pattern matching, should be an important target.