
Comment by LZ_Khan

21 hours ago

The thing is, this is more akin to testing a blind person's performance on a driving test than testing their intelligence.

I would imagine that if you simply encoded the game in a textual format and asked an LLM to come up with a series of moves, it would beat humans.

The problem here is more around perception than anything.

I had the same theory back when ARC-AGI-2 came out, and surprisingly, encoding it into text didn't help much. LLMs just have a huge blind spot around spatial reasoning, on top of being bad at vision. The sorts of logic and transformations involved here just don't show up much in the training data (yet).
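For what it's worth, the text-encoding experiment is easy to try. A rough sketch of the idea, assuming an ARC-style grid of color indices (the grid, prompt wording, and helper function are all illustrative, not from any actual ARC harness):

```python
# Hypothetical sketch: serialize a small ARC-style grid (integers = colors)
# into a plain-text prompt an LLM could read. All values here are made up.

def grid_to_text(grid):
    """Render a 2D list of color indices as rows of space-separated digits."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

example_input = [
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
]

prompt = (
    "Each digit is a color. Describe the transformation rule.\n"
    "Input grid:\n" + grid_to_text(example_input)
)
print(prompt)
```

Even with a clean encoding like this, the model still has to infer the 2D layout from a 1D token stream, which is part of why the textual format alone doesn't close the gap.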

I still agree that this is like declaring that blind people lack human intelligence, of course.