Comment by maaaaattttt

7 days ago

Given your premise (which I agree with), I think the issue in general comes from the lack of a good, broadly accepted definition of what AGI is. My initial comment originates from the fact that in my internal definition, an AGI would have a de facto understanding of the physics of "our world". Or, better, could infer them by trial and error. But, indeed, it doesn't have to be the case. (The other advantage of the Zelda games is that they introduce new abilities that don't exist in our world, and for which most children I've seen understand the mechanisms and how they could be applied to solve a problem quite naturally, even if they've never had that ability before.)

I'd say the issue is the lack of a good, broadly accepted definition of what the "I" is. We all know "smart" when we see it, but actually defining it in a rigorous way is tough.

  • This difficulty is interesting in and of itself.

    When people catalogue the deficiencies in AI systems, they often (at least implicitly) forgive all of our own such limitations. When someone points to something that an AI system clearly doesn't understand, they say that proves it isn't AGI. But if you point at any random human who fails at the very same task, you wouldn't say they lack "HGI", even if they're too personally limited to ever be taught the skill.

    All of which is to say: I don't think pointing at a limitation of an AI system really proves it lacks AGI. It's a more slippery definition than that.