Comment by layer8

14 hours ago

> any task any humans can do

That doesn’t seem accurate, even if you limit it to mental tasks. For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

Another thought: The way humans perform tasks is affected by involuntary aspects of the individual mind, in ways where the involuntariness itself is relevant (for example, being repulsed by something, or something simply not crossing one’s mind). If those aspects are involuntary for the AGI as well, then it can’t perform tasks in all the different ways that different humans would. And if they aren’t involuntary for the AGI, can it really reproduce all the ways individual humans would perform a task? To put it more concretely: for every individual, there is probably a task that they can’t perform (with a specific outcome) that another individual can. If the same is true for an AGI, then by your definition it isn’t an AGI, because it can’t perform all tasks. If, on the other hand, we assume it can perform all tasks, then it would be unlike any individual human, which raises the question of whether that is (a) possible, and (b) conceptually coherent to begin with.

The biggest issue with AGI is how poorly we've described GI up until now.

What’s more, I’d say an AI that can do any (intelligence) task any human can would be far beyond human capabilities, because even individual humans can’t do everything.

> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

Do you mind sharing the descriptive criteria for these behaviors that you are envisioning, and where they overlap with the general assumption that they could occur in a machine? Without more detail about the question, I can foresee a sort of “featherless biped” scenario here.

> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

...Yes. This is what I think 'most' people consider a real AI to be.

> For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

How would you know if it could? How do you know that other human beings can? You don’t.