Comment by madeofpalk

12 hours ago

LLMs will not give us "artificial general intelligence", whatever that means.

AGI is currently an intentionally vague and undefined goal. This allows businesses to operate towards a goal, define the parameters themselves, and revel in “rocket launch”-style hype without leaving the vague umbrella of AI. It also lets businesses claim a double pursuit: not only are they building AGI, but all their work will surely benefit AI as well. How noble, right?

Its vagueness is intentional: it lets you stay blind to the truth and fill in the gaps yourself. You just have to believe it’s right around the corner.

  • "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practices, it appears very difficult to define the necessary and sufficient properties that would make something an AGI.

    • What if the human brain were so complex that we were complex enough to understand it?

In my opinion it's probably closer to real AGI than not. I think the missing piece is learning after the pretraining phase.

An AGI will be able to do any task any human can do. Or rather, all tasks any human can do. An AGI will be able to get any college degree.

  • > any task any human can do

    That doesn’t seem accurate, even if you limit it to mental tasks. For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

    Another thought: The way humans perform tasks is affected by involuntary aspects of the respective individual mind, in a way that the involuntariness is relevant (for example being repulsed by something, or something not crossing one’s mind). If it is involuntary for the AGI as well, then it can’t perform tasks in all the different ways that different humans would. And if it isn’t involuntary for the AGI, can it really reproduce the way (all the ways) individual humans would perform a task? To put it more concretely: For every individual, there is probably a task that they can’t perform (with a specific outcome) that another individual can. If the same is true for an AGI, then by your definition it isn’t an AGI, because it can’t perform all tasks. On the other hand, if we assume it can perform all tasks, then it would be unlike any individual human, which raises the question of whether this is (a) possible, and (b) conceptually coherent to begin with.

    • The biggest issue with AGI is how poorly we've described GI up until now.

      More so, I'd say an AI that can do any (intellectual) task a human can would be far beyond human capabilities, because even individual humans can't do everything.

    • > For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

      Do you mind sharing the descriptive criteria for these behaviors that you are envisioning, and how we would recognize them occurring in a machine? I can foresee a sort of “featherless biped” scenario here without more details about the question.

    • > For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

      How would you know if it could? How do you know that other human beings can? You don’t.

    • > For example, do we expect an AGI to be able to meditate, or to mentally introspect itself like a human, or to describe its inner qualia, in order to constitute an AGI?

      ...Yes. This is what I think 'most' people consider a real AI to be.


it must be wonderful to live life with such supreme unfounded confidence. really, no sarcasm, i wonder what that is like. to be so sure of something when many smarter people are not, and when we don't know how our own intelligence fully works or evolved, and don't know if ANY lessons from our own intelligence even apply to artificial ones.

and yet, so confident. so secure. interesting.

  • Social media doesn't punish people for overconfidence. In fact, social media rewards controversial statements by giving them engagement - engagement like yours.