
Comment by AstroBen

9 months ago

Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit of a stretch to grant future ultra-advanced calculators legal personhood.

The “general” part of general intelligence. If they don’t think in those terms, there’s an inherent limitation.

Now, something that’s arbitrarily close to AGI but doesn’t mind endlessly working on drudgery seems possible, but that’s a harder problem: you’d need to be able to build AGI before you could create it.

  • Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can: generalization ability and common-sense knowledge. [1]

    If we go by this definition, then there's no caring, and no noticing of drudgery. It's simply defined by its ability to generalize problem-solving across domains. The narrow AI that we currently have certainly doesn't care about anything; it does what it's programmed to do.

    So one day we figure out how to generalize the problem-solving and enable it to work on things a million times harder... and suddenly there is sentience and suffering? I don't see it. It's still just a calculator.

    [1] https://cloud.google.com/discover/what-is-artificial-general...

    • It's really hard to picture a useful general intelligence that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack those things. They don't care whether they give you correct information or not, and you have to prompt them for everything! That's nothing close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation, and initiative, and I don't know how you then get it to effectively do tasks that it doesn't like, tasks that don't line up with whatever motivates it.

    • “ability to understand”

      Isn’t just the ability to perform a task. One of the issues with current AI training is that it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out whether someone is being deceptive, etc.

> Why does AGI necessitate having feelings or consciousness

No one knows whether it does. We don't know why we are conscious, and we have no test whatsoever to measure consciousness.

In fact, the only reason we know that current AI has no consciousness is that "obviously it's not conscious."

  • Excel and PowerPoint are not conscious, so there is no reason to expect any other computation inside a digital computer to be different.

    You may say something similar about matter and human minds, but we have a very limited and incomplete understanding of the brain, and possibly even of the universe. Furthermore, we do have a subjective experience of consciousness.

    On the other hand, we have a complete understanding of how LLM inference ultimately maps to matrix multiplications, which map to discrete instructions, and of how those execute on hardware.
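
    To make that concrete, here is a toy single-head attention layer in Python/NumPy; the shapes and weight names are made up for illustration, not taken from any real model. Every step in it is a matrix multiply or elementwise arithmetic:

      import numpy as np

      # One toy self-attention layer. Every step below is a matrix
      # multiply or elementwise arithmetic, i.e. ordinary numeric ops.
      d = 8                              # embedding dimension (illustrative)
      x = np.random.randn(4, d)          # 4 tokens, each a d-dim vector

      # Stand-ins for learned weight matrices.
      Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

      q, k, v = x @ Wq, x @ Wk, x @ Wv   # three matmuls
      scores = q @ k.T / np.sqrt(d)      # another matmul, then a scale
      weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax
      out = weights @ v                  # final matmul

      print(out.shape)                   # (4, 8): numbers in, numbers out

    Stack hundreds of layers like that and you have an LLM forward pass; at no point is there anything but arithmetic on arrays.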

    • I know I have a subjective experience of consciousness.

      I’m less sure about you. Simply claiming you do isn’t hard evidence of the fact; after all, LLMs do the same.
