Comment by Retric

9 months ago

The general part of general intelligence. If they don’t think in those terms there’s an inherent limitation.

Now, something that’s arbitrarily close to AGI but doesn’t care about endlessly working on drudgery etc. seems possible, but that’s an even harder problem: you’d need to be able to build AGI in order to create it.

Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can: generalization ability and common-sense knowledge [1]

If we go by this definition then there's no caring, or noticing of drudgery. It's simply defined by its ability to generalize problem solving across domains. The narrow AI that we currently have certainly doesn't care about anything. It does what it's programmed to do.

So one day we figure out how to generalize the problem solving, and enable it to work on problems a million times harder... and suddenly there is sentience and suffering? I don't see it. It's still just a calculator.

1- https://cloud.google.com/discover/what-is-artificial-general...

  • It's really hard to picture a useful general intelligence that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack those things. They don't care whether they give you correct information or not, and you have to prompt them for everything! That's not anything close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation and initiative, and I don't know how you then get it to effectively do tasks that it doesn't like, tasks that don't line up with whatever motivates it.

  • “ability to understand”

    Isn’t just the ability to perform a task. One of the issues with current AI training is that it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out whether someone is being deceptive, etc.

    • > Isn’t just the ability to perform a task.

      Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has.

      Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function for achieving its goal.

      Rob Miles has some really good videos on AI safety research which touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo

      5 replies →

  • Exactly. It's called artificial general intelligence, not human general intelligence.

    • Something can’t “operate this cat android, pretending to be a cat” if it can’t do what I described.

      A single general intelligence needs to be able to fly an aircraft, get a degree, run a business, and raise a baby to adulthood, just like a person, or it’s not general.

      10 replies →