
Comment by AstroBen

9 months ago

> Isn’t just the ability to perform a task.

Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has

Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal

Rob Miles has some really good videos on AI safety research that touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo

> Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal

If it’s limited to achieving goals, it’s not AGI. Real-time personal goal-setting based on human-equivalent emotions is an “intellectual task.” One of the many requirements for AGI, therefore, is to (a) understand the world in real time and (b) respond to it emotionally. In other words, AGI would by definition “necessitate having feelings.”

There are philosophical arguments that there’s something inherently unique about humans here, but without a testable definition you could make the same argument that some arbitrary group of humans lacks those qualities: “gingers have no souls.” Or perhaps “dancing people have no consciousness,” which sounds like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.

  • I mean we just fundamentally have different definitions of AGI. Mine is based on outcomes and what it can do, so it's purely goal-based, not on processes that mimic humans or animals.

    I think this is the most likely first step of what would happen, seeing as we're pushing for it to be created to solve real-world problems.

    • I’m not sure how you can argue something is a general intelligence if it can’t do those kinds of things. Imagine it comes out of the factory with the command: “Operate this android for a lifetime pretending to be human.”

      It seems like arguing something is a self-driving car if it needs a backup human driver for safety. It’s simply not what the people who originally came up with the term meant, and not what a plain-language understanding of the term would suggest.
