Comment by olalonde

21 hours ago

Something that surprises me about modern LLMs is that they're relatively smart yet lack consciousness. I used to believe that consciousness (e.g. a desire for self-preservation, intrinsic motivation) might be a necessary requirement for AGI/ASI, but it's increasingly looking like that may not be the case. If true, that's actually good news, since it makes the worst doomsday scenarios less likely.

How can you tell?

  • How can I tell what? That current LLMs are not conscious or that AGI/ASI will not require consciousness?

• How do you know they aren't conscious if we don't know what consciousness is, and have no test to see whether anyone or anything is conscious?

This may seem like a joke, but your answer will likely be in the vein of "conscious things are obviously conscious", which gets us nowhere.

I mean, self-motivation and a desire not to be turned off could be programmed into even decades-old AIs.
