
Comment by ImHereToVote

15 days ago

Doesn't ChatGPT fulfill these criteria too?

In a Chinese room sort of way, sure. The problem is that we understand too well how it works, so any semblance of consciousness or self-awareness is something we know to be simple text generation.

Again, there's no real measure for consciousness, so it's difficult to say. If you ask me, frontier models meet the definition of intelligence, but not the definition of self-awareness, so they aren't sentient regardless of whether they are conscious. This is a pretty fundamental philosophical question that's been considered for centuries, outside of the context of AI.

  • ChatGPT knows about the ChatGPT persona, much like I know the persona I play in society and at home. I don't know what the "core" me is like at all. I don't have access to it. It seems like a void. A weird eye. No character, no opinions.

    The persona, I know very well.

    • To the extent it "knows" (using that word loosely) about the persona, it's deriving that information from its system prompt. The model itself has no awareness.

      The sooner we stop anthropomorphizing AI models, the better. It's like saying a database is sentient because it has extremely precise memory and recall. I understand the appeal, but LLMs are incredibly interesting and useful tech, and treating them as sentient beings interferes with our ability to recognize their limits and thereby fully harness their capabilities.
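      A minimal sketch of the system-prompt point, assuming the OpenAI Python SDK (the persona text and model name here are just placeholders): everything the model "knows" about its persona is text we hand it up front, not something it holds about itself.

          # pip install openai
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          response = client.chat.completions.create(
              model="gpt-4o",  # placeholder model name
              messages=[
                  # The model's entire "self-knowledge" is injected here.
                  {"role": "system", "content": "You are ChatGPT, a helpful assistant."},
                  {"role": "user", "content": "Who are you?"},
              ],
          )
          print(response.choices[0].message.content)

      Swap the system message for a different persona and the model will describe that "self" just as confidently, which is exactly the anthropomorphization trap.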


It's not self-aware, regardless of what it tells you (see the original link).

  • I'm not sure what you're referring to in the original link; can you please paste an excerpt?

    But thinking about it: what if you had a fully embodied LLM-based robot, built on something like Figure's Helix architecture [0] with a Vision-Language-Action model, and had it look in a mirror and see itself? Wouldn't that on its own be sufficient for self-awareness? (A rough sketch of what I mean follows the link.)

    [0] https://www.figure.ai/news/helix
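    To make that concrete, here is a rough, purely hypothetical sketch of a mirror "mark test" loop for such a robot. None of these classes or methods correspond to a real SDK or to Figure's actual Helix API; they just stand in for a vision-language-action stack.

        # Hypothetical mirror self-recognition ("mark test") loop for an embodied VLA agent.
        from dataclasses import dataclass

        @dataclass
        class Observation:
            image: bytes          # current camera frame
            proprioception: dict  # joint angles, end-effector pose, etc.

        class VLAModel:
            """Stand-in interface for a vision-language-action model."""
            def describe(self, obs: Observation) -> str:
                """Return a natural-language description of the scene."""
                raise NotImplementedError
            def act(self, obs: Observation, instruction: str) -> dict:
                """Return motor commands for a language instruction."""
                raise NotImplementedError

        def mirror_mark_test(robot, model: VLAModel) -> bool:
            # Place a visible marker on the robot beforehand, stand it in front
            # of a mirror, and check whether it reaches for the marker on its
            # own body rather than for the reflection.
            obs = robot.observe()
            if "marker" not in model.describe(obs).lower():
                return False  # it never noticed the mark
            commands = model.act(obs, "touch the marker on your own body")
            robot.execute(commands)
            return robot.touched_own_marker()

    Whether passing a test like this demonstrates self-awareness, or just a learned mapping from pixels and proprioception to motor commands, is of course the question being argued upthread.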