Comment by nkohari

15 days ago

Again, there's no real measure for consciousness, so it's difficult to say. If you ask me, frontier models meet the definition of intelligence, but not the definition of self-awareness, so they aren't sentient regardless of whether they are conscious. This is a pretty fundamental philosophical question that's been considered for centuries, outside of the context of AI.

ChatGPT knows about the ChatGPT persona. Much like I know the persona I play in society and at home. I don't know what the "core" me is like at all. I don't have access to it. It seems like a void. A weird eye. No character, no opinions.

The persona, I know very well.

  • To the extent it "knows" (using that word loosely) about the persona, it's deriving that information from its system prompt. The model itself has no awareness.

    The sooner we stop anthropomorphizing AI models, the better. It's like talking about how a database is sentient because it has extremely precise memory and recall. I understand the appeal, but LLMs are incredibly interesting and useful tech, and I think treating them as sentient beings interferes with our ability to recognize their limits and thereby fully harness their capabilities.

    • Not the parent, but I understood them as saying that the model's training data includes many conversations that older versions of itself had with people, along with many opinion pieces about it. In that sense, ChatGPT learns about itself by analyzing how its "younger self" behaved and was received, not entirely unlike how a human persona/ego is (at least in part) built from such historical data.

    • I mean it in the way an Arduino knows a gas leak is happening. Similarly, like the Arduino, I know about the persona that I perform. I'm not anthropomorphizing the Arduino. If anything, I'm mechamorphizing myself.
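
      That kind of "knowing" is, concretely, just a threshold check. A minimal sketch of what I mean, assuming a generic analog gas sensor (an MQ-2 or similar) on pin A0; the pin and threshold here are placeholders, not a real calibration:

        // Everything the Arduino "knows" about the leak: one analog reading compared to a number.
        const int GAS_PIN = A0;           // assumed sensor pin
        const int LEAK_THRESHOLD = 400;   // raw 10-bit ADC value (0-1023), uncalibrated guess

        void setup() {
          Serial.begin(9600);
        }

        void loop() {
          int reading = analogRead(GAS_PIN);
          if (reading > LEAK_THRESHOLD) {
            Serial.println("Gas leak detected");   // the sum total of its "awareness"
          }
          delay(1000);                             // check again in a second
        }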