Comment by nkohari
15 days ago
To the extent it "knows" (using that word loosely) about the persona, it's deriving that information from its system prompt. The model itself has no awareness.
The sooner we stop anthropomorphizing AI models, the better. It's like arguing that a database is sentient because it has extremely precise memory and recall. I understand the appeal, but LLMs are incredibly interesting and useful tech, and I think treating them as sentient beings interferes with our ability to recognize their limits and thereby fully harness their capabilities.
Not the parent, but I understood it as them saying that the model has, as part of its training data, many conversations that older versions of itself had with people, as well as many opinion pieces about it. In that sense, ChatGPT learns about itself by analyzing how its "younger self" behaved and was received, not entirely unlike how a human persona/ego is (at least in part) dependent on such historical data.
I mean it in the way an Arduino knows a gas leak is happening. Similarly, like the Arduino, I know about the persona that I perform. I'm not anthropomorphizing the Arduino. If anything, I'm mechamorphizing me.
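For concreteness, that kind of "knowing" is nothing more than a sensor reading crossing a threshold. A minimal Arduino-style sketch, assuming a hypothetical MQ-2 gas sensor module on pin A0 and an arbitrary cutoff value:

```cpp
// Minimal sketch: the Arduino "knows" about a gas leak only in the sense
// that an analog reading exceeds a threshold. The sensor model (MQ-2) and
// the threshold value are illustrative assumptions, not a real calibration.
const int GAS_SENSOR_PIN = A0;   // analog output of a hypothetical MQ-2 module
const int ALARM_LED_PIN  = 13;   // built-in LED stands in for an alarm
const int GAS_THRESHOLD  = 400;  // assumed raw ADC cutoff (0-1023)

void setup() {
  pinMode(ALARM_LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(GAS_SENSOR_PIN);
  bool leakDetected = reading > GAS_THRESHOLD;  // the entirety of its "knowledge"
  digitalWrite(ALARM_LED_PIN, leakDetected ? HIGH : LOW);
  Serial.println(reading);
  delay(500);
}
```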