Comment by augusteo
2 days ago
Building on zemo's point about parasocial relationships: traditional parasocial interaction involves a performer who doesn't know you exist. Here the AI does respond to you specifically, which changes the dynamic.
Is it still parasocial if the other party is responsive but not conscious? Or is this something new that we don't have good language for yet?
I think “parasocial” still captures part of it (one-to-many distribution, performer vibe), but there’s also a true interactive dyad here. It’s closer to “synthetic social interaction” or “responsive parasocial.” I don’t have a perfect word yet, but the asymmetry and the responsiveness both matter.
You need to first prove that AI is not conscious.
I find it hard to even convince others that I am a conscious person.
Maybe consciousness is just a matter of belief: if I see this AI and believe it's a person, then I'm talking to a conscious entity.
I’m not trying to make any claims about consciousness. For us, the practical question is: does the interaction feel supportive and useful while staying transparent about being a model? The rest is philosophy, and I’m happy to read more perspectives.
Give it access to a terminal and see what it does, unprompted. Does it explore? Does it develop interests? Does it change when exposed to new information?
We’re not giving it unconstrained tool access. In-product, actions are either not available or gated behind explicit user intent and strict allowlists. The interesting part for us is the real-time conversational loop and memory personalization, not autonomous exploration.
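To make the gating concrete, here's a minimal sketch of that pattern: an action runs only if it's on a static allowlist *and* the user explicitly confirmed it. The action names and function are hypothetical illustrations, not the actual product code.

```python
# Hypothetical allowlist gate: the model can request actions, but nothing
# runs unless it's allowlisted AND the user explicitly confirmed it.
ALLOWED_ACTIONS = {"set_reminder", "play_music"}  # assumed action names

def execute_action(action: str, user_confirmed: bool) -> str:
    # Deny anything not allowlisted, regardless of what the model asked for.
    if action not in ALLOWED_ACTIONS:
        return "denied: not allowlisted"
    # Even allowlisted actions require explicit user intent.
    if not user_confirmed:
        return "denied: needs user confirmation"
    return f"executed: {action}"
```

The point of the design is that the model's output is a request, not a command: the deny-by-default check sits outside the model entirely.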
>Does it change when exposed to new information?
By this metric most humans are not conscious.
[flagged]
Yeah the “rocks are alive and conscious” crowd are certainly something
I think maybe there needs to be a new word. It's still an asymmetric relationship: kind of a mix of DMing an influencer and chatting with the barista because you think she actually likes you. You're talking to a mirage.