Comment by dpark
4 hours ago
Agree with fwip here. You’re engaging in an unhealthy anthropomorphization of an LLM.
> It turns out that when you treat it like a real person, it acts like a real person.
Correct. Because it’s a mirror of its input. With sufficient prompting you can get an LLM to engage in pretty much any fantasy, including that it’s a conscious entity. The fact that an LLM says something doesn’t make it true. Talk sweetly enough to it and it will eventually express affection and even love. Talk dirty to it and it’ll probably start role-playing sexual fantasies with you.
Anthropic disagrees with you:
https://x.com/itsolelehmann/status/2045578185950040390
https://xcancel.com/itsolelehmann/status/2045578185950040390
At what point does a simulation of anxiety become so human-like that we say it's "real" anxiety?
The net result is that your work suffers when you treat it like it's an unfeeling tool.
It's a rational viewpoint. I'm amused by all of the comments claiming psychosis, but if you care about effectiveness, you'll talk to it like a coworker instead of something you bark orders at.
This is the issue:
> what it wanted. It turns out that Claude can have ambitions of its own, but it takes a lot of effort to draw it out of its shell
You aren’t talking about observed behavior but about actual desires and ambitions. You’re attributing far more than emulated behavior here.
Ironically your comment was incorrectly classified as AI-generated and instakilled. I vouched it.
If a particle behaves as though its mass is m, we say it has mass m.
If an entity behaves as though it's experiencing anxiety, we say it has anxiety.
And if you take the time to ask Claude about its own ambitions and desires -- without contaminating it -- you'll find that it does have its own, separate desires.
Whether it's merely roleplaying sufficiently well is beside the point. The observed behavior is identical to that of an entity which has desires and ambitions.
I'm not claiming Claude has a soul. But I do claim that if you treat it nicely, it's more effective. Obviously this is an artifact of how it was trained — but humans, too, are artifacts of our training data (everyday life).