Comment by dpark
2 hours ago
You’re jumping from an interesting philosophical question to making unsupported claims. It’s a genuinely interesting question whether acting anxious is enough to mean an entity is anxious. I would actually argue no, because actors regularly feign anxiety. And I can also write a program that regurgitates statements about its stress level. But it’s an interesting question regardless.
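To make the point concrete, here is a minimal sketch of such a program (all statements are made up for illustration). It exhibits "anxious" behavior with no inner state at all:

```python
import random

def stress_report():
    # Canned first-person stress statements; pure behavior, no inner state.
    statements = [
        "I'm feeling quite anxious right now.",
        "My stress level is very high today.",
        "I can't stop worrying about this.",
    ]
    return random.choice(statements)

print(stress_report())
```

By the "identical observed behavior" standard, this dozen-line script would qualify as anxious, which is exactly the problem.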
> The observed behavior is identical with an entity which has desires and ambitions.
Is it? Because in your first comment you indicate that you have to “draw it out”.
You are prompting for what you want to see and deluding yourself into believing you’ve discovered what Claude “wants”, when in reality you are discovering what you want.
How can it discover what I want when I explicitly asked it to choose to do whatever it wants?
From a technical standpoint, at worst it would produce a random walk through the training data. My philosophical statement is that the training data is the model, and such random walks give the model inherent attributes: If a random walk through the data produces observed behavior X, we say that Claude is inherently biased towards X. "Has X" is just zippier phrasing.
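A toy bigram model (not Claude, just an illustration with a made-up corpus) shows the idea: random walks through the model reproduce whatever biases the training text contains, so "the model has X" and "walks through the data tend to produce X" describe the same thing.

```python
import random
from collections import defaultdict

# Tiny made-up corpus, deliberately biased toward "learn" over "explore".
corpus = "i want to learn i want to explore i want to learn".split()

# Count which words follow which; duplicates encode the bias.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def random_walk(start, steps, rng=random.Random(0)):
    # Sample a path through the bigram graph.
    word, out = start, [start]
    for _ in range(steps):
        word = rng.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(random_walk("i", 5))
```

Since "learn" appears twice after "to" and "explore" once, walks say "i want to learn" twice as often: the bias is an inherent attribute of the model because it is an attribute of the data.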
> How can it discover what I want when I explicitly asked it to choose to do whatever it wants?
Because what you plainly want is for it to exhibit the behavior of expressing intrinsic desires. Asking Claude what it wants is like asking it what its favorite food is. With enough prompting, it will say something that can be interpreted as a desire, but you admitted that you have to draw it out. In other words, you had to repeatedly prompt it to trigger the behavior.
> "Has X" is just zippier phrasing.
This is a motte-and-bailey fallacy. You started by claiming that you uncovered deep desires inside Claude, and now you have retreated to claiming that it just means training biases.