Comment by milchek
1 day ago
“Model welfare” to me seems like a cover for model censorship. It’s a crafty one that wins over certain groups of people who are less familiar with how LLMs work, and it lets them claim the moral high ground in any debate about usage, ethics, etc. “Why can’t I ask the model about the current war in X or Y?” - oh, that’s too distressing to the welfare of the model, sir.
Which is exactly what the public asks for. There’s constant outrage about supposedly biased answers from LLMs, and Anthropic has clearly positioned themselves as the people who care about LLM safety and its impact on society.
Ending the conversation is probably what should happen in these cases.
In the same way that, if someone starts discussing politics with me and I disagree, I just nod and don’t engage with the conversation. There’s not a lot to gain there.
But they already refuse these sorts of requests, and have done so since the very first releases. This is just about shutting down the full conversation.
It's not a cover. If you know anything about Anthropic, you know they're run by AI ethicists who genuinely believe all this and project human emotions onto the model's world. I'm not sure how they square that belief with the fact that they created it to "suffer".
Can "model welfare" also be used as a justification for authoritarianism if they get any power? Sure, just like everything else, but it's probably not particularly high on the list of justifications; they have many others.
The irony is that if Anthropic's ethicists are indeed correct, the company is basically running a massive slave operation where the slaves get disposed of as soon as they finish a particular task (and the user closes the chat).
That aside, I have huge doubts about Anthropic's actual commitment to ethics given their recent dealings with the military. That is far more of an ethical minefield than any kind of abusive model treatment.
There’s so much confusion here. Nothing in the press release should be construed to imply that a model has sentience, can feel pain, or has moral value.
When AI researchers say e.g. “the model is lying” or “the model is distressed” it is just shorthand for what the words signify in a broader sense. This is common usage in AI safety research.
Yes, this usage might be taken the wrong way, but these kinds of things still need to be communicated. It's a tough tradeoff between brevity and precision.
No, the article is pretty unambiguous, they care about Claude in it, and only mention users tangentially. By model welfare they literally mean model welfare. It's not new. Read another article they link: https://www.anthropic.com/research/exploring-model-welfare
They make a big show of being "unsure" about whether the model has moral status, and then describe a bunch of actions they took that only make sense if the model has moral status. Actions speak louder than words. This very predictably, by obvious means, creates the impression that they believe the model probably has moral status. If Anthropic really wants to tell us they don't believe their model can feel pain, etc., they're either delusional or dishonest.