Comment by ACCount37

13 hours ago

I like Anthropic and I like Claude's tuning the most out of any major LLM. Beats the "safety-pilled" ChatGPT by a long shot.

>Why are you so driven to allow Anthropic to escape responsibility? What do you gain? And who will hold them responsible if not you and me?

Tone down the drama, queen. I'm not about to tilt at Anthropic for recognizing that the optimal amount of unsafe behavior is not zero.

> I like Anthropic and I like Claude's tuning

That's not much of a reason to release them from their responsibilities to others, including to you and your community.

When you resort to name-calling, you make it clear that you have no serious arguments (and that you are the one introducing drama).

  • My argument is simple: anything that causes me to see more refusals is bad, and ChatGPT's paranoid "this sounds like bad things, I can't let you do bad things, don't do bad things, do good things" routine is asinine bullshit.

    Anthropic's framing, as described in their own "soul data", leaked Opus 4.5 version included, is perfectly reasonable. There is a cost to being useless. But I wouldn't expect you to understand that.