
Comment by ACCount37

1 month ago

What harms? I'm sick and tired of the approach to "AI safety" where "safety" means "annoy legitimate users with refusals and avoid PR risk".

The only thing worse than that is the Chinese version, where "alignment" means "what the AI says is aligned with the party line".

OpenAI has refusals dialed up to the max, but they also ship shit like GPT-4o, the model that made "AI psychosis" a term. That's probably the closest the industry has come to shipping a product that actively harms its users.

Anthropic has fewer refusals, but they have yet to fuck up on anywhere near that scale. Possibly because they actually know their shit when it comes to tuning LLM behavior. Needless to say, I like Anthropic's "safety" more.