Comment by pyuser583

1 day ago

The problem is that “safety” prevents users from using LLMs to meet their requirements.

We typically don’t critique users’ requirements, at least not where functionality is concerned.

The marketing angle is that this measure is needed because LLMs are “so powerful it would be unethical not to!”

AI marketers are continually emphasizing how powerful their software is. “Safety” reinforces this.

“Safety” also raises many of the same debates that “mis/disinformation” does. Misinformation concerns consistently overestimate the power of social media.

I’d feel much better if “safety” focused on preventing unexpected behavior rather than evaluating users’ motives.