Comment by like_any_other
19 hours ago
> This is a story about OpenAI's failure to implement basic safety measures for vulnerable users.
I'm trying to imagine what kind of safety measures would have stopped this, and nothing short of human supervisors monitoring all chats comes to mind. I wouldn't call that "basic". I guess that's why the author didn't describe these simple and affordable "basic" safety measures.
I also wonder why we do not expect radio stations, television channels, book publishers, etc. to make sure that their content will not be consumed by the most vulnerable population. It's almost as if we do not expect companies to baby-proof everything at all times.
Social media companies get bad press for hosting harmful content pretty often, e.g.
https://www.cnn.com/2021/10/04/tech/instagram-facebook-eatin...
Expecting a company to keep Grok from calling itself a Nazi and producing racist imagery is not baby-proofing.
> I wouldn't call that "basic".
"Basic" is relative. Nothing about LLMs is basic; it's all insanely complex, but in the context of a list of requirements "Don't tell people with signs of mental illness that they're definitely not mentally ill" is kind of basic.
> I'm trying to imagine what kind of safety measures would have stopped this, and nothing short of human supervisors monitoring all chats comes to mind.
Maybe this is a problem they should have considered before releasing this to the world and announcing it as the biggest technological revolution in history. Or rather, I'm sure they did consider it, but they should have actually cared rather than shrugging it off in pursuit of billions of dollars and a lifetime of fame and fortune.