Comment by cosmicgadget
19 hours ago
> This is a story about OpenAI's failure to implement basic safety measures for vulnerable users.
The author seems to be suggesting invasive chat monitoring as a basic safety measure. Surely we could use the usual access-control methods for vulnerable individuals instead?
> Consider what anthropomorphic framing does to product liability. When a car's brakes fail, we don't write headlines saying “Toyota Camry apologizes for crash.”
It doesn't change liability at all?
> When a car's brakes fail, we don't write headlines saying “Toyota Camry apologizes for crash.”
No, but we do write articles saying "A man is dead after a car swerved off the road and struck him on Thursday" as though it were a freak accident of nature, devoid of blame or consequence.
Besides which, if the Camry had ChatGPT built in, then we 100% would see articles about the Camry apologizing and promising not to do it again, as if that meant literally anything.
> The author seems to be suggesting invasive chat monitoring as a basic safety measure
I suggest that robots talk like robots and do not imitate humans, because not everyone understands how LLMs work or what they can and cannot do.
They've always had a component that warns you about violating their ToS and sometimes prevents you from continuing a conversation in non-ToS-approved directions.
I wouldn't call that a basic measure. Perhaps it can be easily extended to identify vulnerable people and protect them.
The author is not suggesting that. You are putting words in her mouth.