Comment by GrinningFool
2 hours ago
I'm really not suggesting a ban; there's no way that would fly.
I'm suggesting restraint and responsibility on the part of the organization pushing this. When do we learn that being reactive after the harm is done isn't actually a required method of doing business? That it's okay to slow down even if there's a short-term opportunity cost?
This applies just as much to the push for LLMs everywhere as it does to OpenAI's specific intention to support sexbots.
But it's all the same pattern. Push for as much as we can, as fast as we can, at as broad a scale as we can -- and deal with the consequences only when we can't ignore them anymore. (And if we can keep that to a bare minimum, that would be best for the bottom line.)