Comment by eviks
2 days ago
You're abusing the terms by picking either the overly limited ("death") or the overly expansive ("not like") definition to fit your conclusion. Unless you reject the fact that harm can come from words/images, a parrot can parrot harmful words/images, and thus be unsafe.
It's like complaining about bad words in the dictionary.
The bot has no agency; the bot isn't doing anything. People are talking to themselves, augmenting their chain of thought with an automated process. If the automated process is acting in an undesirable manner, the human who started the process can close the tab.
Which part of this is dangerous or harmful?
The maxim “sticks and stones can break my bones, but words can never hurt me” comes to mind here. That said, I think this misses the point that the LLM is not a gatekeeper to any of this.
I find it particularly irritating that the models are so puritanical that they refuse to translate subtitles merely because the subtitles mention violence.
Don't let your mind's potential be limited by such primitive slogans!