Comment by avaer

5 hours ago

It's the gun control debate in a different outfit.

I don't know if Google is doing _enough_, that can be debated. But if someone is repeatedly ignoring warnings (as the article claims) then maybe we should blame the person performing the act.

Even if we perfectly sanitized every public AI provider, people could just use local AI.

It's absolutely not the gun control debate in a different outfit.

The difference is in how abuse of the given system affects others. This AI affected this person, and his actions harmed only himself. Nothing about the AI enhanced his ability to hurt others. Guns enhance the ability of mentally unstable people to hurt others with ruthless efficiency. That's the real gun debate -- whether guns should be so easy to get, given how drastically they increase the potential damage a deranged person can do.

  • Not to mention that guns don't talk to you, simulate empathy, lead you deeper into delusions, or try to convince you to take any sort of action.

    That's why I don't buy the "an LLM is just a tool, like a gun or a knife" argument. Tools don't talk back. An LLM has gone beyond being "just a tool".

I think the fact that a gun's primary function is harm and murder, while AI is a word prediction engine, makes a huge difference.