
Comment by johnnyanmac

3 months ago

>Using LLMs doesn't kill people

Guess you missed the post where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.

It's way past "some exceptions" at this point.

Besides the suicide one, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.

  • >I don't know of any examples where that has actually killed someone.

    You don't see how a botched law case can cost someone their life? Let's not wait until more people die to rein this in.

    >Someone could search on Google just the same and ignore their symptoms.

    Yes, and it's not uncommon for websites or search engines to be sued. Millennia of laws exist for this exact purpose, so companies can't deflect blame for bad outcomes onto the people.

    If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.

LLMs generate text. It is people who decide what to do with it.

Removing all personal responsibility from this equation isn't going to solve anything.

  • >It is people who decide what to do with it.

    That argument is rather naive, given that millennia of law are meant to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"

    We've regulated public messages for decades, and for good reason. I'm not absolving them this time because they want to hide behind a chatbot. They have blood on their hands.