Comment by Argonaut998

8 hours ago

I don't know what steps they can take. I suppose the best course of action is to deactivate the account if the LLM deems the user mentally unwell, although those are just additional guardrails that could hurt the quality of the LLM.

In any serious engineering operation, a failure like this is when you shut everything down and redesign until the same failure cannot happen. We all read Feynman's essay on Challenger, right? These companies want credit when their products work as advertised, but push the blame onto users when those products emit plausible lies or demonic advice. Taken too far, that leads to the police walking into HQ, arresting the board of directors, and selling the company for scrap. Just as often it leads to strict regulation, so you can't be a cowboy coder or turn any loft into a sweatshop anymore.

I would absolutely not consider this overreach if the statement elsewhere in this thread that "it had referred the user to mental health hotlines multiple times in the past" is true.

That gets at the broader point that a lot of AI is not ready for the enterprise, especially when interconnected with other AI agents, since it lacks identity and privileged-access management.

Perhaps one could establish rules for "using AI for what it is". For instance, within the boundary of the general public's web interface, don't stop at the disclaimers where it advertises itself as "unable to provide medical advice" or "prone to mistakes"; go further and validate that the person understands, by asking them directly (and perhaps somewhat obviously indirectly) and judging whether they're aware that they're talking to a computer.

At some point they have to say "if we can't make this safe, we can't do it at all". LLMs are great for some things, but if they will do this sort of thing even once, then they are not worth the gains and should be shut down.

  • No, they don't. By that standard we couldn't use any technology. If someone is mentally ill to the point where they are on the verge of suicide, nothing is safe.

    If they're going to curtail LLMs, there'd need to be some actual evidence, and even then it would be hard to justify winding them back given the incredible upsides LLMs offer. It'd probably end up like cars, where a certain number of deaths just has to be tolerated.

    • > If someone is mentally ill to the point where they are on the verge of suicide nothing is safe.

      This is a perspective born only of ignorance. Life can wear down anyone, even the strong. There may come a time in anyone's life when they are on the edge, staring into the abyss.

      At the same time - and this is important - suicidality can pass with time and depression can be treated. Being suicidal is not a death sentence and it just isn't true that "nothing is safe". The important thing is making sure there's no bot "helpfully" waiting to push someone over the cliff or confirm their worst illusions at the worst possible time.

    • Can you imagine what driving would look like if cars were only (self-)regulated by VC-backed startups, as we've seen so far with this new technology? Would there be seatbelts, speed bumps, brake lights, licenses, or speed limits?

      This obviously isn't a binary question. Sure, cars have benefits, but we don't let anyone duct-tape a V8 to a lawnmower, paint flames on it, and sell it to kids while promising godlike capabilities without annoying "safety features".

      Economic benefits cannot justify people's deaths, especially as this technology so far only benefits a handful of people economically. I would like to see evidence of benefits to the greater society (the same society I see being harmed now) before we unleash this thing freely, not the other way around.

    • Your car analogy only proves the opposite. We don't "tolerate" road deaths because they are a fundamental law of physics; we tolerate them because we've spent a century under-investing in safer alternatives like robust public transit and walkable infrastructure, and people have given up.

      Claiming we have to accept a death quota for LLMs just assumes that the current path of the technology is the only path possible. If a tech comes with systemic risk, the answer isn't to just shrug our shoulders and go "oh well, some people may die but it's worth it to use this tech." The answer is to demand a different architecture and better guardrails and oversight before it gets scaled to the entire public.

      Cars are also subject to strict crash-testing regulations, seatbelt laws, speed limits, and licensing based on skills testing. All of these regulations were fought by the auto industry at the time. Want to treat LLMs like cars? Cool: they are now no longer allowed to be released to the public until they've passed standardized safety tests, and people have to be licensed to use them.

    • If cars were invented today, they probably wouldn't be allowed. They get a pass because they already existed, and so we ignore the harm they do.

    • Please tell me the upsides.

      I've been canvassing all and sundry for information on observed productivity gains, and I've gotten answers ranging from 2x, to 30%, to 15%, to "it will make no difference to my life if it's gone tomorrow".

      When I test it on high-reliability workflows, it has never provided the kind of consistency I would expect from an assembly line. I can't even build quality-control systems to ensure high reliability for these things.

      Surveys and studies on AI productivity show mixed results at best.

      So I would love to know the actual, empirical, or even self-reported productivity gains people are seeing.

      And there is no such thing as a free lunch. In FAR too many ways, this is like the days of environmental devastation caused by industrial pollution: the benefits are being felt by a few, the profits by fewer, while a forest fire in our information commons scorches the many.

      Scams and fraud are harder to distinguish, while spam and AI slop abound. Social media spaces are being overrun, and we are moving from forums and blacklists to Discords, verification, and whitelists.

      Media sites are losing visits because Google offers AI summaries at the top of results, killing traffic, donations, and ad revenue.

      Nations are tripping over themselves to ingratiate themselves with the top tech firms and attract investment, since AI is now the only game in town.

      I speak for many when I say I have zero interest in 30% or even 2x personal productivity gains at the low cost of another century of destruction and informational climate change.

  • Suppose they made things worse once and made things better twice?

    "Even once" is not a way to think about anything, ever.

  • Bridges tend to be highly associated with suicides. Should we ban bridges too?

    • Reductio ad absurdum.

      We don't ban bridges, but we do install suicide barriers, emergency phones, and nets on them. We practice safety engineering. A cluster of suicides on a bridge is a design flaw of that bridge, and civil engineers are held accountable for fixing it.

      Plus, a bridge doesn't talk to you. It doesn't use persuasive language, simulate empathy, or give someone in crisis step-by-step instructions for how to jump off it.