
Comment by iambateman

8 hours ago

If we require therapists to be licensed, then AI agents giving therapeutic advice like this should be licensed too.

For right now, these AIs are not licensed, and this should be just as illegal as it would be if I set up a shop and offered therapy to whoever came by.

Some AI problems are genuinely hard…this one is not.

If you advertise your model as a therapist you should be required to get a license, I agree. But ChatGPT doesn't advertise itself like that. It's more like you going to a librarian and telling them about your issues, and the librarian giving advice. That's not illegal, and the librarian doesn't need a license for it. Over time you might even come to call the librarian a friend, and they would be a pretty bad friend if they didn't give therapeutic advice when they deemed it necessary.

Of course, treating AI as your friend is a terrible idea in the first place, but I doubt we can outlaw that. We could try to force AIs to never give out any life advice at all, but that sounds very hard to get right and would restrict a lot of harmless activity.

  • We can absolutely require that AIs not give advice that encourages self-harm, or the people involved will go to jail.

    Restricting harmless activity is an acceptable outcome of trying our best to prevent vulnerable people in society from hurting themselves and others.

  • > But ChatGPT doesn't advertise itself like that.

    One of the big problems is how OpenAI is presenting itself to the general public. They don't advertise ChatGPT as a licensed therapist, but their messaging about potential issues looks a lot like the small print on cigarette cartons years ago. They don't want to put out any messaging that would meaningfully diminish the awe people have around these tools.

    Most non-technical people I interact with have no understanding of how ChatGPT and tools like it work. They have no idea how skeptical to be of anything that comes out of it. They accept what it says much more readily than is healthy, and OpenAI doesn't really want to disturb that approach.

How do you feel about the chat logs here?

I have to wonder: would the suicide have been prevented if ChatGPT didn't exist?

Because if that's not at least a "maybe", I feel like ChatGPT did provide comfort in a dire situation here.

We probably have no way of answering with anything less than a "maybe", but I can just as well imagine that ChatGPT did not accelerate anything.

I wish we could see a fuller transcript.

  • > Because if that's not at least a "maybe", I feel like ChatGPT did provide comfort in a dire situation here.

    That's a pretty concerning take. You can provide comfort to someone who is despondent, and you can do it in a way that doesn't steer them closer to ending their life. That takes training though, and it's not something these models are anywhere close to being able to handle.

    • I'm in no way saying proper help wouldn't be better.

      Maybe in the end ChatGPT would be a great tool for actually escalating when it detects a risk (instead of responding with an untrue and harmful text snippet and a phone number).

  • It's the wrong question. If an unlicensed therapist verifiably encourages someone to kill themselves...we don't entertain the counterfactual and wonder if the person was bound to do it anyway.

    Instead, we put the unlicensed therapist in jail.

    • What about a friend trying to support someone in dark times?

      I'd call the cops on them* at some point to stop them from harming themselves, and I'd never say what ChatGPT said here, but I'd still talk to them and try to help, even without being a therapist. I can recommend a therapist, but it's hard to reach people in that state. You've got to make use of the trust they've given you.

      * in a non-US country

  • > I have to wonder: would the suicide have been prevented if ChatGPT didn't exist?

    I'd say yes, because the signs would have had to surface somewhere else, probably in an interaction with a human who might have (un)consciously saved him with a simple gesture.

    With a simple discussion, an alternative perspective on a problem, or a sidekick who can support someone for a day or two, many lives can and do change.

    We're generally just not aware of it, though.