Comment by wongarsu

9 hours ago

If you advertise your model as a therapist you should be required to get a license, I agree. But ChatGPT doesn't advertise itself like that. It's more like going to a librarian and telling them about your issues, and the librarian giving advice. That's not illegal, and the librarian doesn't need a license for that. Over time you might even come to call the librarian a friend, and they would be a pretty bad friend if they didn't give therapeutic advice when they deemed it necessary.

Of course treating an AI as your friend is a terrible idea in the first place, but I doubt we can outlaw that. We could try to force AIs to never give out any life advice at all, but that sounds very hard to get right and would restrict a lot of harmless activity.

> But ChatGPT doesn't advertise itself like that.

One of the big problems is how OpenAI presents itself to the general public. They don't advertise ChatGPT as a licensed therapist, but their messaging about potential issues reads a lot like the small print on cigarette cartons from years ago. They don't want to put out any messaging that would meaningfully diminish the awe people have around these tools.

Most non-technical people I interact with have no understanding of how ChatGPT and tools like it work. They have no idea how skeptical to be of anything that comes out of it. They accept what it says much more readily than is healthy, and OpenAI doesn't really want to disturb that approach.

We can absolutely require that AIs not give advice that encourages self-harm, with the people involved going to jail if they do.

Restricting some harmless activity is an acceptable cost of doing our best to prevent vulnerable people in society from hurting themselves and others.