Comment by embedding-shape

6 hours ago

OK, to put it another way: do you think we clearly know the exact pros and cons of individuals using AI chatbots for mental health treatment?

If you don't clearly know those things, then wanting anything other than "AI companies to be held responsible for the impact of their chatbots" as a first step would be utterly foolish and dehumanizing.

Using uncertainty to punish your perceived enemies is called "injustice".

  • No, figuring out harms is called "thinking before acting", and unless you want laws and regulations written in blood (something I personally want to avoid), you need to think before acting.

    Figuring out what harms something in wide use causes is a good thing, and it doesn't mean "ban it today"; it means "let's figure it out".