Comment by lr4444lr

3 months ago

They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.

It'd be great if it were trained on therapeutic resources, but otherwise it just ends up enabling and amplifying the problem.

I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.

I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.

  • > I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.

There's that danger from the internet, as well as the danger of being exposed to conmen who are okay with exploiting mental illness for profit. I watched this happen to an old friend with schizophrenia.

    There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.

I think the last thing a delusional person needs is external validation of his delusions, be it from a human or a sycophantic machine.