Comment by jdietrich

7 days ago

For a relatively literate and high-functioning patient, I think that LLMs can deliver good-quality psychotherapy that would be within the range of acceptable practice for a trained human. For patients outside that cohort, there are some significant safety and quality issues.

The obvious example of patients experiencing acute psychosis has been fairly well reported - LLMs aren't trained to identify acutely unwell users and will tend to entertain delusions rather than saying "you need to call an ambulance right now, because you're a danger to yourself and/or other people". I don't think that this issue is insurmountable, but there are some prickly ethical and legal issues with fine-tuning a model to call 911 on behalf of a user.

The much more widespread issue IMO is users with limited literacy, or a weak understanding of what they're trying to achieve through psychotherapy. A general-purpose LLM can provide a very accurate simulacrum of psychotherapeutic best practice, but it needs to be prompted appropriately. If you just start telling ChatGPT about your problems, you're likely to get a sympathetic ear rather than anything that would really resemble psychotherapy.

For the kind of people who use HN, I have few reservations about recommending LLMs as a tool for addressing common mental illnesses. I think most of us are savvy enough to use good prompts, keep the model on track and recognise the shortcomings of a very sophisticated guess-the-next-word machine. LLM-assisted self-help is plausibly a better option than most human psychotherapists for relatively high-agency individuals. For a general audience, I'm much more cautious and I'm not at all confident that the benefits outweigh the risks. A number of medtech companies are working on LLM-based psychotherapy tools and I think that many of them will develop products that fly through FDA approval with excellent safety and efficacy data, but ChatGPT is not that product.