Comment by Aurornis
7 days ago
Therapists are (or should be, if they're any good) very good at recognizing when a patient is giving false information, dodging key topics, or trying to manipulate the therapist. It's very common for patients to try to hide things from the therapist, or even lie, even though that's counter to the goals of therapy.
LLMs won't recognize this. They are machines that take input and produce related output that looks correct. It's not hard to figure out how to change your words and press the retry button until you get the answer you want.
It's also trivial to close the chat and start a new one if the advice starts feeling like it's not what you want to hear. Some patients quit human therapists and find new ones on repeat, but that takes weeks and a lot of effort. With an LLM it's just a click and a few seconds, and that inconvenient therapy note is replaced with a blank slate to try again for the desired answer.
I think this is a valid point. At the same time, a user who just wants to talk, or pour their heart out to an empathetic listener, might still benefit from an LLM.
But that's not a therapist; that's a friend, which is still problematic if that friend is too agreeable.