Comment by bigmattystyles
7 days ago
At a reactive level I agree; at a practical level, I disagree. I think a better long-term goal would be LLMs approved for therapy - ones that know when a human is needed. My wife is a therapist, an MFT, and having peeked behind the scenes of both her schooling and the others in her practicum, I was aghast at how amateur and slapdash it all appeared. I'm someone who needs and will need therapy for life - I can have bouts of horrible OCD that, when I'm in them, are just awful. I've found someone good, but she's expensive. My point: if you hold therapists up on a pedestal, check out the number of legs on that thing.
LCSW requirements in Louisiana:
And first you have to get the MSW (a master's degree), pass the licensing exam, and carry professional insurance.
Those 96 "BACS" hours run ~$100 per hour, at a max of 1 hour per week, and have to be completed in under 4 years.
The "slapdash" education is because all of this is "soft science" - the real education is on-the-job training, hence the requirement of over 5,000 hours of clinical experience.
I also have things to say about other comments, but suffice it to say: neither your experience nor your cohort's experience is universal.
Feeding patient files and clinical notes into a training set violates so many ethical and legal rules; but barring that, you'd have to train those 8,000+ hours' worth, for each cohort, in each geographical or political region. What works in Louisiana may not be as effective in DTLA or SeaTac. What works in emergency mental health situations won't work on someone burnt out from work, or experiencing compassion fatigue. Being a mental health professional in a hospital environment is different than in a clinical environment, which is different than in private practice. That's what the 8,000 hours of training prepare you for: the environment the work will actually be done in.
Sure, someone could pull a Meta and just violate ethics and do it anyhow, but what will they charge to use it? How will the "LLM" keep track of 45 minutes' worth of notes per session? Do you have any idea how much writing is involved? Treatment plans, session notes, treatment team notes, never mind the other overhead.
LLMs can barely function as "artists" of any sort, but we want to shoehorn them into mental health? C'mon.
> Feeding patient files and clinical notes into a training set violates so many ethical and legal rules;
> How will the "LLM" keep track of 45 minutes' worth of notes per session? Do you have any idea how much writing is involved? Treatment plans, session notes, treatment team notes, never mind the other overhead.
It sounds like you're asking this as a hypothetical, when in fact this has been a reality for well over a year (while following all the legal requirements). From another comment of mine:
"BTW, putting mental health aside, many doctors today are using LLM tools to record the whole conversation with the patient and provide good summaries, etc. My doctor loves it - before he was required to listen to me and take notes at the same time. Now he feels he can focus on listening to me. He said the LLM does screw up, but he exists to fix those mistakes (and can always listen to the audio to be sure).
I don't know which company is providing the LLM on the backend - likely a common cloud provider (Azure, Google, etc.). But again - they are fully HIPAA compliant. It's been in the medical space for well over a year."
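For the curious, here's roughly the shape of that kind of "ambient scribe" pipeline - a minimal sketch assuming an OpenAI-style API (I don't know which vendor or models the real products actually use):

```python
# Minimal sketch of a visit-transcription pipeline, assuming an OpenAI-style
# API. Illustrative only - real HIPAA-covered products differ.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_visit(audio_path: str) -> str:
    # 1. Speech-to-text on the recorded conversation.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Turn the transcript into draft visit notes.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize this doctor-patient conversation as draft "
                        "visit notes. Flag anything ambiguous for review."},
            {"role": "user", "content": transcript.text},
        ],
    )
    # 3. The clinician reviews and corrects the draft before filing it.
    return response.choices[0].message.content
```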
And each session is important to future sessions. All you've shown is that speech-to-text is possible. And the claim is that it will save the audio ... for how long? Forever? What a privacy nightmare.
I've literally done a POC of a therapeutic LLM, where the user journals once a day, a few sentences. After a couple of months, the context grows to the point that the LLM starts screwing up when reading the entire history. It hallucinates things that never happened, and it starts changing its "feelings" about past events in ways that don't make therapeutic sense.
There's no way around this, currently.
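To make the failure mode concrete, the POC was roughly this shape (an illustrative sketch assuming an OpenAI-style API, not the actual code):

```python
# Journaling POC sketch: every entry is appended to one ever-growing context.
# After a couple of months of daily entries, replies referencing early entries
# start to drift and hallucinate. Names and prompts here are made up.
from openai import OpenAI

client = OpenAI()
journal: list[dict] = [
    {"role": "system",
     "content": "You are a reflective journaling companion. Only reference "
                "events the user actually wrote about."},
]

def add_entry_and_reflect(entry: str) -> str:
    journal.append({"role": "user", "content": entry})
    # The full history is resent on every call, so the context grows without
    # bound - exactly where recall of early entries starts breaking down.
    response = client.chat.completions.create(model="gpt-4o", messages=journal)
    reply = response.choices[0].message.content
    journal.append({"role": "assistant", "content": reply})
    return reply
```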
So re-read what I wrote, because I said everything I meant to say, and needed to say.
I agree with almost everything you said - especially about LLMs not being nearly ready yet. I didn't phrase that very well. The practicum and supervision did seem very intense and thorough, and I will admit that since they involved actual clients, what my wife could/should/did share about them was nearly nil, so my visibility into them was just as nil.
The part I disagree with is:
>> Feeding patient files and clinical notes into a training set violates so many ethical and legal rules
I know it's unrealistic, but I wonder if completely anonymized records would help, or if that would remove so much context as to be useless. I guess I would allow my anonymized-enough medical records to be available for training 100 years after my death, though I get that even that is a time bomb with genetics.
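A toy example of why "completely anonymized" is so hard - a naive scrubber catches the direct identifiers but leaves the quasi-identifiers that make re-identification possible (the patterns and the note below are made up for illustration):

```python
# Naive "anonymization": regex-scrub the obvious direct identifiers.
# The point is what it misses - occupations, places, dates, and unique life
# details survive, which is why records can still be re-identified.
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),
]

def scrub(note: str) -> str:
    for pattern, token in PATTERNS:
        note = pattern.sub(token, note)
    return note

note = "Pt is a 42yo firefighter in Metairie, injured in the 2016 flood."
print(scrub(note))  # printed unchanged: job + town + event can pinpoint one person
```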
And yes, obviously my comment was a personal anecdote.
I am still under the impression that it is impossible to anonymize data to the extent necessary that law enforcement and the judiciary cannot pierce it. Furthermore, the implication here is that we'd use old notes of people long since dead to train the LLMs necessary to do this work.
Since this is a soft science, trends, names, and diagnoses have radically changed. To paraphrase Carlin: "It used to be 'shell shock', then came 'battle fatigue', then came 'post-traumatic stress disorder'; we went from two syllables that accurately described it to eight syllables, and we've added a dash and completely removed the humanity that might have helped these people get the attention and care they deserved."
I don't comment that flippantly.
And I wasn't really speaking to you; I didn't want to top-comment, and I didn't want to read past your top comment to find a more appropriate place, because I was already getting eye twitches from all of the hot takes above your comment.
Yes, I definitely agree here. We've known for a long while that 1:1 therapy isn't the only way to treat depression, even if we aim to use psychotherapy methods like CBT/DBT.
David Burns released his CBT guide "Feeling Good" in 1980, which he labeled a new genre of "bibliotherapy". His book has been shown to have clinically significant effects on depression remission. Why can't an LLM provide a more focused and interactive version of this very book?
Now, I agree with you and the article's argument that one cannot simply throw gpt-4o at a patient and expect results. The LLM must be trained both to be empathetic and to push back against the user when necessary, forcing the user to build nuance into their mental narrative and face their cognitive distortions; a rough sketch of what that could look like at the prompt level is below.
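Something like this, hypothetically - the wording is made up for illustration, not a validated therapeutic tool:

```python
# Hypothetical system prompt for a bibliotherapy-style assistant. Illustrative
# of the "empathize, then push back" behavior discussed above; not a product.
CBT_SYSTEM_PROMPT = """You are a structured journaling assistant grounded in
CBT techniques, in the spirit of Burns' "Feeling Good".
- Reflect the user's feelings back before analyzing them.
- When a statement matches a cognitive distortion (all-or-nothing thinking,
  catastrophizing, mind reading, labeling), name it and ask the user for
  evidence for and against the thought.
- Do not simply validate; ask one concrete question that challenges the
  distortion.
- If the user mentions self-harm or crisis, stop and direct them to a human
  professional and local crisis resources."""
```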
1:1 therapy isn't the only way to treat depression, but it's still unmatched for personality disorders, and combined with medication it can be a huge force multiplier for OCD, GAD, MDD, schizophrenia, ADHD, and, yes, depression.
The problem is that because therapy is as much art as science, the subset of skilled, intelligent therapists is much smaller than the set of all therapists, and the subset of skilled, intelligent therapists with experience and knowledge of your particular disorder and a modality that's most effective for you is tiny, making it frustratingly hard to find a good match.