
Comment by motbus3

7 days ago

Anyone who recommends an LLM to replace a doctor, a therapist, or any other health professional is utterly ignorant or has an interest in profiting from it.

One can easily make an LLM say anything, given the nature of how it works. An LLM can and will eventually offer suicide options to depressed people. In the best case, it is like recommending that a sick person read a book.

I can see how recommending the right books to someone who's struggling might actually help, so in that sense it's not entirely useless and could even help the person get better. But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.

Personally, I'd love to see LLMs become as useful to therapists as they've been for me as a software engineer: boosting productivity, not replacing the human. Therapist-in-the-loop AI might be a practical way to expand access to care while potentially increasing quality as well (not all therapists are good).
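To make that concrete, here's a toy sketch of the therapist-in-the-loop shape I have in mind: the model only drafts, and nothing reaches the client until a human signs off. `call_llm` is a hypothetical placeholder for whatever model backend you'd plug in, not any real API.

```python
# Toy sketch of a therapist-in-the-loop flow: the LLM drafts, a human
# therapist approves or rewrites, and only the approved text is sent.
from dataclasses import dataclass
from typing import Optional


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: plug in any local or hosted model here.
    raise NotImplementedError


@dataclass
class Draft:
    client_message: str
    suggestion: str
    approved: bool = False


def draft_reply(client_message: str) -> Draft:
    # The model never talks to the client directly; it only proposes text.
    suggestion = call_llm(f"Draft a supportive, non-clinical reply to: {client_message}")
    return Draft(client_message, suggestion)


def therapist_review(draft: Draft, edit: Optional[str] = None) -> str:
    # A human gates every message; they can accept the draft or rewrite it.
    draft.approved = True
    return edit if edit is not None else draft.suggestion
```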

  • That is the byproduct of this tech bubble called Hacker News: programmers who think that real-world problems can be solved by an algorithm that's been useful to them. Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next. I'd also argue it's of limited use in real software engineering, except for some tedious/repetitive tasks. Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist? Just as it comes with its own biases toward React apps, what biases would it come with for therapy?

    • I feel like this argument is a byproduct of being relatively well-off in a Western country (apologies if I'm wrong), where access to therapists and mental healthcare is a given rather than a luxury (and even that is arguable).

      > programmers who think that real-world problems can be solved by an algorithm that's been useful to them.

      Are you suggesting programmers aren't solving real-world problems? That's a strange take, considering nearly every service, tool, or system you rely on today is built and maintained by software engineers to some extent. I'm not sure what point you're making or how it challenges what I actually said.

      > Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next.

      Haven't you considered how crypto, despite the hype, has played a real and practical role in countries where fiat currencies have collapsed to the point people resort to in-game currencies as a substitute? (https://archive.ph/MCoOP) Just because a technology gets co-opted by hype or bad actors doesn't mean it has no valid use cases.

      > Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist?

      LLMs are far more capable than you're giving them credit for in that statement, and that example isn't even close to what I was suggesting.

      If your takeaway from my original comment was that I want to replace therapists with a code-generating chatbot, then you either didn't read it carefully or willfully misinterpreted it. The point was about accessibility: in parts of the world where human therapists are inaccessible, costly, or simply don't exist in meaningful numbers, AI-assisted tools (with a human in the loop wherever possible) may help close the gap. That doesn't require perfection or replacement, just being better than nothing, which is what many people currently have.


  • > But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.

    My observation is exactly the opposite. Most people who say that are in fact suggesting that LLMs replace therapists (or teachers, or whatever). And they mean it exactly like that.

    They are not acknowledging how hard mental healthcare is to access; they do not know much about that. They do not even know what therapies do or don't do; the people who suggest this are frequently those whose idea of therapy comes from movies and Reddit discussions.

> Anyone who recommends an LLM to replace a doctor, a therapist, or any other health professional is utterly ignorant or has an interest in profiting from it.

I disagree. There are places in the world where doctors are an extremely scarce resource. A tablet with an LLM layer and WebMD could do orders of magnitude more good than bad. Doing nothing, having no access to medical advice at all, already kills many, many people. Having the ability to ask in your own language, in natural language, and get a "mostly correct" answer can literally save lives.

LLM + "docs" + the patient's "common sense" (e.g. no glue on pizza) >> not having access to a doctor, following the advice of the local quack, and so on.
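As a rough sketch of what I mean by "LLM + docs": ground the model's answer in a small offline corpus instead of letting it free-associate. The retrieval here is deliberately naive, and `call_llm` stands in for whatever local or hosted model such a tablet would actually ship with; none of this is a real product's API.

```python
# Toy sketch: answer health questions grounded in a small offline corpus
# (WebMD-style articles) rather than from the model's free association.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: plug in any chat-completion backend here.
    raise NotImplementedError


def retrieve(question: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Naive keyword-overlap ranking; a real system would use embeddings.
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def answer(question: str, corpus: dict[str, str]) -> str:
    # Stuff the top-ranked articles into the prompt as reference text.
    context = "\n\n".join(retrieve(question, corpus))
    prompt = (
        "You are a triage assistant, not a doctor. Answer in the user's own "
        "language, rely only on the reference text below, and always advise "
        "seeing a clinician for anything serious.\n\n"
        f"Reference text:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```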

  • The problem is that that's not what they will do. There will be fewer doctors where they exist now, and real doctors will become even more expensive, making them accessible only to the richest of the rich. I agree that having it as an alternative would be good, but I don't think that's what's going to happen.

    • Eh, I'm more interested in talking and thinking about the tech stack, not about how a hypothetical evil "they" will use it (which is irrelevant to the tech being discussed, tbh). There are arguments for this tech being useful that don't come from "naive" people or from people wanting to sell something, and that's why I replied to the original post.


> An LLM can and will eventually offer suicide options to depressed people.

"An LLM" can be made to do whatever, but from what I've seen, modern versions of ChatGPT/Gemini/Claude have very strong safeguards around that. It will still likely give people inappropriate advice, but not that inappropriate.