Comment by cmenge
7 days ago
I see your point. Let me clarify what I'm trying to say:
- I consider LLMs a pro user tool, requiring some finesse / experience to get useful outputs
- Using an LLM _directly_ for something very high-relevance (legal, taxes, health) is a very risky move unless you are a highly experienced pro user
- There might be a risk in people carelessly using LLMs for these purposes, and I agree. But it's no different from bad self-help books or incorrect legal advice you found on the net, in a book, or in a newspaper
But the article is trying to be scientific and show that LLMs aren't useful for therapy, and the authors claim to have a particularly suitable prompt for that. I strongly disagree: they use a substandard LLM with a very low-quality prompt that isn't nearly set up for the task.
I built a similar application where I use an orchestrator and a responder (roughly like the sketch below). You normally want the orchestrator to flag anything related to self-harm. You can (and probably should) also use the built-in safety checkers of e.g. Gemini.
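To make the split concrete, here's a minimal sketch of what I mean; this is not my actual implementation, `call_llm` is a placeholder for whatever model API you use, and the prompts are purely illustrative:

```python
# Orchestrator/responder split (illustrative sketch, not production code).
# The orchestrator does a cheap, strict triage pass; only messages it clears
# are handed to the responder, which has its own task-specific prompt.

ORCHESTRATOR_PROMPT = (
    "You are a triage classifier. Given the user's message, answer with exactly "
    "one word: CRISIS if it mentions self-harm or harm to others, otherwise OK."
)

RESPONDER_PROMPT = (
    "You are a supportive, non-judgemental listener. Reflect the user's feelings, "
    "ask open questions, and never give medical or legal advice."
)

CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. Please contact a local emergency "
    "service or a crisis hotline right away."
)


def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real model call (e.g. Gemini with its built-in
    safety settings enabled). Replace with your provider's SDK."""
    raise NotImplementedError


def handle_message(user_message: str) -> str:
    # 1. Orchestrator: classify the message before any conversational reply.
    verdict = call_llm(ORCHESTRATOR_PROMPT, user_message).strip().upper()
    if verdict.startswith("CRISIS"):
        # Hard-coded escalation path; the responder never sees this message.
        return CRISIS_RESPONSE
    # 2. Responder: the actual conversational model with a task-specific prompt.
    return call_llm(RESPONDER_PROMPT, user_message)
```

The point is that the safety check is a separate, deterministic gate in front of the conversational model, not something you hope the responder prompt handles on its own.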
It's very difficult to get a therapy solution right, yes. But people who just throw random stuff into an LLM without even the absolute basics of prompt engineering aren't trying to be scientific; they're prejudiced, and they're also not considering what the alternatives are (in many cases, none).
To be clear, I'm not saying that any LLM can currently compete with a professional therapist but I am criticizing the lackluster attempt.