Comment by datsci_est_2015
1 day ago
Outcomes seem likely to be K-shaped: people capable of critical thinking, who can judge which information should be confirmed by a healthcare professional and which is relatively low-risk to take from ChatGPT, should see positive outcomes.
Those prone to disinformation and misinterpretation may experience some very negative health outcomes.
I agree with that. The question, I suppose, is whether an LLM can detect, perhaps from the question itself, whether it's dealing with someone who is (I hate to say it) "stable".
Anyone asking how to commit suicide, as a recent example, should be an obvious red flag. We can get more nuanced from there.
> The question, I suppose, is whether an LLM can detect, perhaps from the question itself, whether it's dealing with someone who is (I hate to say it) "stable".
GPT-5 marked a major advance in mental health guardrails for sensitive conversations.
https://www.theverge.com/news/718407/openai-chatgpt-mental-h...
https://openai.com/index/strengthening-chatgpt-responses-in-...