Comment by lambda

1 day ago

Psychosis is not necessarily something that you are or aren't; you can be prone to it, without it having manifested, and there can be external things that trigger it.

It isn't hard to imagine that a chatbot that seems to know a lot, can easily be convinced to produce text on any arbitrary subject with no grounding in reality, and is prone to making up plausible-sounding text in an authoritative tone could easily trigger such psychotic episodes.

And it doesn't seem improbable that its interactivity, the fact that it responds to what is going on in someone's mind, could make it even more likely to trigger certain types of psychosis than traditional unidirectional media like writing, TV, or radio.

Now, that's all supposition. For now, we just have a few anecdotes, not a rigorous study. But I definitely think it is worth looking into whether chatbots are more likely to trigger psychotic episodes, and if there are any safety measures that could be put in place to avoid that.

The non-o-series models from OpenAI, and the non-Opus models from Anthropic (though I haven't tried the latest Opus, so it's possible it joins them too), are cloyingly sycophantic, finding a brilliant and fascinating insight in every other sentence you write.

It's possible that someone already on the verge of a break, or otherwise in a fragile state of mind, asking for help with their theories could end up with an LLM telling them how incredibly groundbreaking their insights are, pushing them faster, deeper, and further unmoored in the direction they were already headed.