Comment by tyleo
5 hours ago
One thing I feel like I’ve seen in common with these AI psychosis stories is single long-running chat sessions. I’m constantly clearing context and starting from scratch.
Has anyone else noticed this pattern?
> constantly clearing context and starting from scratch
And using multiple models. And cycling through roles/personas - brainstorming, discussion, critique, etc. And thinking in agentic terms - all agents, including humans, are badly in need of output-review steps.
Yesterday I saw Copilot, from scratch, instructed to critique an idea, go straight into enthusiastic "major innovation!" nuttery. If one lived inside that as a bubble, rather than it being a few-minute transient followed by another model listing difficulties... I have no trouble imagining people going nutter.
Context matters with nuttery. You can have someone respected in their own subfield who, in a different subfield where they don't actually work (and thus lack the incentives, relationships, and reality checks that working there would create), is... far less grounded.
Group-think bubbles as a service - now available in a new low-friction self-adjusting one-person size!
I mean, it's an obvious difference in the primary use cases - if you explicitly want an isolated answer, you might clear context and start from scratch, and if you explicitly want a discussion with a persistent companion (as these people did before any of that psychosis started) then you won't do that.
Yes, but I think that's generally how non-technical people use it.
That's even more interesting to me. I'd like to see data on that.
Also, these people are using the memory features. In technical circles I've seen people made fun of for having them enabled. It's considered "cringe".
Anyone who lets the word "cringe" affect their thoughts or behavior needs to learn to think for themselves.
I think this is an incremental case of Poe's Law. I use the quotation marks to indicate a degree of tongue-in-cheek humor. But yes, there's social pressure against using LLM providers' memory features.