Comment by kingstnap
5 hours ago
I like the language of fueling being used here instead of the typical causal framing we see, as though using AI means you will go insane.
I would completely agree that if you are already 1x delusional then AI will supercharge that into being 10x delusional real fast.
Granted you could argue access to the internet was already something like a 5x multiplier from baseline anyway with the prevalence of echo chamber communities. But now you can just create your own community with chatbots.
One of the most reliable ways to induce psychosis is prolonged sleep deprivation. And chatbots never tell you to go to bed.
Hm. It shouldn’t be too hard to add something to models to make them do that, right? I guess for that they would need to know the user’s time zone?
Can one typically determine a user’s timezone in JavaScript without getting permissions? I feel like probably yes?
(I’m not imagining something that would strictly cut the user off, just something that would end messages with a suggestion to go to bed, and saying that it will be there in the morning.)
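To the JavaScript question: yes, the IANA time zone is exposed through the standard `Intl` API with no permission prompt, since it's treated as ordinary locale data. A minimal sketch (the late-night cutoff hours here are made up for illustration):

```javascript
// Read the user's IANA time zone via the Intl API -- no permission
// prompt, this is plain locale data available to any script.
const tz = Intl.DateTimeFormat().resolvedOptions().timeZone;
// tz is something like "America/New_York" or "Europe/Berlin"

// For a "go to bed" nudge, the local hour alone may be enough.
const hour = new Date().getHours(); // 0-23 in the user's local time
const isLateNight = hour >= 1 && hour < 5; // arbitrary example window
```

Note the time zone a browser reports is the OS setting, so it can be wrong for travelers or VPN users, and (per the sibling comment) the local clock says nothing about whether someone works third shift.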
Chatbots already have memory, and mine already knows my schedule and location. It doesn't even need to say anything directly, maybe just shorter replies, less enthusiasm for opening new topics. Letting conversation wind down naturally. I also like the idea of continuing topics in the morning, so if you write down your thoughts/worries, it could say "don't worry about this, we can discuss this next morning".
I know a few people who work 3rd shift. That is, people who have good reason to be up all night in their local timezone. They all sleep during times when everyone else around them is awake. While this is a small minority, it's enough that your scheme will not work.
It's funny that you frame it that way, because it's the mirror of (IMO) one of their best features. When using one to debug something, you can just stop responding for a bit and it doesn't get impatient like a person might.
I think you're totally right that that's a risk for some people, I just hadn't considered it because I view them in exactly the opposite light.
Claude will routinely tell me to get some sleep and cuddle with my dog. I may mention the time offhandedly or say I'm winding down, but at least it will include conversation stoppers and decrease engagement.
It'll ask if you're eating properly too! It's like a virtual mom! :-P
From my (limited) experience of ChatGPT versus Claude, I get the same. ChatGPT will always add another "prompt" sentence at the end like "Do you want me to X?" while Claude just answers what I ask.
looking at my history recently, Claude's most recent response is literally just "Exactly the right move honestly — that's the whole point."
My understanding of LLMs with attention heads is that they function as a bit of a mirror. The context will shift from the initial conditions to the topic of conversation, and the topic is fed by the human in the loop.
So someone who likes to talk about themselves will get a conversation all about them. Someone talking about an ex is gonna get a whole pile of discussion about their ex.
... and someone depressed or suicidal, who keeps telling the system their own self-opinion, is going to end up with a conversation that reflects that self-opinion back on them as if it's coming from another mind in a conversation. Which is the opposite of what you want to provide for therapy for those conditions.
In a way this kind of reminds me of how in some religions or cultures, they may try to warn you away from using Ouija boards or Tarot, or really anything where you are doing divination. I suppose because in a way, it could lead to an uncharted exploration of heavy topics.
I’m not a heavy user of LLMs and I’m not sure how delusional I could be, but I wonder if a lot of these things could be prevented if people could only send like one or two follow up messages per conversation, and if the LLM’s memory was turned off. But then I suppose this would be really bad for the AI companies’ metrics. Not sure how it would impact healthy users’ productivity either. Any thoughts?
Not just the metrics, the actual utility. For the things the LLMs are good at, the context matters a lot; it's one of the things that makes them more than glorified ELIZA chatbots or simple Markov chains. To give a concrete example: LLMs underpin the code editing tools in things like Copilot. And all that context is key to allow the tool to "reason" through the structure of a codebase.
But they should probably come with a big warning label that says something to the effect of "IF YOU TALK ABOUT YOURSELF, THE NATURE OF THE MACHINE IS THAT IT WILL COME TO AGREE WITH WHAT YOU SAY."
The real question to me here is not the computer. It's why there is such a segment of the population that is so willing to listen to a machine. Is it upbringing, societal, circumstance, mental health, genetic?
I know about the Milgram obedience to authority experiments, but a computer is not really an authority figure.