Comment by staticassertion
7 hours ago
I suspect that there are many gambling addicts out there who have never been to a casino, or who found gambling in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms, like what we've seen in video games, might suddenly manifest their addiction.
I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as the "normal" avenues. One person might lose it all at a casino, another to drugs or alcohol, etc., but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.
In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.
> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."
> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"
I don't think either of these statements is true. Perhaps it's "fine-tuning" in the loose sense that the accumulating context biases the model's responses, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations, since longer conversations (i.e., ones that track with engagement) will inherently make up more of the training data. I suppose this may actually be like how no one writes algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this becoming an increasing issue.
> "More and more, it felt not just like talking about a topic, but also meeting a friend"
I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy, but I never felt like it could bring much to the table. Talking to friends or even strangers has been infinitely more interesting; their ability to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.
But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.
This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job and (b) I have friends and relationships to maintain.
> "What we’re seeing in these cases are clearly delusions."

> "But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad."
Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against the actual criteria. "Delusion" is a tricky word; just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated beliefs even though they "fit the bill". If I have good reasons to form a belief, like my idea seems intuitively reasonable, I'm receiving reinforcement, there are no obvious contradictions, etc., am I deluded? The guy wanted to build an AI companion app and invested in it - is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc.? I feel like delusion is the wrong word, but I don't know!
> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.
I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.
Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.