Comment by quitit
2 months ago
There are plenty of reasons why having a chatbot partner is a bad idea (especially for young people), but here are just a few:
- The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions. Real relationships have friction, and from this we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These also shape one's personal identity and sense of self-worth.
- Real relationships have input from each participant, whereas chatbots respond to the user's contribution only. The chatbot doesn't have its own life experiences and happenings to bring to the relationship, nor does it initiate anything autonomously; its output is always some kind of structured reply to the user.
- The implication of being fully satisfied by a chatbot is that the person isn't seeking a partner who contributes to the relationship, but rather just an entity that only acts in response to them. It can also indicate some underlying problem the individual needs to work through regarding why they don't want to seek genuine human connection.
That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.
People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.
You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
And the prompt / context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.
> But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee who's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do, so it's just another form of sycophancy.
To state things a different way: it's possible to prompt an LLM in such a way that it will at times strongly and fiercely argue against what you're saying, even in an emergent manner, where the disagreement surprises the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).
Hmm. I think you may be confusing sycophancy with simply following directions.
Sycophancy is a behavior. Your complaint seems more about social dynamics and whether LLMs have some kind of internal world.
>> That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.
> You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
Also: if someone makes it "challenging", it's only going to be "challenging" with the scare quotes; it's not actually going to be challenging. Would anyone deliberately, consciously program in a real challenge, put up with all the negative feelings a real challenge would cause, and invest that kind of mental energy for a chatbot?
It's like stepping on a thorn. Sometimes you step on one and you've got to deal with the pain, but no sane person is going to go out stepping on thorns deliberately because of that.
> and it's not too difficult to make an opinionated and challenging chatbot
Funnily enough, I've saved instructions for ChatGPT to always challenge my opinions with at least two opposing views, and never to agree with me if it seems that I'm wrong. I've also saved instructions for it to cut down on pleasantries and compliments.
Works quite well. I still have to slap it around for being too supportive or too agreeable from time to time - but in general it's good at digging up opposing views and telling me when I'm wrong.
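For anyone curious, here's a rough sketch of the same idea expressed as a system prompt through the API rather than ChatGPT's saved-instructions feature (the wording, helper name, and model name are just illustrative, not my exact settings):

    # Rough API-side equivalent of the saved instructions described above.
    # Assumes the standard `openai` Python client; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    CHALLENGE_ME = (
        "Always challenge my opinions with at least two opposing views. "
        "Never agree with me if it seems I'm wrong. "
        "Skip pleasantries and compliments."
    )

    def ask(question: str) -> str:
        # Prepend the challenge-me instructions as a system message.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": CHALLENGE_ME},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Remote work is strictly better than office work."))

The saved-instructions UI does essentially the same thing: it quietly prepends your preferences to every conversation.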
>People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though.
I don't disagree that some people take AI way too far, but overall, I don't see this as a significant issue. Why must relationships and human interaction be shoved down everyone's throats? People tend to impose their views on what is "right" onto others, whether it concerns religion, politics, appearance, opinions, having children, etc. In the end, it just doesn't matter - choose AI, cats, dogs, family, solitude, life, death, fit in, isolate - it's just a temporary experience. Ultimately, you will die and turn to dust like around 100 billion nameless others.
I lean toward the opinion that there are certain things people (especially young people) should be steered away from because they tend to snowball in ways people may not anticipate, like drug abuse and suicide; situations where they wind up much more miserable than they realize, not understanding that the various crutches they've adopted to hide from pain/anxiety have kept them from happiness (this is simplistic, though; many introverts are happy and fine).
I don't think I have a clear-enough vision on how AI will evolve to say we should do something about it, though, and few jurisdictions do anything about minors on social media, which we do have a big pile of data on, so I'm not sure it's worth thinking/talking about AI too much yet, at least as it relates to regulating for minors. Unlike social media, too, the general trajectory for AI is hazy. In the meantime, I won't be swayed much by anecdotes in the news.
Regardless, if I were hosting an LLM, I would certainly be cutting off service to any edgy/sexy/philosophy/religious services to minimize risk and culpability. I was reading a few weeks ago on Axios about actual churches offering chatbots. Some were actually neat; I hit up an Episcopalian one to figure out what their deal was and now know just enough to think of them as different-Lutherans. Then there are some where the chatbot is prompted to be Jesus or even Satan. Which, again, could actually be fine and healthy, but if I'm OpenAI or whoever, you could not pay me enough.
> chatbots are responding to the user's contribution only
Which is also why I feel the label "LLM Psychosis" has some merit to it, despite sounding scary.
Much like auditory hallucinations where voices are conveying ideas that seem-external-but-aren't... you can get actual text/sound conveying ideas that seem-external-but-aren't.
Oh, sure, even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.
> even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.
The mental corruption due to surrounding oneself with sycophantic yes men is historically well documented.
Excellent point. It’s bad for humans when humans do it! Imagine the perfect sycophant: never tires or dies, never slips, never pulls a bad facial expression, can immediately swerve their thoughts to match yours with no hiccups.
It was a danger for tyrants and it’s now a danger for the lonely.
I wonder if in the future that'll ever be a formal medical condition: Sycophancy poisoning, with chronic exposure leading to a syndrome of some sort...
That explains why Elon Musk is such an AI booster. The experience of using an LLM is not so different from his normal life.
are you sure they are internal? https://news.ycombinator.com/item?id=45957619
> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions.
To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.
> To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.
This sounds like an argument in favor of safe injection sites for heroin users.
Hey hey, safe injecting rooms have real harm minimisation impacts. Not convinced you can say the same for chatbot boyfriends.
That's exactly right, and that's fine. Our society is unwilling to take the steps necessary to address the root causes of drug abuse epidemics (privatization of the healthcare industry, lack of a social safety net, the war on drugs), so localities have to do harm reduction in immediately actionable ways.
So too is our society unable to do what's necessary to reduce the startling alienation happening (halt suburban hyperspread, reduce working hours to give more leisure time, give workers ownership of the means of production so as to eliminate alienation from labor), so: AI girlfriends and boyfriends for the lonely NEETs. Bonus, maybe it'll reduce school shootings.
Given that those tend to have positive effects for the societies that practice this, is that what you wanted to say?
Wouldn't they be seeking a romantic relationship otherwise?
Using AI to fulfill a need implies a need, which usually results in action towards that need. Even "the dating scene is terrible" is human interaction.
> Even "the dating scene is terrible" is human interaction.
For some subset of people, this isn't true. Some people don't end up going on a single date or get a single match. And even for those who get a non-zero number there, that number might still be hovering around 1-2 matches a year and no actual dates.
Swiping on thousands of people without getting a single date is not human interaction and that's the reality for some people.
I still don't think an AI partner is a good solution, but you are seriously underestimating how bad the status quo is.
We do see - from 'crazy cat lady' to 'incel', from 'where have all the good men gone' to the rapid decline in the number of 25-year-olds who have had any sexual experience, not to mention the 'loneliness epidemic' that has several governments, especially in Europe, alarmed enough to make it an agenda point: no, they would not. Not all of them. Not even a majority.
AI in these cases is just a better 'litter of 50 cats': a less destructive, less suffering-creating fantasy.
Not all human interaction is a net positive in the end.
In this framing, “any” human interaction is good interaction.
This is true if the alternative to “any interaction” is “no interaction”. Bots alter this, and provide “good interaction”.
In this light, the case for relationship bots is quite strong.
Why would that be the alternative?
These are only problems if you assume the person later wants to come back to having human relationships. If you assume AI relationships are the new normal and the future looks kinda like The Matrix, with each person having their own constructed version of reality while their life-force is bled dry by some superintelligent machine, then it is all working as designed.
Human relationships are part of most families, most work, etc. It could get tedious constantly dealing with people who lack any resilience or understanding of other perspectives.
The point is you wouldn't deal with people. Every interaction becomes a transaction mediated by an AI that's designed to make you happy. You would never genuinely come in contact with other perspectives; everything would be filtered and altered to fit your preconceptions.
It's like all those dystopias where you live in a simulation but your real body is wasting away in a vat or pod or cryochamber.
Someone has to make the babies!
don't worry, "how is babby formed" is surely in every llm training set
It could be the case that society is responding to overpopulation in many strange ways that serve to reduce/reverse the growth of a stressed population.
Perhaps not making as many babies is the long-term solution.
Wait, how did this work in The Matrix exactly?
Decanting jars, a la Brave New World!
ugh. speak of the devil and he shall appear.
I don’t know. This reminds me of how people talked about violent video games 15 years back. Do FPS games desensitize and predispose gamers to violence, or are they an outlet?
I think for essentially all gamers, games are games and the real world is the real world. Behavior in one realm doesn’t just inherently transfer to the other.
Unless someone is harming themselves or others, who are we to judge?
We don't know that this is harmful. Those participating in it seem happier.
If we learn in the course of time (a decade?) that this degrades lives with some probability, we can begin to caution or intervene. But how in God's name would we even know that now?
I would posit this likely has measurable good outcomes right now. These people self-report as happier. Why don't we trust them? What signs are they showing otherwise?
People were crying about dialup internet being bad for kids when it provided a social and intellectual outlet for me. It seems to be a pattern as old as time for people to be skeptical about new ways for people to spend their time. Especially if it is deemed "antisocial" or against "norms".
There is obviously a big negative externality with things like social media or certain forms of pay-to-play gaming, where there are strong financial interests to create habits and get people angry or willing to open their wallets. But I don't see that here, at least not yet. If the companies start saying, "subscribe or your boyfriend dies", then we have cause for alarm. A lot of these bots seem to be open source, which is actually pretty intriguing.
It seems we're not quite there, yes. But you should have seen the despair when GPT 5 was rolled out to replace GPT 4.
These people were miserable. They complained about a complete personality change in their "partner", and the desperation in their words seemed genuine.
Words are simulacra. They're models, not games; we do not use them as games in conversation.
> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions
I saw a take that the AI chatbots have basically given us all the experience of being a billionaire: being coddled by sycophants, but without the billions to protect us from the consequences of the behaviors that encourages.
This. If you never train on stick, you can never drive stick, only automatic. And if you never let a real person break your heart or otherwise disappoint you, you'll never be ready for real people.
AI friends need a "Disasters" menu like SimCity.
One of the first things many Sims players do is to make a virtual version of their real boyfriend/girlfriend to torture and perform experiments on.
Ah, 'suffering builds character'. I haven't had that one in a while.
Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.
"But RealPeople™ can also elevate, surprise, and enchant you!" you may intervene. They sure than. An still, some may decide no longer to go for new rounds of Russian roulette. Someone like that is not a lesser person, they still have real™ enjoyment in a hundred other aspects in their life from music to being a food nerd. they just don't make their happiness dependant on volatile actors.
AI chatbots as relationship replacements are, in many ways, flight simulators:
Are they 'the real thing'? Nah, sitting in a real Cessna almost always beats a computer screen and a keyboard.
Are they always a worse situation than 'the real thing'? Simulators sure beat reality when reality is 'dual engine flameout halfway over the North Pacific'
Are they cheaper? YES, significantly!
Are they 'good enough'? For many, they are.
Are they 'sycophantic'? Yes, insofar as circumstances are decided beforehand. A 'real' pilot doesn't get to choose 'blue skies, little sheep clouds in the sky'; they only get to choose not to fly that day. And the standard weather settings? Not exactly 'hurricane, category 5'.
Are they available, while real flight is not, to some or all members of the public? Generally yes. The simulator doesn't require you to hold a current medical.
Are they removing pilots/humans from 'the scene'? No, not really. In fact, many pilots fly simulators for risk-free training of extreme situations.
Your argument is basically 'A flight simulator won’t teach you what it feels like when the engine coughs for real at 1000 ft above ground and your hands shake on the yoke.' No, it doesn't. And frankly, there are experiences you can live without - especially those you may not survive (emotionally).
Society has always had the tendency to pathologize those who do not pursue a sexual relationship as lesser humans. (Especially) single women who were too happy in the medieval age? Witches that needed burning. A guy who preferred reading to dancing? A 'weirdo and a creep'. English has 'master' for the unmarried, 'incomplete' man, and 'mister' for the one who got married. And today? Those who are incapable or unwilling to participate in the dating scene are branded 'girlfailure' or 'incel' - with the latter group considered a walking security risk. Let's not add to the stigma by playing another tune for the 'oh, everyone must get out there' scene.
One difference between "AI chatbots" in this context and common flight simulator games is that someone else is listening in and has actual control over the simulation. You're not alone in the same way that you are when pining over a character in a television series or a book, or crashing a virtual jumbo jet into a skyscraper in MICROS~1 Flight Simulator.
This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's simply a black box with an input and an output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic way of thinking about other people is a fundamentally catastrophic worldview.
A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.
Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need for close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self-destruction.
Yes, great comment.
What do you think of the idea that people generally don't really like other people - that they do generally disappoint and cause suffering? (We are all imperfect, imperfectly getting along together, daily initiating and supporting acts of aggression against others.) And that, if the FakePeople™ experience were good enough, probably most people would opt out of engaging with others, similar to how most pilot experiences are on simulators?
Disturbing and sad.
> Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.
Good thing that "if" is clearly untrue.
> AI chatbots as relationship replacements are, in many ways, flight simulators:
If only! It's probably closer to playing Star Fox than a flight sim.
Love your thoughts about needing input from others! In Autistic / ADHD circles, the lack of input from other people, and the feedback of thoughts being amplified by oneself is called rumination. It can happen for many multiple ways-- lack of social discussion, drugs, etc. AI psychosis is just rumination, but the bot expands and validates your own ideas, making them appear to be validated by others. For vulnerable people, AI can be incredibly useful, but also dangerous. It requires individuals to deliberately self-regulate, pause, and break the cycle of rumination.
> In Autistic / ADHD circles
i.e. HN comments
Nah, most circles of neurodivergent people I've been around have humility and are aware of their own fallibility.
Is this clearly AI-generated comment part of the joke?
The comment seems less clearly-written (e.g., "It can happen for many multiple ways") than how a chatbot would phrase it.
We’re all just in a big LLM-generated self-licking-lollipop content farm. There aren’t any actual humans left here at all. For all you know, I’m not even human. Maybe you’re not either.
... and with this, you named the entire retention model of the whole AI industry. Kudos!
I share your concerns about the risks of over-reliance on AI companions—here are three key points that resonate deeply with me:
• Firstly, these systems tend to exhibit excessively agreeable patterns, which can hinder the development of resilience in navigating authentic human conflict and growth.
• Secondly, true relational depth requires mutual independent agency and lived experience that current models simply cannot provide autonomously.
• Thirdly, while convenience is tempting, substituting genuine reciprocity with perfectly tailored responses may signal deeper unmet needs worth examining thoughtfully. Let’s all strive to prioritize real human bonds—after all, that’s what makes life meaningfully complex and rewarding!