Comment by rtkwe
13 days ago
4o had some notable problems with sycophancy, being effusively positive about the user and going along with almost anything the user said. OpenAI even wrote about it [0], and the new model's responses to people trying to continue their former 'relationship' do tend toward being 'harsh' [1], especially for anyone who actually thought of the bot as a kind of person.
[0] https://openai.com/index/sycophancy-in-gpt-4o/
[1] https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...
It really does send a strong signal[1] to people in the dating scene: validate and enthusiastically respond to potential romantic partners and the world is your oyster.
1. possibly/probably not in a good or healthy way? idk
From the viewpoint of self psychology, people are limited in their ability to seduce because they have a self. You can't maintain perfect mirroring because you get tired, their turn-on is your squick, etc. In the early stage of peak ensorcellment (limerence), people don't see the "small signals" -- they miss the microexpressions, the sarcastic leaks, etc. -- they see what they want to see. But eventually that wears out.
It can be puzzling that people fall for "romance scams" run by people whose voice they haven't even heard, but it's actually a safer space for that kind of seducer to operate, because the low-fi channel avoids all sorts of information leaks.
Enthusiastically matching the energy of an anxiously attached partner is a rite of passage many would rather not have walked.
That's a pretty fair point about what might explain why AI relationships are so appealing to some people.
It'd be a fun observational study to survey folks in AI relationships and see if anxious attachment styles are over-represented.