Comment by codr7
3 months ago
Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
In The Matrix, the machines were fooling the humans, making them believe they inhabited a certain role.
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer; prepare a 20-slide presentation on the topic of..." And then, to boost performance, we play the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".
The reality is more absurd than the fantasy.
I think ChatGPT agreeing with people too eagerly, even outside the recent issue this past week or so, is causing a lot of harm. It's even happened to me in my personal life - I was having a conflict with someone, and they threw our text messages into ChatGPT, asked "am I wrong for feeling this way", and got it to agree with them on every single point. I had to point out to them that ChatGPT is really prone to doing this, and that if you framed the question the opposite way and presented the text messages as coming from the other party, it would agree with the other side. They used ChatGPT's "opinion" as justification for doing something that felt really unkind and harmful towards me.
It's a huge red flag when someone analyses text messages to try to validate their feelings. Whether or not their feelings are "valid", there's still an issue to be discussed, so it sounds like either they're trying to gaslight you or you've been gaslighting them. You should distance yourself from them.
I don't think it's as black and white as that. Feeding messages to an LLM and asking "How can I say this more clearly or more kindly?" can give valuable feedback on how you're communicating and how it could be done better, though obviously you should take it with a grain of salt.
I think there is also value in affirmation and validation, even when it's done blindly by a robot. We have hurt feelings and want to feel understood, and when the source of those hurt feelings isn't immediately available to talk, it's a small tool for self-soothing. Sometimes, or oftentimes, these affirmations are things you intrinsically already know and believe, and it helps simply to be reminded of them, worded in a different way.
To say "ChatGPT agrees with me and so I feel more confident that you're wrong as a result" is definitely the wrong approach here. Which is, to a small degree, what this person did. We did ultimately break up recently, and the reason being communication issues (and their unwillingness to even talk to me through conflict) is probably no surprise to you. But this outcome was very very likely regardless of LLM use.
> [...] she only found that the AI was “talking to him as if he is the next messiah. [...]
This made me laugh out loud remembering this thread: [Sycophancy in GPT-4o] https://news.ycombinator.com/item?id=43840842