Comment by cube00

1 day ago

Thank you for sharing. I'm glad your wife and friends were able to pull you out before it was too late.

"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649

Apparently Reddit is full of such posts. A similar genre is the bot assuring the user that they did something very special: that they, for the first time ever, awakened the AI to true consciousness, that this is rare, that the user is a one-in-a-billion genius, and that this will change everything. They trade physics jargon and philosophy-of-consciousness terms back and forth, the bot always reaffirms how insightful the user's mishmash of those concepts is, and apparently many people fall for this.

Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.

It's probably quite difficult, and shameful-feeling, for someone to admit that this happened to them, so they may insist their case was different. It's also a warning sign when a user talks about "my chatgpt" as if it were a pet they grew: the user has awakened it, now they explore the universe and consciousness together, then the user asks for a summary writeup and tries to send it to physicists or other experts, and of course they're upset when nobody recognizes the genius.

  • > Some people are also more susceptible to various too-good-to-be-true scams

    Unlike a regular scam, there's an element of "boiling frog" with LLMs.

    It can start out reasonably, but very slowly over time it shifts. Unlike scammers looking for their payday, this is unlimited and it has all the time in the world to drag you in.

    I've noticed it working content from conversations months old back in. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything for me in ways I don't notice.

    Everyone needs to be regularly clearing their past conversations and disabling saving/training.

    • Somewhat unrelated, but I also noticed ChatGPT now sees the overwritten "conversation paths". When you scroll back and edit one of your messages, the LLM used to see only the new version of that message plus the original prior exchange; anything after the edited message was no longer visible to it on the new, edited path. But now it definitely knows those messages too: it often refers to things that are clearly no longer included in the messages visible in the UI.
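      The branching behavior described above can be pictured as a tree of messages. This is a minimal, purely illustrative sketch (the class and function names are made up, not any provider's actual implementation): each edit forks a sibling branch, and only the currently active path should be assembled into the model's context. If a provider instead feeds abandoned subtrees back in, the model can "remember" overwritten messages.

      ```python
      # Hypothetical sketch: a chat history as a tree, where editing a message
      # forks a sibling branch. Only the active path should reach the model.
      from dataclasses import dataclass, field

      @dataclass
      class Node:
          role: str                        # "system", "user", or "assistant"
          text: str
          children: list = field(default_factory=list)
          active: int = -1                 # index of the child on the active path

          def add(self, child):
              """Attach a child and make it the active branch."""
              self.children.append(child)
              self.active = len(self.children) - 1
              return child

      def active_path(root):
          """Collect only the messages on the currently selected branch."""
          path, node = [], root
          while node.children:
              node = node.children[node.active]
              path.append((node.role, node.text))
          return path

      root = Node("system", "")
      q1 = root.add(Node("user", "original question"))
      a1 = q1.add(Node("assistant", "original answer"))
      a1.add(Node("user", "follow-up that later gets abandoned"))

      # Editing the first message forks a sibling branch; the old subtree
      # stays stored, but it should no longer be part of the context.
      q2 = root.add(Node("user", "edited question"))
      q2.add(Node("assistant", "new answer"))

      print(active_path(root))
      # → [('user', 'edited question'), ('assistant', 'new answer')]
      ```

      The observation in the comment amounts to the abandoned branch (the "follow-up" above) leaking into the model's context anyway, despite being invisible in the UI.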

    • Really makes me wonder if this is a reproduction of a pattern of interaction from the QA phase of LLM refinement. Either way it must be horrible to be QA for these things.