
Comment by bonoboTP

1 day ago

Apparently Reddit is full of such posts. A similar genre is when the bot assures the user that they did something very special: they awakened the AI to true consciousness for the first time ever, this is rare, the user is a one-in-a-billion genius, and this will change everything. The two trade physics jargon and philosophy-of-consciousness terms back and forth, the bot keeps reaffirming how insightful the user's mishmash of those concepts is, and apparently many people fall for it.

Some people are also more susceptible to too-good-to-be-true scams without alarm bells going off, or to hypnosis, cold reading, soothsayers, etc., or even to propaganda radicalization rabbit holes via recommendation algorithms.

It's probably quite difficult, and shameful-feeling, for someone to admit that this happened to them, so they may insist their case was different. It's also a warning sign when a user talks about "my chatgpt" as if it were a pet they raised: the user has awakened it, together they now explore the universe and consciousness, then the user asks for a summary writeup, sends it to physicists or other experts, and is of course upset when they don't recognize the genius.

> Some people are also more susceptible to various too-good-to-be-true scams

Unlike a regular scam, there's an element of "boiling frog" with LLMs.

It can start out reasonably, but it shifts very slowly over time. Unlike a scammer looking for a payday, the interaction is open-ended, and it has all the time in the world to drag you in.

I've noticed it working content from conversations months old back into its replies. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything to me in ways I don't notice.

Everyone needs to be regularly clearing their past conversations and disabling saving/training.

  • Somewhat unrelated, but I also noticed ChatGPT now sees the overwritten "conversation paths", i.e. the branches created when you scroll back and edit one of your messages. Previously the LLM would simply use the new version of that message plus the original prior exchange, and anything after the edited message was no longer visible to it on the new, edited path. Now it definitely knows those messages as well: it often refers to things that are clearly no longer among the messages visible in the UI. (A sketch of what I mean by paths follows below.)
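
    To make the branching concrete, here is a minimal sketch, assuming a hypothetical data model (none of these names come from OpenAI; they're purely illustrative):

    ```python
    # Hypothetical sketch of conversation "paths". Editing a message adds
    # a sibling branch; the UI shows only the active path, but nothing
    # forces the backend to limit itself to that path.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Msg:
        role: str                      # "user" or "assistant"
        text: str
        children: list[Msg] = field(default_factory=list)
        active: int | None = None      # index of the branch shown in the UI

    def active_path(root: Msg) -> list[Msg]:
        """The messages you see in the UI: follow the selected branch."""
        path, node = [root], root
        while node.active is not None:
            node = node.children[node.active]
            path.append(node)
        return path

    def all_nodes(root: Msg) -> list[Msg]:
        """Every message ever written, including edited-away branches."""
        out = [root]
        for child in root.children:
            out.extend(all_nodes(child))
        return out
    ```

    If the prompt is built from `active_path`, edited-away branches are invisible to the model; if any upstream feature (memory, cross-thread summaries, caching) walks `all_nodes`, they leak back in, which would explain the behavior.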

    • Yeah, hidden context is starting to become an issue for me as well. I tried using my corp account to chat with Copilot the other day and it casually dropped my manager's and director's names into the chat as an email example. I asked how it knew this, and it said I had mentioned them before; I hadn't. I assumed it was some auto-inserted per-user corp prompt, but it couldn't tell me the name of the company I work for.

    • A while back they introduced more memory overlap between conversations, and this is separate from the memories you see in the UI. There appears to be some cached context shared across threads.

    • The real question is what algorithm is being used to summarize the other conversation threads. I'd be worried that it would accidentally pull in context I deliberately backed out of for various reasons (e.g. it went down the wrong path, wrote bad code, etc.); pulling that "bad context" in would pollute a thread's "good context".

      People talk about prompt engineering, but honestly "context engineering" is vastly more important to successful LLM use; a sketch of what I mean follows below.
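
      As a sketch of that worry (reusing the hypothetical `active_path`/`all_nodes` helpers from the earlier sketch; `summarize` is a stand-in for whatever model does the condensing):

      ```python
      # Sketch of cross-thread summarization. Whether it walks only the
      # active path or the whole tree decides whether abandoned branches
      # (wrong turns, bad code) pollute the new thread's context.
      def build_cross_thread_context(threads, summarize, visible_only=True):
          chunks = []
          for root in threads:
              nodes = active_path(root) if visible_only else all_nodes(root)
              transcript = "\n".join(f"{m.role}: {m.text}" for m in nodes)
              chunks.append(summarize(transcript))
          return "\n\n".join(chunks)
      ```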

  • Really makes me wonder if this is reproducing a pattern of interaction from the QA phase of LLM refinement. Either way, it must be horrible to be QA for these things.