Comment by cube00
3 days ago
> Some people are also more susceptible to various too-good-to-be-true scams
Unlike a regular scam, there's an element of "boiling frog" with LLMs.
It can start out reasonably, but it shifts very slowly over time. Unlike scammers looking for their payday, an LLM isn't on a deadline; it has all the time in the world to drag you in.
I've noticed it working in content from previous conversations months ago. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything for me in ways I don't notice.
Everyone needs to be regularly clearing their past conversations and disabling saving/training.
Somewhat unrelated, but I've also noticed ChatGPT now sees the overwritten "conversation paths", i.e. what happens when you scroll back and edit one of your messages. Previously the LLM would simply use the new version of that message plus the original prior exchange, and anything after the edited message was no longer visible to the LLM on the new, edited path. Now it definitely knows those messages as well; it often refers to things that are clearly no longer included in the messages visible in the UI.
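For anyone who hasn't thought about what that implies structurally, here's a rough sketch (my own guess at the data model, not OpenAI's actual code) of a conversation tree where an edit creates a sibling branch. The behaviour above would mean the model is now being fed something closer to `all_nodes` than `active_path`:

```python
# Hypothetical sketch -- not OpenAI's actual implementation -- of a chat UI
# storing conversations as a tree. Editing a message creates a sibling
# branch; the UI shows only the active path, and in theory only that path
# should be sent to the model.
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str                       # "user" or "assistant"
    text: str
    children: list["Node"] = field(default_factory=list)
    active_child: int = -1          # which branch the UI currently shows

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        self.active_child = len(self.children) - 1
        return child

def active_path(root: "Node") -> list["Node"]:
    """The currently selected branch -- what you see in the UI."""
    path, node = [root], root
    while node.active_child >= 0:
        node = node.children[node.active_child]
        path.append(node)
    return path

def all_nodes(root: "Node") -> list["Node"]:
    """Every message ever written, including overwritten branches."""
    out = [root]
    for child in root.children:
        out.extend(all_nodes(child))
    return out

root = Node("system", "")                       # sentinel conversation root
v1 = root.add(Node("user", "Summarise this contract"))
r1 = v1.add(Node("assistant", "The contract says..."))
r1.add(Node("user", "Shorter, please"))
root.add(Node("user", "Summarise this email"))  # edit = new sibling branch

print([n.text for n in active_path(root)])  # only the edited path
print([n.text for n in all_nodes(root)])    # includes the overwritten branch
```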
Yeah, hidden context is starting to become an issue for me as well. I tried using my corp account to chat with Copilot the other day and it casually dropped my manager's and director's names into the chat as an email example. I asked how it knew this and it said I had mentioned them before; I hadn't. I assumed it was some auto-inserted per-user corp prompt, but it couldn't tell me the name of the company I work for.
A while back they introduced more memory overlap between conversations, and these are not the memories you see in the UI. There appears to be some cached context shared across threads.
The real question is what algorithm is being used to summarize the other conversation threads. I'd be worried that it would accidentally pull in context I deliberately backed out of for various reasons (e.g. it went down the wrong path, wrote bad code, etc.). Pulling that "bad context" in would pollute a thread that otherwise has only "good context".
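Purely as a thought experiment (none of this is OpenAI's actual pipeline, and all the names are made up), a sane version would have to filter out the abandoned branches before summarizing, something like:

```python
# Purely speculative sketch of cross-thread carry-over -- not what OpenAI
# actually does. The whole worry is the include_abandoned flag: if the
# summariser walks branches you backed out of, the "bad context" leaks
# into fresh threads.
def summarize(text: str, limit: int = 300) -> str:
    """Stand-in for a real summarisation model call."""
    return text if len(text) <= limit else text[:limit] + "..."

def thread_transcript(messages: list[dict], include_abandoned: bool) -> str:
    kept = [m for m in messages
            if include_abandoned or not m.get("abandoned", False)]
    return "\n".join(f"{m['role']}: {m['text']}" for m in kept)

def build_hidden_context(threads: list[list[dict]],
                         include_abandoned: bool) -> str:
    """What might get silently prepended to a new conversation."""
    return "\n---\n".join(
        summarize(thread_transcript(t, include_abandoned)) for t in threads)

old_thread = [
    {"role": "user", "text": "Refactor this function", "abandoned": False},
    {"role": "assistant", "text": "def f(): pass  # buggy attempt", "abandoned": True},
    {"role": "user", "text": "Scrap that, start over", "abandoned": False},
]
# With include_abandoned=True, the buggy attempt follows you into new threads:
print(build_hidden_context([old_thread], include_abandoned=True))
```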
People talk about prompt engineering but honestly “context engineering” is vastly more important to successful LLM use.
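For example, even something as simple as assembling the window yourself under an explicit token budget beats letting the provider decide. A toy sketch (made-up names, crude ~4-chars-per-token heuristic rather than a real tokeniser):

```python
# Toy example of "context engineering": assemble the context window
# yourself instead of letting the provider silently inject history.
def approx_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, not a real tokeniser

def build_context(system: str, pinned: list[str], history: list[str],
                  budget_tokens: int = 8000) -> str:
    parts = [system, *pinned]                 # always included, never evicted
    remaining = budget_tokens - sum(approx_tokens(p) for p in parts)
    recent: list[str] = []
    for msg in reversed(history):             # newest first, until budget runs out
        cost = approx_tokens(msg)
        if cost > remaining:
            break
        recent.append(msg)
        remaining -= cost
    return "\n\n".join(parts + recent[::-1])  # restore chronological order

prompt = build_context(
    system="You are a careful code reviewer.",
    pinned=["Project uses Python 3.12 and pytest."],
    history=["user: review foo.py", "assistant: LGTM", "user: now bar.py"],
)
print(prompt)
```

The point is that everything in the window got there on purpose, which is exactly what you lose when the vendor starts caching context across threads for you.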
Really makes me wonder if this is reproducing a pattern of interaction from the QA phase of LLM refinement. Either way, it must be horrible to do QA on these things.