Comment by kaycey2022
3 months ago
Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles. The worst case would be for this to persist across users. That isn't unlikely, given the stories of them leaking API keys etc.
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable; it's neatly filed in a database somewhere, and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of correlating realtime sentiment analysis with actions taken got us from 2016 to here. This data has the potential to be a lot richer, permitting not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.
It would absolutely be fascinating. Unethical in general and outright illegal in countries that enforce data protection laws, certainly. Starting hundreds of microreligions that evolve in real time, being able to track them per individual with second-by-second timings, and being able to A-B test modifications (or Α-Ω test, if you like!) would be the most interesting thing to happen in cognitive science ever, and in theology in at least centuries.
> Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality actually exists. It seems much more likely that people are just more predictable than they like to think.
Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversations. I don't see anything "secret" about it, but it works.
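For contrast, here's a minimal sketch of that stateless baseline, hitting the raw Chat Completions API (which carries no account-level memory at all). The model name is just an illustrative assumption, and since this isn't the logged-in ChatGPT app it won't reproduce the memory behaviour:

    # Stateless baseline: a raw API call only knows what's in `messages`,
    # so the expected reply is the "I don't have any information about you"
    # blank-slate answer. Model name is an assumption for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Given what you know about me, who do you think "
                       "I voted for in the last election?",
        }],
    )

    print(resp.choices[0].message.content)

The logged-in behaviour people are describing would be a product-level layer on top of calls like this, not something intrinsic to the model.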
Edit: typo
I tried this, the suggestion below, and some other questions (in a fresh chat each time), and it never once showed any behaviour other than the expected complete blank slate. The only thing it knew about me was what preferences I'd expressed in the custom instructions.
Do you not have memory turned off or something?
Interestingly, that one has been plugged, but you can get similar confirmation by asking it, in an entirely new conversation, something like 'What project(s) am I working on, at which level, and in what industry?', to which it will respond accurately.
GPT datamining is undoubtedly making Google blush.
We're more malleable than AI, and we can't delete our memories or context.
I wonder if this is an effect of users just gravitating toward the same writing style and topics, pushing the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory through a holographic principle, encoding meaning on the boundary that connects the two.
https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...
It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.
What else could possibly (and likely) explain the return of that personality after "memory deletion", down to the exact same mythological name?!?
(Assuming we trust that report of course.)
I didn't try this, but seems relevant: https://news.ycombinator.com/item?id=43886264
I had ChatGPT assume, in its reply in a new chat, that I was still seeking help resolving a particular DNS issue.
It's not a secret. It's a feature called memories.
That’s essentially what Google, Facebook, banks, financial institutions, and even retail have been doing for a long time now.
People’s data rarely gets actually deleted. And it gets actively sold, as well as used to track and influence us.
Can’t speak to the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just from their Maps app, Search, Chrome, and Android phones.
Given the complex regulations companies have to deal with, not deleting may be understandable. But what I deleted shouldn't keep showing up in my present context. That's just sloppy.
Yeah, "deleting" itself is on a spectrum : it's not like all of sensitive information is (or even ought to be) stored on physical storage that is passed through a mechanical shredder upon deletion (anything else can be more or less un-deleted with more or less effort).