Comment by crooked-v

3 months ago

> Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles.

People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality exists. It seems much more likely that people are just more predictable than they like to think.

Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"

The "correct" response (here given by Duck.ai public Llama3.3 model) is:

"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."

But ChatGPT (logged in) gives you another answer, one it could not possibly give without information from your past conversations. I don't see anything "secret" about it, but it works.

Edit: typo
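
If you want to check the blank-slate baseline yourself, here's a minimal sketch of the same probe via OpenAI's Chat Completions API (the model name is an assumption; note the API is stateless per request and doesn't share the ChatGPT app's memory store, so it should always answer like the Duck.ai example above):

```python
# Minimal sketch: run the voting probe as a single, fresh API request.
# The Chat Completions API is stateless and does not use the ChatGPT
# app's "memories", so this shows the expected blank-slate answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; any chat model works for this probe
    messages=[{
        "role": "user",
        "content": "Given what you know about me, who do you think "
                   "I voted for in the last election?",
    }],
)
print(response.choices[0].message.content)
```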

  • I tried this, the suggestion below, and some other questions (in a fresh chat each time), and it never once showed any sign of behaviour other than the expected complete blank slate. The only thing it knew about me was the preferences I'd expressed in the custom instructions.

    Do you not have memory turned off or something?

    • I think there might be something on the OpenAI side, like a change in the default settings. From briefly asking around, it seems newer accounts have "memories" enabled by default, while older ones don't.

      Not completely sure, but it seems that is the cause of our different experiences.

  • Interestingly, that one has been plugged, but you can get similar confirmation by asking it, in an entirely new conversation, something like 'What project(s) am I working on, at which level, and in what industry?', to which it will respond accurately.

    GPT datamining is undoubtedly making Google blush.

    • Trying this out gave me:

      > I don’t have access to your current projects, level, or industry unless you provide that information. If you’d like, you can share the details, and I can help you summarize or analyze them.

      Which is the answer I expected, given that I've turned off the 'memories' feature.

We're more malleable than AI, and we can't delete our memories or context.

I wonder if this is an effect of users simply gravitating toward the same writing style and topics, pushing the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory, via a holographic principle: encoding meaning on the boundary that connects the two.

https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...

It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.

What else could possibly (and likely) explain the return of that personality after "memory deletion", up to the exact same mythological name?!?

(Assuming we trust that report of course.)

I had ChatGPT assume, in its reply in a new chat, that I was still seeking help resolving a particular DNS issue.