Comment by thaumasiotes

1 day ago

I'm not sure what details that would add. What happened:

1. I engaged with Gemini.

2. I found the results wanting, and pasted them into comment threads elsewhere on the internet, observing that they tended to support the common criticism of LLMs as being "meaning-blind".

3. Later, I went back and viewed the "history" of my "saved" session.

4. My prompts were not changed, but the responses from Gemini were different. Because of the comment threads, it was easy for me to verify that I was remembering the original exchange correctly and Google was indulging in some revision of history.

If this is verifiably true, you should contact a journalist. Meaning if it's still in your Gemini history and the comments you posted are still up.

This would be a major tech news story. "Google LLM rewriting user history" would be a scandal. And since online evidence is used in court, it could have significant legal implications. You'd be helping people.

This is much too important to merely be a comment on HN.

Interesting! Would you be willing to publicly share what prompts, and what initial and "regenerated" responses you obtained?

The fact that this happened and that you have evidence of it make it enormously interesting even if the actual substance of the prompts and the response are mundane as hell. Please post.

Not trying to excuse Google, but I wonder why that happens. I've had my own issues with ChatGPT memory, but that's more like it forgetting the context and spitting out gibberish at a later invocation, contrary to what it said earlier in the thread. But that's because it's buggy.

Rewriting history requires compute, which is more malicious. Why would someone burn compute to rewrite your stuff, given that rewrites aren't free? Once again, not defending Google; just trying to think through what's going on.

  • Maybe they use some kind of response caching to save resources, and the original pointer is now pointing to a newer response to the same question? It would still be an insane way to build a history log, unless they're trying to memory-hole previous instances of poor performance or wrongthink.

  • My best guess is that when they changed the model backing "Gemini", they regenerated the conversations.

    I can't think of any reason it would make sense to do that, though.
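To illustrate the caching/regeneration hypotheses above: if the history log stored a *reference* into a prompt-keyed response cache rather than a snapshot of the reply, then regenerating the cached answer (e.g. after a model swap) would silently change the "saved" history with no extra compute spent on the history itself. A minimal sketch of that failure mode (all names hypothetical; this is not Google's actual architecture):

```python
# Hypothetical sketch: history entries hold a pointer into a
# prompt-keyed cache instead of a copy of the original response.
response_cache = {}   # prompt -> latest generated response
history = []          # ordered list of prompts (pointers, not snapshots)

def chat(prompt, model):
    response_cache[prompt] = model(prompt)  # overwrites any older answer
    history.append(prompt)                  # stores only the cache key
    return response_cache[prompt]

def view_history():
    # Re-resolves each pointer, so it reflects the *current* cache.
    return [(p, response_cache[p]) for p in history]

old_model = lambda p: f"old answer to {p!r}"
new_model = lambda p: f"new answer to {p!r}"

chat("why is the sky blue?", old_model)
before = view_history()

# Later, the same prompt is re-answered by a newer model...
chat("why is the sky blue?", new_model)
after = view_history()

# ...and the "saved" exchange has silently changed.
assert before[0][1] != after[0][1]
```

Under this design, rewriting history isn't a deliberate act that burns compute; it falls out for free whenever a cached response is refreshed.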