Comment by FloorEgg
8 hours ago
I'm not commenting on the core point of your comment, only the "why retain for 30 days" question.
In an age of automated backups and failovers, deleting can be really hard. Part of the answer could simply be that syncing a delete across all the redundancies (while ensuring those redundancies stay reliable when a disaster happens and they're needed to recover or maintain uptime) may take days to weeks. Also, the 30 days could be the upper limit, as opposed to the average or median time it takes.
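To make the fan-out problem concrete, here's a rough sketch (purely hypothetical, not how OpenAI actually does it; the tier names and the Tombstone class are invented): one logical delete becomes a tombstone that every replica and backup tier has to confirm, and the public retention window has to cover the slowest tier.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical storage tiers a delete has to reach; names are made up for illustration.
TIERS = {"primary", "read-replica", "hot-standby", "offsite-backup"}

@dataclass
class Tombstone:
    record_id: str
    requested_at: datetime
    confirmed: set = field(default_factory=set)  # tiers that have applied the delete

    def confirm(self, tier: str) -> None:
        self.confirmed.add(tier)

    def fully_deleted(self) -> bool:
        # Only once every tier has acknowledged is the data really gone.
        return self.confirmed >= TIERS

    def overdue(self, sla: timedelta = timedelta(days=30)) -> bool:
        # The advertised "30 days" would be the upper bound, not the typical case.
        return not self.fully_deleted() and datetime.utcnow() - self.requested_at > sla
```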
The most likely explanation is that whatever storage solution they're using has built-in "recycle bin" functionality, and deleted data stays there for 30 days before it's actually deleted. I see this a lot in very large databases; the recycle bin functionality is built into the data store product.
That sounds very plausible.
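For what it's worth, that kind of recycle bin is usually just a soft delete plus a scheduled purge. A minimal sketch of the idea (SQLite here just for illustration, not whatever data store OpenAI uses; the table and column names are invented):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (id TEXT PRIMARY KEY, body TEXT, deleted_at TEXT)")

def soft_delete(chat_id: str) -> None:
    # The "delete" button only marks the row; the data itself stays put.
    conn.execute("UPDATE chats SET deleted_at = ? WHERE id = ?",
                 (datetime.utcnow().isoformat(), chat_id))

def purge(retention_days: int = 30) -> int:
    # A background job hard-deletes anything soft-deleted more than 30 days ago.
    cutoff = (datetime.utcnow() - timedelta(days=retention_days)).isoformat()
    cur = conn.execute(
        "DELETE FROM chats WHERE deleted_at IS NOT NULL AND deleted_at < ?",
        (cutoff,))
    return cur.rowcount
```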
> I'm not commenting on the core point of your comment, only the "why retain for 30 days" question. In an age of automated backups and failovers, deleting can be really hard.
I doubt it's that. Deletion is hard, but it's not "exactly 30 days" hard.
The most likely explanation is that OpenAI wants the ability to investigate abuse and/or publicly made claims ("ChatGPT told my underage kid to <x>!" / "ChatGPT praised Hitler!"). If they delete chats right away, they're flying blind and you can claim anything you want.
Now, whether you should have a "delete" button that doesn't really delete stuff is another question.
What is the standard way of handling a forced restore from backup while ensuring deleted data does not also get restored? Is every delete request stored so that it can be replayed against any restore?
I have only had to manage this in a startup context with relatively low stakes, and it was hard and messy. I don't know what best practice is at the scale OpenAI operates at, but from my limited experience I have an intuition that the challenge is not trivial.
Also, I suspect there is a big gap between best practice and common practice. My guess is that common practice is dysfunctional. I would also suspect there is no standard way, but there are established practices within different technology stacks that range from performative, to barely compliant, to effective at scale.
In one case I saw, there was a substantial manual effort to load snapshots into instances, run the delete, and then save new snapshots. This was over 10 years ago though, and it was more of a "we just need to get this done" than a "what's the most elegant way to do this at scale" situation.
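One way the "replay deletes against any restore" idea can look in practice (just a sketch under the assumption that delete requests are journaled to an append-only log that lives outside the backups; nothing here is a known standard or OpenAI-specific):

```python
import json
from pathlib import Path

# Append-only journal of delete requests, kept separately from the backups themselves.
DELETE_LOG = Path("delete_requests.jsonl")

def record_delete(record_id: str) -> None:
    # Every delete request is journaled before it is applied anywhere.
    with DELETE_LOG.open("a") as f:
        f.write(json.dumps({"id": record_id}) + "\n")

def replay_deletes(restored_store: dict) -> dict:
    # After restoring a snapshot, re-apply the journal so that records deleted
    # after the snapshot was taken do not come back to life.
    if DELETE_LOG.exists():
        with DELETE_LOG.open() as f:
            for line in f:
                restored_store.pop(json.loads(line)["id"], None)
    return restored_store
```

The journal only helps if it is more durable than the thing being restored, which is part of why this gets messy in practice.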