Comment by MaxGabriel
13 hours ago
This might stem from the domain I work in (banking), but I have the opposite take. Soft delete pros to me:
* It's obvious from the schema: if there's a `deleted_at` column, I know how to query the table correctly (vs. assuming rows never get DELETEd, or having to know where to look in another table); see the sketch after this list
* One way to do things: Analytics queries, admin pages, it all can look at the same set of data, vs having separate handling for historical data.
* DELETEs are likely fairly rare by volume for many use cases
* I haven't found soft-deleted rows to be a big performance issue. Intuitively this makes sense, since indexed queries should be O(log N)
* Undoing is really easy, because all the relationships stay in place, vs data already being moved elsewhere (In practice, I haven't found much need for this kind of undo).
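For concreteness, a rough sketch of the pattern (PostgreSQL syntax; the table and columns are made up):

```sql
-- Hypothetical table, just to illustrate the pattern.
CREATE TABLE accounts (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       text NOT NULL,
    deleted_at timestamptz              -- NULL = live row
);

-- "Deleting" stamps the row instead of removing it.
UPDATE accounts SET deleted_at = now() WHERE id = 42;

-- Normal reads skip dead rows.
SELECT * FROM accounts WHERE deleted_at IS NULL;

-- A partial index keeps live-row lookups cheap even when many rows are soft-deleted.
CREATE INDEX accounts_live_name_idx ON accounts (name) WHERE deleted_at IS NULL;
```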
In most cases, I've really enjoyed going even further and making rows fully immutable, using a new row to handle updates. This makes it really easy to reference historical data.
If I were doing the logging approach described in the article, I'd use database triggers that keep a copy of every INSERTed/UPDATEd/DELETEd row in a duplicate table. This way it all stays in the same database, which keeps it easy to query and replicate elsewhere.
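Something along these lines (an untested PostgreSQL sketch; the names are invented):

```sql
-- Duplicate table that receives a copy of every changed row.
CREATE TABLE accounts_history (
    op         text        NOT NULL,                -- 'INSERT' / 'UPDATE' / 'DELETE'
    changed_at timestamptz NOT NULL DEFAULT now(),
    row_data   jsonb       NOT NULL                 -- the row as it looked when written
);

CREATE OR REPLACE FUNCTION accounts_audit() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO accounts_history (op, row_data) VALUES (TG_OP, to_jsonb(OLD));
    ELSE
        INSERT INTO accounts_history (op, row_data) VALUES (TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- AFTER trigger, so the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE FUNCTION accounts_audit();
```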
Soft deletes in banking are just a Band-Aid for the much bigger problem of auditability. You may keep the original record by soft-deleting it, but if you don't take care of amendments, you will still lose auditability. The correct way is to use EventSourcing, with each change to an otherwise immutable state being recorded as an Event, including a Delete (of both an Event and the Object). This is even more problematic performance-wise, but syncs and snapshots exist for exactly that purpose - or you can back the main table with a separate events table and run periodic "reconstruct"s.
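A minimal sketch of that events-plus-reconstruct shape (PostgreSQL; all names are invented, and a real projection would fold every event rather than just take the latest one):

```sql
-- Append-only event log for an otherwise immutable aggregate.
CREATE TABLE account_events (
    event_id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    account_id  bigint      NOT NULL,
    event_type  text        NOT NULL,    -- e.g. 'Opened', 'Amended', 'Deleted'
    payload     jsonb       NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now()
);

-- Snapshot table the application reads from.
CREATE TABLE account_snapshots (
    account_id bigint PRIMARY KEY,
    state      jsonb       NOT NULL,
    as_of      timestamptz NOT NULL
);

-- Periodic "reconstruct": refresh each account's snapshot from its latest event.
INSERT INTO account_snapshots (account_id, state, as_of)
SELECT DISTINCT ON (account_id) account_id, payload, recorded_at
FROM account_events
ORDER BY account_id, recorded_at DESC
ON CONFLICT (account_id) DO UPDATE
    SET state = EXCLUDED.state, as_of = EXCLUDED.as_of;
```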
> DELETEs are likely fairly rare by volume for many use cases
All your other points make sense, given this assumption.
I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.
> Undoing is really easy
Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.
In short, there are cases when soft-deletion works well, and is a good approach. In other cases it does not, and is not. Analysis is needed before adopting it.
If only 50-70% of your data is dead and that's already causing issues, then you probably have an underlying indexing issue anyhow (because scaling to 2x-3x the customers would cause the same issues by sheer magnitude).
That said, we've had soft deletes, and in discussions about keeping them on, one argument was that they're really only a half-assed measure (data lost to updates rather than deletes isn't saved at all).
> I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.
I think we largely need support for "soft deletes" to be baked into SQL or its dialects directly and treated as something transparent: selecting soft-deleted rows is the special case, regular SELECTs skip those rows, and there's support for turning regular DELETE statements into soft deletes under the hood.
https://news.ycombinator.com/item?id=41272903
And then make dynamically sharding data by deleted/not deleted really easy to configure.
You soft-deleted a few rows? They get moved to another DB instance, an archive/bin of sorts. Normal queries wouldn't even consider it; it would only be reached when you explicitly select soft-deleted rows.
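Until something like that lands in the dialects, one rough approximation in today's PostgreSQL is a view plus an INSTEAD OF trigger (untested sketch, all names made up):

```sql
CREATE TABLE accounts (
    id         bigint PRIMARY KEY,
    name       text NOT NULL,
    deleted_at timestamptz
);

-- The view hides soft-deleted rows from regular queries.
CREATE VIEW live_accounts AS
    SELECT id, name FROM accounts WHERE deleted_at IS NULL;

-- DELETE against the view is rewritten into a soft delete on the base table.
CREATE OR REPLACE FUNCTION soft_delete_account() RETURNS trigger AS $$
BEGIN
    UPDATE accounts SET deleted_at = now() WHERE id = OLD.id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER live_accounts_soft_delete
INSTEAD OF DELETE ON live_accounts
FOR EACH ROW EXECUTE FUNCTION soft_delete_account();

-- Application code keeps issuing plain DELETEs:
DELETE FROM live_accounts WHERE id = 42;
```

It's nowhere near the transparent sharding you describe, but it does hide the mechanics from query authors.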
Well, Microsoft SQL Server has built-in Temporal Tables [1], which even take this one step further: they track all data changes, such that you can easily query them as if you were viewing them in the past. You can not only query deleted rows, but also the old versions of rows that have been updated.
(In my opinion, replicating this via a `validity tstzrange` column is also often a sane approach in PostgreSQL, although OP's blog post doesn't mention it.)
[1]: https://learn.microsoft.com/en-us/sql/relational-databases/t...
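For what it's worth, a rough sketch of that validity-range approach in PostgreSQL (my own invented names, not from the article):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed for the exclusion constraint below

CREATE TABLE accounts_versions (
    id       bigint    NOT NULL,
    name     text      NOT NULL,
    validity tstzrange NOT NULL DEFAULT tstzrange(now(), NULL),
    EXCLUDE USING gist (id WITH =, validity WITH &&)  -- no overlapping versions per id
);

-- Current rows: the version whose validity has no upper bound yet.
SELECT * FROM accounts_versions WHERE upper_inf(validity);

-- "As of" query, roughly what FOR SYSTEM_TIME AS OF gives you in SQL Server.
SELECT * FROM accounts_versions WHERE validity @> timestamptz '2024-01-01';
```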
> I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.
Depending on your use case, having soft deletes doesn't mean you can't clean out old deleted data anyway. You may want a process that grabs all data soft-deleted X years ago and just hard-deletes it.
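Something as simple as a scheduled job running (table name and cut-off are placeholders):

```sql
-- Turn sufficiently old soft deletes into hard deletes.
DELETE FROM accounts
WHERE deleted_at IS NOT NULL
  AND deleted_at < now() - interval '2 years';
```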
> Depends on whether undoing even happens, and whether the act of deletion and undeletion require audit records anyway.
Yes but this is no more complex than the current situation, where you have to always create the audit records.
Agreed. And if deletes are soft, you likely really just wanted a complete audit history of all updates too (at least that's for the cases I've been part of). And then performance _definitely_ would suffer if you don't have a separate audit/archive table for all of those.
I mean, yes, growth forever doesn't tend to work.
I've seen a number of apps that require audit histories work on the basis that records are archived at a particular time; that's when the deletes occur and indexes are fully rebuilt. This is typically scheduled during the least busy time of the year, as it's rather IO-intensive.
> I've seen tables where 50%-70% were soft-deleted, and it did affect the performance noticeably.
At that point you should probably investigate partitioning or data warehousing.
What would be the benefit of data warehousing in this case?
50-70% as the worst case isn't even necessarily that bad.
(Again, a lot of it is O(log n), right?)
The core system at my previous employer (an insurance company) worked along the lines of the solution you outline at the end: each table is an append-only log of point-in-time information about some object. So the current state is in the row with the highest timestamp, and all previous states can be observed with appropriate filters. It's a really powerful approach.
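In SQL terms it looks roughly like this (PostgreSQL's DISTINCT ON; the insurance-flavoured names are invented):

```sql
-- Current state: the latest version of each policy.
SELECT DISTINCT ON (policy_id) *
FROM policy_versions
ORDER BY policy_id, valid_from DESC;

-- Point-in-time view: the latest version of each policy as of some date.
SELECT DISTINCT ON (policy_id) *
FROM policy_versions
WHERE valid_from <= timestamptz '2020-06-30'
ORDER BY policy_id, valid_from DESC;
```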
If you're implementing immutable DB semantics, maybe you should consider Datomic or alternatives, because then you get that for free, for everything, and you also get time travel, which is an amazing feature on top. It lets you see the full, coherent state of the DB at any moment!
> DELETEs are likely fairly rare by volume for many use cases
I think one of our problems is getting users to delete stuff they don’t need anymore.
I never got to test this, but I always wanted to explore using table partitions in Postgres to store soft-deleted items on a different drive as a kind of archive storage.
I'm pretty sure it is possible, and it might even yield some performance improvements.
That way you wouldn't have to worry about deleted items impacting performance too much.
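An untested sketch of what I have in mind (PostgreSQL declarative partitioning; the names and path are made up):

```sql
CREATE TABLESPACE archive_disk LOCATION '/mnt/archive';

CREATE TABLE documents (
    id      bigint  NOT NULL,
    body    text    NOT NULL,
    deleted boolean NOT NULL DEFAULT false,
    PRIMARY KEY (id, deleted)      -- the partition key has to be part of the PK
) PARTITION BY LIST (deleted);

CREATE TABLE documents_live    PARTITION OF documents FOR VALUES IN (false);
CREATE TABLE documents_deleted PARTITION OF documents FOR VALUES IN (true)
    TABLESPACE archive_disk;

-- UPDATE ... SET deleted = true moves the row into the archive partition
-- (cross-partition row movement works since PostgreSQL 11), and queries with
-- WHERE deleted = false never touch it thanks to partition pruning.
```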
It's definitely an interesting approach, but the problem is that now you have to change all your queries, and undeleting gets more complicated. There are strong trade-offs with almost all the approaches I've heard of.
With partitioning? No, you don't. It gets a bit messy if you also want to partition the table by other values (like tenant id or something), since then you probably need to get into table inheritance instead of the easier declarative partitioning - but either technique still gives you a single effective table to query.
IDK if the different drive is necessary, but yes partitioning on a deleted field would work.
Memory >>>>> Disk in importance.
I have worked with databases my entire career. I hate triggers with a passion. The issue is no one “owns” or has the authority to keep triggers clean. Eventually triggers become a dumping ground for all sorts of nasty slow code.
I usually tell people to stop treating databases like Firebase, waxing on/waxing off records and fields willy-nilly. You need to treat the database as the store of your business process. And your business processes demand retention of all requests. You need to keep the request to soft-delete a record. You need to keep the request to undelete a record.
Too much crap in the database? Then you need to create a field saying this record will be archived off by this date. On that date, you move that record off into another table or file that is only accessible to admins. And yes, you need to keep a record of that archival as well. Too much gunk in your request logs? Well, then you need to create an archive process for that as well.
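The archival move itself can be a single atomic statement, something like this (placeholder names, assuming the archive table has the same columns):

```sql
WITH expired AS (
    DELETE FROM requests
    WHERE archive_after <= now()
    RETURNING *
)
INSERT INTO requests_archive
SELECT * FROM expired;
```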
These principles are nothing new. They are in line with “Generally Accepted Record Keeping Principles” which are US oriented. Other countries have similar standards.
What you describe is basically event sourcing, which is definitely popular. However, for OLAP, you will still want a copy of your data that only has the actual dimensions of interest, and not their history - and the easiest way to create that copy and to keep it in sync with your events is via triggers.
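A hedged sketch of that trigger-maintained copy (all names invented): each new event upserts the flat current-state row that OLAP queries read.

```sql
CREATE TABLE order_events (
    order_id    bigint      NOT NULL,
    payload     jsonb       NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE orders_current (
    order_id bigint PRIMARY KEY,
    state    jsonb       NOT NULL,
    as_of    timestamptz NOT NULL
);

CREATE OR REPLACE FUNCTION project_order_state() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_current (order_id, state, as_of)
    VALUES (NEW.order_id, NEW.payload, NEW.recorded_at)
    ON CONFLICT (order_id) DO UPDATE
        SET state = EXCLUDED.state, as_of = EXCLUDED.as_of;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER project_order_state_trg
AFTER INSERT ON order_events
FOR EACH ROW EXECUTE FUNCTION project_order_state();
```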