Comment by merb
24 days ago
> 3.4 Lazy fsync by Default
Why? Why do some databases do that? To have better performance in benchmarks? It would only be OK to do that with a safer default, or at least a lot of documentation about it. And especially when you run stuff in a small cluster, you get bitten by things like that.
It's not just better performance on latency benchmarks; it likely improves throughput as well, because the writes will be batched together.
Many applications do not require true durability and it is likely that many applications benefit from lazy fsync. Whether it should be the default is a lot more questionable though.
It’s like using a non-cryptographically secure RNG: if you don’t know enough to check whether the fsync flag is off yourself, it’s unlikely you know enough to evaluate the impact of durability on your application.
> if you don’t know enough to check whether the fsync flag is off yourself,
Yeah, it should use safe-defaults.
Then you can always go read the corners of the docs for the "go faster" mode.
Just like Postgres's infamous "non-durable settings" page... https://www.postgresql.org/docs/18/non-durability.html
You can batch writes while at the same time not acknowledging them to clients until they are flushed, it just takes more bookkeeping.
I also think fsync before acking writes is a better default. That aside, if you were to choose async for batching writes, the default value surprises me: 2 minutes seems like an eternity. Would you not get very good batching for throughput even at something like 2 seconds? Still not safe, but safer.
For transactional durability, the writes will definitely be batched ("group commit"), because otherwise throughput would collapse.
> Many applications do not require true durability
Pretty much no application requires true durability.
Maybe what's confusing here is "true durability", but most people want to know that when data is committed, they can reason about its durability using something like a basic MTBF formula - that is, your durability is "X computers of Y total have to fail at the same time, at which point N data loss occurs". They expect that as the number Y goes up, X goes up too.
When your system doesn't do things like fsync, you can't do that at all. X is 1. That is not what people expect.
Most people probably don't require X == Y, but they may have requirements that X > 1.
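A back-of-the-envelope sketch of that X-of-Y arithmetic in Go (failure probabilities are made up and assumed independent, purely to illustrate how much X > 1 buys you):

```go
package main

import (
	"fmt"
	"math"
)

// probAtLeast returns the probability that at least x of y independent
// nodes fail together, each failing with probability p (a binomial tail).
func probAtLeast(x, y int, p float64) float64 {
	total := 0.0
	for k := x; k <= y; k++ {
		// C(y, k) via the gamma function: Gamma(n+1) == n!
		c := math.Gamma(float64(y+1)) / (math.Gamma(float64(k+1)) * math.Gamma(float64(y-k+1)))
		total += c * math.Pow(p, float64(k)) * math.Pow(1-p, float64(y-k))
	}
	return total
}

func main() {
	p := 0.001 // made-up chance a node dies inside the at-risk window
	// No fsync anywhere: one failure (X=1) can lose acknowledged data.
	fmt.Printf("X=1 of Y=3: %.1e\n", probAtLeast(1, 3, p)) // ~3.0e-03
	// Quorum fsync: two of three must fail at once (X=2).
	fmt.Printf("X=2 of Y=3: %.1e\n", probAtLeast(2, 3, p)) // ~3.0e-06
}
```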
I always wondered why the fsync has to be lazy. It seems like the fsyncs can be bundled up together, and the notification messages held for a few millis while the write completes. Similar to TCP corking. There doesn't need to be one fsync per consensus round.
Yes, good call! You can batch up multiple operations into a single call to fsync. You can also tune the number of milliseconds or bytes you're willing to buffer before calling `fsync` to balance latency and throughput. This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html
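For reference, the Postgres knobs in question look like this (values here are illustrative; `commit_delay` is in microseconds and defaults to 0, so by default there is no added delay):

```
# postgresql.conf (illustrative values)
commit_delay = 1000       # wait up to 1000 microseconds to gather a group commit
commit_siblings = 5       # only delay if at least 5 other transactions are active
synchronous_commit = on   # commits still wait for the WAL flush; 'off' is the lazy mode
```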
> This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html
I must note that the default for Postgres is that there is NO delay, which is a sane default.
> You can batch up multiple operations into a single call to fsync.
I've done this in various messaging implementations for throughput, and it's actually fairly easy to do in most languages:
Basically, set up 1-N writers (depending on how you're storing data) that take a set of items containing the data to be written alongside a TaskCompletionSource (a Promise, in Java terms). When your code wants to write, it shoots the item to that local queue; the worker(s) on the queue write out messages in batches based on whatever makes sense (i.e. tuned for write size, number of records, etc., for both throughput and guaranteeing forward progress), and when the write completes you either complete or fail the TCS/Promise.
If you've got the right 'glue' with your language/libraries it's not that hard; this example [0] from Akka.NET's SQL persistence layer shows how simple the actual write processor's logic can be. Yeah, you have to think about queueing a little bit, but I've found this basic pattern very adaptable (e.g. the queueing op can just send a bunch of ready-to-go bytes and work off that for the threshold instead, add framing if needed, etc.).
[0] https://github.com/akkadotnet/Akka.Persistence.Sql/blob/7bab...
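For the curious, here is a rough sketch of that pattern in Go rather than C#, with a channel as the local queue and a per-item `done` channel standing in for the TCS/Promise; the thresholds and names are illustrative, not taken from the Akka.NET example:

```go
package main

import "os"

// item pairs a payload with a completion signal, standing in for the
// data + TaskCompletionSource pair described above.
type item struct {
	payload []byte
	done    chan error // completed (or failed) only after the batched fsync
}

// batchWriter drains the queue, writes items in batches, and amortizes a
// single fsync across each batch before completing every item's "promise".
func batchWriter(f *os.File, queue <-chan item, maxBatch int) {
	for first := range queue {
		batch := []item{first}
		// Pull whatever is already queued, up to the batch-size threshold.
		filling := true
		for filling && len(batch) < maxBatch {
			select {
			case next := <-queue:
				batch = append(batch, next)
			default:
				filling = false
			}
		}
		var err error
		for _, it := range batch {
			if _, e := f.Write(it.payload); e != nil && err == nil {
				err = e
			}
		}
		if e := f.Sync(); e != nil && err == nil {
			err = e // one fsync covers the whole batch
		}
		for _, it := range batch {
			it.done <- err // complete or fail each pending write
		}
	}
}

func main() {
	f, err := os.CreateTemp("", "journal")
	if err != nil {
		panic(err)
	}
	queue := make(chan item, 128)
	go batchWriter(f, queue, 32)

	w := item{payload: []byte("event-1\n"), done: make(chan error, 1)}
	queue <- w
	if <-w.done == nil {
		println("acknowledged after fsync")
	}
}
```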
In some contexts (interrupts) we would call this "coalescing." (I don't work in databases, can't comment about terminology there.)
That was my immediate thought as well, under the assumption the lazy fsync is for performance. I imagine in some situations delaying the confirmation until the write actually happens is okay (depending on the delay). But it also occurred to me that if you delay enough, and you have a busy enough system, and your time to send the message is small enough, the number of connections you need to keep open can be some small or large multiple of what you'd need without delaying the confirmation to actual write time.
In practice, there must be a delay (from batching) if you fsync every transaction before acknowledging commit. The database would be unusably slow otherwise.
Right, I think the lazy thing implies that it would happen after "commit" is returned to the client, but it doesn't need to. The commit just needs to wait for "an" fsync call, not its own.
One of the perks of being distributed, I guess.
The kind of failure that a system can tolerate with strict fsync but can't tolerate with lazy fsync (i.e. the software 'confirms' a write to its caller but then crashes) is probably not the kind of failure you'd expect to encounter on a majority of your nodes all at the same time.
It is if they’re in the same physical datacenter. Usually the way this is done is to wait for at least M replicas to fsync, but only require the data to be in memory for the rest. It smooths out the tail latencies, which are quite high for SSDs.
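A toy sketch of that commit rule (replicas simulated with goroutines; everything here is illustrative): the commit is acknowledged once M of N replicas report a completed fsync, and the slow ones catch up afterwards.

```go
package main

import "fmt"

// waitForQuorum blocks until m replicas report a durable (fsynced) write.
// The remaining replicas may still hold the data only in memory.
func waitForQuorum(acks <-chan int, m int) {
	for got := 1; got <= m; got++ {
		id := <-acks
		fmt.Printf("replica %d fsynced (%d/%d required)\n", id, got, m)
	}
}

func main() {
	const n, m = 5, 3
	acks := make(chan int, n)
	for i := 0; i < n; i++ {
		// Each simulated replica acks once its (pretend) fsync completes;
		// a replica with a slow disk just acks later, without holding up
		// the commit - this is what smooths out the tail latency.
		go func(id int) { acks <- id }(i)
	}
	waitForQuorum(acks, m)
	fmt.Println("commit acknowledged to client")
}
```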
> It smooths out the tail latencies, which are quite high for SSDs.
I'm sorry, tail latencies are high for SSDs? In my experience, the tail latencies are much higher for traditional rotating media (tens of seconds, vs 10s of milliseconds for SSDs).
You can push the safety envelope a bit further and wait for your data to only be in memory in N separate fault domains. Yes, your favorite ultra-reliable cloud service may be doing this.
> To have better performance in benchmarks
Yes, exactly.
Massively improves benchmark performance. Like 5-10x
/dev/null is even faster.
/dev/null tends to lose a lot more data.
Durability comes through replication and distribution, and lazy fsync gives better throughput by letting more writes build up within the window.