Comment by koverstreet
15 hours ago
I think their claims of write amplification reduction are a bit overstated for more realistic workloads.
It's true that b-trees aren't ideal in that respect, and you will see some amount of write amplification - but not enough that it should be a major consideration, in my experience.
You really have to take working set size and cache size into account to make any judgments there; your b-tree writes should be driven by journal/WAL reclaim, which will buffer up updates.
A purely random update workload will kill a conventional b-tree on write amplification - as I mentioned, that's the absolute worst-case scenario for a b-tree. But it just doesn't happen in the real world.
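To make the batching point concrete, here's a minimal back-of-the-envelope simulation (not bcachefs code - the node size, key size, and batch size are made-up parameters) showing how buffering random updates in a journal before flushing collapses the per-update node rewrites:

```python
import random

# Illustrative parameters, not any real filesystem's values.
NODE_SIZE = 4096                  # bytes per btree node
KEY_SIZE = 64                     # bytes per logical update
KEYS_PER_NODE = NODE_SIZE // KEY_SIZE
NUM_NODES = 1000
NUM_UPDATES = 100_000

rng = random.Random(1)
updates = [rng.randrange(NUM_NODES * KEYS_PER_NODE) for _ in range(NUM_UPDATES)]

# Worst case: each random update immediately rewrites its whole node.
unbatched_bytes = NUM_UPDATES * NODE_SIZE

# With a journal/WAL buffering BATCH updates before flushing, each node
# dirtied within a batch is written out once, no matter how many of its
# keys were updated in that window.
BATCH = 10_000
batched_bytes = 0
for i in range(0, NUM_UPDATES, BATCH):
    dirty_nodes = {u // KEYS_PER_NODE for u in updates[i:i + BATCH]}
    batched_bytes += len(dirty_nodes) * NODE_SIZE

logical_bytes = NUM_UPDATES * KEY_SIZE
print("write amp, unbatched:", unbatched_bytes / logical_bytes)
print("write amp, journaled:", batched_bytes / logical_bytes)
```

The bigger the cache and journal relative to the working set, the more updates land in already-dirty nodes, which is exactly why working set vs. cache size dominates the outcome.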
For the data I can give you, that would be bcachefs's hybrid b-tree: large btree nodes (typically 256k) which are internally log structured; I'd consider it a minor variation on a classical b-tree. The log structuring means that we can incrementally write only the dirty keys in a node, at the cost of some compaction overhead (drastically less than a conventional LSM).
In actual real-world usage, when I've looked at the numbers (not recently, so this may have changed), we're always able to do giant, highly efficient b-tree writes - the journal and in-memory cache batch things up as much as we want - which means write amplification is negligible.
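A toy model of that "internally log structured node" idea (purely illustrative - the class and field names are mine, and writes are counted in keys rather than bytes): inserts append only the dirty key to the node's log, and a full node rewrite happens only at compaction:

```python
class LogStructuredNode:
    """Toy btree node that appends dirty keys to an internal log
    instead of rewriting the whole sorted block on every update."""

    def __init__(self, compact_threshold=16):
        self.base = {}                  # last-compacted portion of the node
        self.log = []                   # (key, value) pairs appended since
        self.compact_threshold = compact_threshold
        self.keys_written = 0           # crude stand-in for bytes written

    def insert(self, key, value):
        self.log.append((key, value))
        self.keys_written += 1          # write only the dirty key
        if len(self.log) >= self.compact_threshold:
            self.compact()

    def compact(self):
        # Fold the log into the base and rewrite the node once.
        for k, v in self.log:
            self.base[k] = v
        self.log.clear()
        self.keys_written += len(self.base)

    def lookup(self, key):
        for k, v in reversed(self.log):  # newest log entry wins
            if k == key:
                return v
        return self.base.get(key)
```

Compared to rewriting the node on every insert, the extra write cost is one full-node rewrite per `compact_threshold` updates - the compaction overhead mentioned above.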
Also, you can use dense B+-trees for reads (possibly with Bloom filters or the like if you expect, or profile, a high fraction of negative lookups), use an LSM to eventually compact, and get both SSD/ZNS-friendly write patterns and full freedom to compact a layer only once its finer-grained state is no longer relevant to any MVCC/multi-phase-commit scheme. Being able to, e.g., run a compression algorithm until you just exceed the storage page size, take its state from just before it exceeded, and begin the next bundle with the entry that pushed you over the page size... it's quite helpful when storage space or IO bandwidth is somewhat scarce.
If you're worried about the last layer becoming a giant, unmanageably large B+-tree, just shard it in key space in a similar way, so you don't need much free temporary working space on the SSD to stream the freshly compacted data to while the inputs to the compaction are still serving real-time queries.
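The compression trick described above can be sketched like this (a deliberately simple recompress-from-scratch version using zlib; a real implementation would snapshot the compressor's state rather than recompressing, and `PAGE_SIZE` is an arbitrary stand-in):

```python
import random
import zlib

PAGE_SIZE = 512  # assumed storage page size in bytes

def pack_bundles(entries, page_size=PAGE_SIZE):
    """Greedily pack entries so each compressed bundle fits in a page.
    When adding an entry would push the compressed size past the page
    size, emit the state from just before it exceeded, and start the
    next bundle with the entry that made it exceed."""
    bundles, current = [], []
    for entry in entries:
        candidate = current + [entry]
        if len(zlib.compress(b"".join(candidate))) > page_size and current:
            bundles.append(zlib.compress(b"".join(current)))
            current = [entry]            # offending entry opens the next bundle
        else:
            current = candidate
    if current:
        bundles.append(zlib.compress(b"".join(current)))
    return bundles

# Demo: pack pseudo-random (poorly compressible) entries.
rng = random.Random(3)
entries = [bytes(rng.getrandbits(8) for _ in range(200)) for _ in range(30)]
bundles = pack_bundles(entries)
```

A bundle can exceed the page size only when a single entry on its own compresses larger than a page; everything else lands within the page budget by construction.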
Of course mileage may vary with different workloads, but are there any good benchmarks/suites to use for comparison in cases like these? They used YCSB, but I don't know whether those workloads ([1]) are relevant to modern/typical access patterns, or whether they're applicable to SQL databases.
You thinking about running some benchmarks in a bcachefs branch (:pray:)?
I want to see this data structure prototyped in PostgreSQL.
[1]: https://github.com/brianfrankcooper/YCSB/tree/master/workloa...
I've got microbenchmarks for the bcachefs btree here: https://evilpiepirate.org/git/bcachefs.git/tree/fs/bcachefs/...
They're ancient; I only have pure random and sequential benchmarks - no Zipf distribution, which really should be included.
Feel free to play around with them :) I could even dig up the driver code, if you want.
I've always been curious about PostgreSQL's core b-tree implementation. I ran into a PostgreSQL developer at a conference once, and we exchanged a few words - enough, as I recall, to get me intrigued - but I never learned anything more about it.
In a system as big, complex, and well-optimized as either bcachefs or Postgres, the core index implementation is no longer the main consideration - there are layers and layers, and the stuff that's fun to optimize and write papers about eventually gets buried (and you start thinking a lot more about how to lay out your data structures, and less about optimizing the data structures themselves).
But you know that in something like that there are going to be some clever tricks that few people know about or even remember anymore :)
I think a better candidate to prototype would be SQLite, at least to get a better sense of how the bf-tree would behave in the real world.