I realized after a few years of doing it that my strategy for keeping Wikis useful is to treat them as B-Trees.
When the landing page gets too full/too many outgoing links, I start pushing links and paragraphs down into the child pages, to leave space for a fair share of timely links and on-boarding docs.
Similar and older links get pushed down into the sibling that best represents the topic. Then if the destination page is now too big, similar and older links get pushed down to their children. Eventually all of the outdated docs are three levels down from the landing page, where only historians and experts will see them. And sometimes as we finally decide how part of the system really should work, siblings get combined into one page, minus the speculative work that gets pushed down deeper in the tree. It works remarkably well. At the end of the day documentation is a search problem.
I highly recommend it for a Friday afternoon exercise when you want to be productive but you know starting a new task is a complete waste of time.
Do you have a recommendation for Wiki software you like to use? My team is in need of an internal knowledge base, and I like the structure of wikis. Most of the SaaS products I've tried or looked at are a bit too shiny/fancy and don't seem to match my mental model of how a wiki-style knowledge base should work.
Oxide Computer published their Request for Discussion (RFD) site software [1], which they use for internal documentation and discussion. Many RFDs are published to the public, but some are private. [2] They talk about how they use it and where it came from in their most recent podcast: RFDs: The Backbone of Oxide [3].
I suspect this would be a good 'substitute' for documents that are often hosted on internal wikis.
[1]: https://github.com/oxidecomputer/rfd-site
[2]: https://rfd.shared.oxide.computer/rfd/0001
[3]: https://oxide.computer/podcasts/oxide-and-friends/2065190
My recommendation is, if you want a wiki for developers, use something that is based on a markup language, sources pages from a git repo, and does not do too much magic behind the scenes. This will result in a better-maintained and better-liked wiki than any bloated SaaS wiki.
I don't think it really matters which you use. I've unfortunately been stuck in Atlassian for ages.
But if you were shopping for one: from the standpoint of keeping the docs working, being able to see missing pages and to see incoming links to a page are both pretty helpful. I kinda miss the latter.
If you can self-host, wiki.js is easy to set up in a Docker container. MediaWiki (what Wikipedia runs on) is pretty easy too.
If you have a small team, Obsidian and a syncing solution like git or obsidian sync might work.
I was able to work with my company's IT team to set up a wiki which is only accessible from within the network (including by VPN) and is hosted on a VPS.
Confluence, but we're already in deep with Atlassian.
Thanks for the amazing visuals. My team and I worked on B+Tree indexing support on top of Aerospike, since we have several huge data sets (around 5 TB), each belonging to a property that is supposed to have its own ordered-table index.
The challenging part was evicting expired keys from the B+Tree, since inserted keys carry a TTL. We decided to fuse only one branch level, and only within the first sibling leaf nodes, because cleaning up all the way up the tree would be expensive: it would cause high lock contention and slow things down substantially while keys are being inserted/deleted/updated.
We also had to shard on top of the B+Tree to speed things up and reduce the lock contention. That way we know which shard a key belongs to, and we lock just that branch before performing any create/update/delete, so we can run highly concurrent operations across multiple shards/branches.
The cleanup process has a caveat: the B+Tree can end up unbalanced. We had to provide an index-rebuild feature to fix the gaps when necessary and avoid extra cleanup operations.
Again, thanks for the visuals.
You're welcome! Sharding with B tree indexes... Hmmm, I know a company that does that.
This is why you should _never_ make your UUID column the primary key.
For one, it's enormous. Now you have to copy that 128bit int to every side of the relation.
Two, in most cases it's completely random, unless you had the forethought to use something other than UUIDv4 (gen_random_uuid). So now you have a bunch of humongous random numbers clogging up your indexes and duplicated everywhere for no good reason.
Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys. Your database will be very happy!
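For concreteness, a minimal Postgres sketch of that layout (table and column names are just illustrative):

    -- Internal relations join on the narrow bigserial PK; only the
    -- application-facing identifier is a UUID.
    CREATE TABLE users (
        id        bigserial PRIMARY KEY,                          -- internal FK target
        public_id uuid NOT NULL UNIQUE DEFAULT gen_random_uuid(), -- exposed to clients
        email     text NOT NULL
    );

    CREATE TABLE orders (
        id        bigserial PRIMARY KEY,
        user_id   bigint NOT NULL REFERENCES users (id),          -- 8 bytes per row, not 16
        placed_at timestamptz NOT NULL DEFAULT now()
    );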
> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys
These recommendations are database-specific. They definitely hold for MySQL, since its row storage hinges on the primary key (and other indices yield that primary key). To keep the page cache hot, you need to always add to the same page, so an incrementing primary index is preferred. A UUIDv4 secondary index would still have cold page caches, though, defeating the purpose (unless you use UUIDv7).
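For MySQL, short of UUIDv7, one common trick (a sketch: MySQL 8's UUID() returns a v1, time-based UUID, and UUID_TO_BIN with the swap flag moves the timestamp bits to the front so inserts stay roughly append-ordered; the table name is illustrative):

    -- Store as BINARY(16), time-ordered, to keep the hot page hot
    CREATE TABLE t (
        id BINARY(16) PRIMARY KEY
    );
    INSERT INTO t (id) VALUES (UUID_TO_BIN(UUID(), 1));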
But in CockroachDB, the data is distributed. Random INSERTs actually have a speedup over sequential ones, as those insertions can happen in parallel (on different machines). Similarly, SELECTs are faster if the primary index is random, because a single machine is not overloaded; it effectively acts as a load balancer.
It depends on the use case and performance goals. You may want to distribute the rows that you insert, and then a random UUID makes sense. However, that is too scattered for B-Tree indexes, and the problem is not only the cache but the amount of modification due to leaf block splits. This includes MySQL, which stores the primary key in a B-Tree index. Other use cases benefit from colocating rows that are inserted together. Think of timeseries, or simply an order-entry table where you query the recent orders. A sequence makes sense there, to get a good correlation between the index (on time) and the primary key. This avoids too many random reads with low cache hits.
It is wrong to think that distributed databases do not need sequences. YugabyteDB allows them: you use hash sharding to distribute the values over a small number of hash ranges, so they don't all go to the same place, but are not scattered across the whole database either. CockroachDB and Spanner don't have hash sharding, and that's why they do not recommend sequences. There are also use cases where range sharding on the sequence is good: when you don't need to distribute the data ingest but benefit from colocation when querying.
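A YSQL sketch of that (assuming YugabyteDB's HASH qualifier in the primary key, which is what selects hash sharding; names are illustrative):

    -- Sequence-generated values get spread across a fixed set of
    -- hash ranges instead of all landing on one hot tablet.
    CREATE TABLE events (
        id      bigserial,
        payload jsonb,
        PRIMARY KEY (id HASH)
    );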
Yeah, Spanner is also clear about this. It doesn't even have sequences, and their docs say to use random pkeys rather than time-dependent things like UUIDv7.
If you sort UUIDs, there will be a lot of common prefixes. B-trees implicitly sort data, so you can factor the common prefix out of all the keys on a B-tree page.
But!
UUIDs are random, and the B-tree will have increased fragmentation after a short while.
I once tried to insert a puny 1 million scale-free graph edges into a B-tree (BerkeleyDB) in 1K batches, and it failed miserably: I waited for an hour and then killed it. LSM trees were orders of magnitude faster even at 100K edges, so BDB showed that B-trees are no match there.
B-trees are semi-static data structures; they are hard to rebuild incrementally if the data is random. But they shine if input keys are sorted.
Use UUIDs however you like if your storage engine is an LSM tree. Use staged sorting (an LSM tree in disguise) if you use a B-tree.
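In SQL terms, staged sorting can be approximated by bulk-loading into an unindexed staging table and then inserting in key order (a sketch; the table names are hypothetical):

    -- The B-tree on edges grows mostly by appends rather than by
    -- random page splits, because the input arrives sorted.
    INSERT INTO edges (src, dst)
    SELECT src, dst
    FROM staging_edges
    ORDER BY src, dst;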
You're assuming that people store UUIDs as a 128-bit int. That's overly generous: I know people who use varchar without a second thought, more than doubling the storage requirement!
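For scale, in Postgres terms (column names are illustrative):

    -- A native uuid is 16 bytes; the same value spelled out as text
    -- is 36 characters plus a length header.
    CREATE TABLE ids (
        id_bin  uuid NOT NULL,        -- 16 bytes
        id_text varchar(36) NOT NULL  -- ~37 bytes on disk
    );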
Won't you still need indexes on those UUIDs anyway? And possibly have to do more joins to resolve them?
> Won't you still need indexes on those UUIDs anyway?
Depends on how index pages are structured in the DB. With MySQL (assuming the InnoDB engine) the primary key is pretty much always a clustered index, and in MS SQL Server this is the case more often than not too. This means that any page expansion due to splits from the randomness of the inserted data affects the whole table, not just the key index, and rebuilding to claim back the wasted space later takes a lot longer, as you are moving all your base data around. With the UUID as a supplementary key, likely indexed on its own, all that gets enlarged by excess page splitting is those 128-bit values, not the entire rows, and an index rebuild to fix that up after the fact moves a lot less data around.
So yes, you have an extra index on top of your primary key and other additional ones needed by your apps and reports, but not having a random UUID as your clustering key can be a significant benefit. Using more ordered UUIDs minimises this difference considerably though.
Even ignoring the random-key-causes-page-splits issue (say you've mitigated it with more ordered UUIDs), a wider primary key increases the size of all supplementary indexes, assuming MySQL arranges things similarly to SQL Server: rather than having a hidden page/row identifier (as SQL Server does when no clustered key is defined; it calls such arrangements heap tables), each supplementary index includes the clustering key value with every row. So while having a 32-bit primary key for internals and the 128-bit UUID as an extra key adds 4 bytes per row (for the extra INT32) to the base data, it saves 12 bytes from each row in each non-clustered index (as the INT32 is included, not the UUID).
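A T-SQL sketch of that arrangement (names are illustrative):

    -- The narrow clustered key keeps every nonclustered index narrow,
    -- since each of their rows carries the clustering key value.
    CREATE TABLE dbo.Orders (
        OrderId    int IDENTITY PRIMARY KEY CLUSTERED,  -- 4 bytes per index row
        PublicId   uniqueidentifier NOT NULL DEFAULT NEWID(),
        CustomerId int NOT NULL
    );

    CREATE UNIQUE NONCLUSTERED INDEX IX_Orders_PublicId
        ON dbo.Orders (PublicId);  -- each row here carries OrderId, not vice versa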
> And possibly have to do more joins to resolve them?
Usually not. When you have both, the INT32 (or sometimes INT64) surrogate key is for all internal use and the UUID only for external references, so the UUID matters only as the initial filter and is likely not taking part in a JOIN at all. After the initial filter finds the item you want in the query's main table, the JOINs that collect data from other tables all go by the surrogate (INT) keys. The UUID is almost never used as a foreign key reference in this arrangement.
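E.g., a sketch against the hypothetical users/orders layout from earlier in the thread:

    -- The UUID appears once, to locate the entry row; the JOIN itself
    -- runs on the narrow surrogate keys.
    SELECT o.id, o.placed_at
    FROM users u
    JOIN orders o ON o.user_id = u.id  -- surrogate-key join
    WHERE u.public_id = $1;            -- UUID only as the initial filter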
You only need one index on the UUID, instead of an index everywhere the UUID is referenced from other tables.
Yes, if a client gives a UUID and you want to look up relations to that table, you'll have to join with that table. And the index on the UUID has its own costs. It's still better this way. If for some rare reason this join is too expensive, there are other options like using a cache or denormalizing the tables slightly. The alternative of a UUID pkey will seriously slow down every join.
One question is whether you use random or k-sorted UUIDs for the secondary key. K-sorted is likely faster, but in many cases the difference is small enough that you'd rather take the easy random route, which is also guaranteed not to leak any info.
> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys.
If you are worried about size ballooning rather than the randomness, make sure you actually need more than standard 32-bit integers (well, 31-bit, as they tend to be signed and we usually start from 0 or 1, not -2,147,483,648).
Avoiding roll-over is sometimes a valid concern with 32-bit INTs, but I've seen people specify 64-bit surrogate keys when their data is not likely to grow to where 31 bits would be an issue in the next few hundred years… There is an argument that 64-bit values are more efficient on modern CPUs, but I've not seen any good tests showing a measurable difference in the context of DB keys, where there are other overheads to consider due to the doubling in size.
It's not just about the data size. There are reasons your sequence can increment without a row actually being inserted.
Large keys are not a problem. See the SQLite3 docs about WITHOUT ROWID tables. Essentially any interesting multi-column key will be large, so don't worry about it. On the other hand, while INTEGER PRIMARY KEYs are nice and small, you end up with one extra index in SQLite3 if you have an integer key and also some other key that is really your primary key (or equivalently if you have the primary key you want and you didn't use WITHOUT ROWID).
A covering index always has large keys too.
In SQLite3, the trade-off is that B-trees have either integer keys and arbitrary values, or arbitrary keys and integer values. When using WITHOUT ROWID, the whole row is the key. (IIRC)
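A minimal SQLite illustration (the table name is just an example):

    -- With WITHOUT ROWID, the declared PRIMARY KEY is the b-tree key
    -- itself: no hidden rowid, no separate index b-tree for the key.
    CREATE TABLE kv (
        k TEXT PRIMARY KEY,
        v BLOB
    ) WITHOUT ROWID;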
It's very interesting to me that we have to keep telling people this, but it hasn't become part of the "hive folk knowledge" we all seem to develop. I think DB vendors have been sleeping on an opportunity to encourage better practices.
DB vendors haven't done enough to offer ID generation as a core part of their system. Ideally "what ID do I use for this object" shouldn't even be a consideration, because of course the database should handle it. It is the system of record after all. Yet your options are pretty much limited to UUID or a basic incremental counter that fails to meet any real world production constraints.
Because we keep telling people the opposite in academic settings, where the pkey is usually some actual data field(s) you expect to be unique. There's some point to teaching 3NF this way, but it shouldn't be taken literally.
There are cases where it is really useful to have only universally unique IDs, e.g. if you have multi-tenant systems and at some point you need to move tenants to a different server/instance or merge tenants on the same DB.
They do mention that it's good to have both. Also, using a newer UUID version that is more temporally sortable is wise.
> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys. Your database will be very happy!
Isn't there also a hash-based index for random keys?
Maybe? Not really in Postgres, where the default index is a B-Tree (hash indexes exist but are rarely the right choice). A hash-based index would be terrible for disk-seeking, in any case.
Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can't keep rows ordered on disk, so it doesn't matter if you use UUIDs or not; you're SOL.
In fact, you could use a 512-bit primary key with MySQL and still, range queries would be 10x faster and take 10x less space in RAM than in Postgres. This article explains why that is.
Clustered tables and sequential keys have their own downsides though like lock contention on the "last" page.
> Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can't keep rows ordered on disk, so it doesn't matter if you use UUIDs or not; you're SOL.
Why do you believe a database should order rows on disk?
The cookie modal doesn't work on Firefox mobile, and it takes up half the height.
Why not let the user set that up in their browser?
It has such an enticing "reject optional" button next to "accept all" and I was so impressed that they'd actually made the opt-out flow as easy as the opt-in flow... until I tried to use it. It's just maliciously incompetent at this point.
The "reject all" button works better now, thanks for bringing this up.
I'm sorry about this! Working on a fix.
> it takes up half the height.
Ah so that's what the "planet scale" name refers to /j
It takes all the screen space from latitude 0° to -90°!!!
Nor Chrome desktop
Or Chromium mobile
Beautiful interactive visualizations; this is top shelf in terms of pedagogy and vulgarization.
Slightly off topic: learnt a new word today, 'vulgarization', which seems to have a completely different meaning from the obvious one. Thanks.
Note that, in the abstract, “vulgar” means “common” (as in “Vulgar Latin”). Indeed, its negative connotations come from that same sense: “common” people are unrefined.
That's the goal! Thanks for the kind words.
I have been looking for something like this for so long; amazing post. I would love a section on composite indexes. That is something I still have a hard time visualizing…
I invented a new way of visualising composite ones when I wrote my book about indexing. You can scroll down on the landing page to the fundamentals chapter to see it.
https://sqlfordevs.com/ebooks/indexing
Thanks Sean! Yeah that would be very cool to have an interactive visual for that as well. So many possibilities!
Awesome article!
I only wish the reference to InnoDB storing data in the B-tree itself had noted that this is otherwise referred to as a clustered index.
MyISAM, before it, was non-clustered.
Oracle and others let you choose.
Good point! Yes, clustered index is one of the correct terms.
Great piece of education. Interactive demos like these help a lot.
> If our disk block and B-tree node is 16k, and our keys, values, and child pointers are all 8 bits, this means we could store 682 key/values with 683 child pointers per node. A three level tree could store over 300 million key/value pairs (682 × 682 × 682 = 317,214,568).

Should that be 8 bytes per element?
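For what it's worth, the arithmetic only works out with bytes. Assuming a node holds n keys, n values, and n+1 child pointers:

    (682 + 682 + 683) × 8 bytes = 16,376 bytes, which just fits in a 16,384-byte node

With 8-bit elements the same node could hold thousands of entries, so 8 bytes per element seems intended.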
May I ask what the v0, v1, ... v10 in those graphs mean? Different pages?
I think it's "value #0", "value #1", etc. Confused me too.
It's meant to represent the values associated with the keys being inserted. Having the "v" for "value" there helps to differentiate it.