Comment by benwilber0

2 years ago

This is why you should _never_ make your UUID column the primary key.

For one, it's enormous. Now you have to copy that 128-bit int to every side of the relation.

Two, in most cases it's completely random, unless you had the forethought to use something other than UUIDv4 (gen_random_uuid()). So now you just have a bunch of humongous random numbers clogging up your indexes and duplicated everywhere for no good reason.

Use regular bigserial (64-bit) PKs for internal table relations and UUIDs (128-bit) for application-level identifiers and natural keys. Your database will be very happy!
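
For example, a minimal Postgres sketch of that layout (table and column names are just illustrative):

    CREATE TABLE users (
        id    bigserial PRIMARY KEY,   -- internal 64-bit surrogate key for relations
        uuid  uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE,  -- application-level identifier
        email text NOT NULL
    );

    CREATE TABLE orders (
        id      bigserial PRIMARY KEY,
        user_id bigint NOT NULL REFERENCES users (id),  -- the join stays 64-bit
        uuid    uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE
    );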

> Use regular bigserial (64-bit) PKs for internal table relations and UUIDs (128-bit) for application-level identifiers and natural keys

These recommendations are database-specific. They are definitely true of MySQL, since its row storage hinges on the primary key (secondary indices yield that primary key). To keep the page cache hot, you want inserts to always land on the same page, so an incrementing primary key is preferred. A UUIDv4 secondary index would still have cold page caches, though, defeating the purpose. (Unless you use UUIDv7.)

But in CockroachDB, the data is distributed. Random INSERTs are actually faster than sequential ones, because those insertions can happen in parallel (on different machines). Similarly, SELECTs are faster if the primary index is random, because no single machine is overloaded; the random key effectively acts as a load balancer.

  • It depends on the use cases and performance goals. You may want to distribute the rows that you insert, and then a random UUID makes sense. However, it scatters writes too widely for B-Tree indexes, and the problem is not only cache misses but the volume of modifications due to leaf block splits. This includes MySQL, which stores the table in a B-Tree ordered by the primary key. Other use cases may benefit from colocating the rows that are inserted together. Think of timeseries, or simply an order-entry table where you query the recent orders. A sequence makes sense there, to have a good correlation between the index (on time) and the primary key. This avoids too many random reads with low cache hits.

    It is wrong to think that distributed databases cannot use sequences. YugabyteDB allows it: you use hash sharding to distribute the values over a small number of hash ranges, so that they don't all go to the same place but are not scattered across the whole database either. CockroachDB and Spanner don't have hash sharding, and that's why they do not recommend sequences. There are also use cases where range sharding on the sequence is good: when you don't need to distribute the data ingest, but benefit from colocation when querying.
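
    As a sketch of those two layouts in YugabyteDB's YSQL (table and column names invented):

        -- Hash sharding: sequential ids are spread over a small set of hash
        -- ranges, so inserts don't all hit one node but aren't scattered
        -- across the whole database either
        CREATE TABLE events_hash (
            id bigserial,
            payload text,
            PRIMARY KEY (id HASH)
        );

        -- Range sharding: sequential ids stay colocated, which helps
        -- "query the recent rows" workloads
        CREATE TABLE events_range (
            id bigserial,
            payload text,
            PRIMARY KEY (id ASC)
        );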

  • Yeah, Spanner is also clear about this. It doesn't even have sequences, and their docs say to use random pkeys rather than time-dependent things like UUIDv7.

If you sort UUIDs, there will be a lot of common prefixes. B-trees implicitly sort data, so you can factor the common prefix out of all keys on a B-tree page.

But!

UUIDs are random and B-tree will have increased fragmentation after a short while.

I once tried to insert a puny 1 million scale-free graph edges into a B-tree (BerkeleyDB) in 1K batches and it failed miserably: I waited for an hour and then killed it. LSM trees were orders of magnitude faster at 100K edges, so BDB showed that B-trees are no match there.

B-trees are semi-static data structures: they are hard to rebuild incrementally if the data is random. But they shine if the input keys are sorted.

Use UUIDs however you like if your storage engine is an LSM tree. Use staged sorting (an LSM tree in disguise) if you use a B-tree.
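
A rough sketch of staged sorting in Postgres terms (the edges table and file path are assumptions): buffer the incoming batch, then feed the B-tree in key order:

    -- Load the batch into an unindexed staging table first
    -- (assumes an existing edges table with an id key)
    CREATE UNLOGGED TABLE edges_staging (LIKE edges INCLUDING DEFAULTS);
    COPY edges_staging FROM '/tmp/batch.csv' (FORMAT csv);

    -- Insert in key order so the B-tree sees sorted runs, not random keys
    INSERT INTO edges
    SELECT * FROM edges_staging
    ORDER BY id;

    TRUNCATE edges_staging;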

You're assuming that people store UUIDs as 128-bit ints. That's overly generous. I know people who use varchar without a second thought, more than doubling the storage requirement!
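
For reference, the difference in Postgres terms (a rough sketch):

    -- 16 bytes per value, compared as a 128-bit integer
    CREATE TABLE good (id uuid PRIMARY KEY);

    -- 36 ASCII characters plus overhead, compared as a string:
    -- more than double the storage for the same identifier
    CREATE TABLE bad (id varchar(36) PRIMARY KEY);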

Won't you still need indexes on those UUIDs anyway? And possibly have to do more joins to resolve them?

  • > Won't you still need indexes on those UUIDs anyway?

    Depends on how index pages are structured in the DB. With MySQL (assuming the InnoDB engine) the primary key is pretty much always a clustered index, and in MS SQL Server this is the case more often than not too. This means that any page expansion due to splits from the randomness of the inserted data affects the whole table, not just the key index, and rebuilding to reclaim the wasted space later takes a lot longer because you are moving all your base data around. With the UUID as a supplementary key, likely indexed on its own, all that gets enlarged by excess page splitting is those 128-bit values, not the entire rows, and an index rebuild to fix that up after the fact moves a lot less data around.

    So yes, you have an extra index on top of your primary key and other additional ones needed by your apps and reports, but not having a random UUID as your clustering key can be a significant benefit. Using more ordered UUIDs minimises this difference considerably though.

    Even ignoring the random-key-causes-page-splits issue, if you've mitigated it with more ordered UUIDs, a wider primary key increases the size of all supplementary indexes assuming MySQL arranges things similarly to SQL Server: rather than having a hidden page/row identifier (as SQL Server does without any clustered key defined, it calls such arrangements heap tables), each supplementary index includes the clustering key value with every row. So while having a 32-bit primary key for internals and the 128-bit UUID as an extra key adds 4 bytes per row (for the extra INT32) to the base data, it saves 12 bytes from each row in each non-clustered index (as the INT32 is included, not the UUID).

    > And possibly have to do more joins to resolve them?

    Usually not. When you have both, the INT32 (or sometimes INT64) surrogate key is for all internal use and the UUID only for external references, so the UUIDs only matter as the initial filter and likely take no part in a JOIN at all. After the initial filter finds the item you want in the main table of the query, the JOINs that collect data from other tables all use the surrogate (INT) keys. The UUID is almost never used as a foreign key reference in this arrangement.
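
    For instance, a sketch of that access pattern (schema invented for illustration):

        -- The UUID appears once, as the initial filter; the JOIN itself
        -- runs on the narrow INT surrogate keys
        SELECT o.id, u.email
        FROM orders AS o
        JOIN users  AS u ON u.id = o.user_id
        WHERE o.uuid = '00000000-0000-0000-0000-000000000000';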

  • You only need one index on the UUID, instead of an index everywhere the UUID is referenced from other tables.

  • Yes, if a client gives a UUID and you want to look up relations to that table, you'll have to join with that table. And the index on the UUID has its own costs. It's still better this way. If for some rare reason this join is too expensive, there are other options like using a cache or denormalizing the tables slightly. The alternative of a UUID pkey will seriously slow down every join.

    One question is whether you use random or k-sorted UUIDs for the secondary key. K-sorted is likely faster, but in many cases the difference is small enough that you'd rather take the easy random route, which is also guaranteed not to leak any info.
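
    In Postgres terms the two options look like this (note: uuidv7() only ships in very recent Postgres versions; older ones need an extension for k-sorted UUIDs):

        -- Random: guaranteed to leak nothing, scatters writes across the index
        SELECT gen_random_uuid();

        -- K-sorted (UUIDv7): timestamp-prefixed, so new keys land near each other
        SELECT uuidv7();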

> Use regular bigserial (64-bit) PKs for internal table relations and UUIDs (128-bit) for application-level identifiers and natural keys.

If you are worried about size ballooning rather than the randomness, make sure you actually need more than standard 32-bit integers (well, 31-bit, as they tend to be signed and we usually start from 0 or 1, not -2,147,483,648).

Avoiding roll-over issues is sometimes a valid concern with 32-bit INTs, but I've seen people specify 64-bit surrogate keys when their data is not likely to grow to where 31 bits would be an issue in the next few hundred years… There is an argument that 64-bit values are more efficient in modern CPUs, but I've not seen any good tests that show a measurable difference in the context of DB keys, where there are other overheads to consider due to the doubling in size.
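
As a back-of-the-envelope check (runnable in Postgres): even at one insert per second, nonstop, a signed 32-bit key lasts decades:

    SELECT 2147483647 / (60.0 * 60 * 24 * 365) AS years_at_one_insert_per_sec;
    -- ≈ 68 years before a signed INT32 rolls over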

  • It's not just about the data size. There are reasons your sequence can increment without a row actually being inserted.

    • Yep.

      1. Transactions that are explicitly rolled back after inserting into a table with an IDENTITY (or equivalent) column almost always leave such a gap. Though if this happens often enough to push the count up by an order of magnitude over time, you probably have a design problem…

      2. To reduce locking contention on the counter, many DBs cache a block of values (SQL Server allocates 1,000 at a time) beyond the persisted count, so after a shutdown, especially an unclean shutdown, or a fail-over using one of the clustering/mirroring options for HA, most of those cached values may be skipped. If this happens enough to be a long-term problem then you have significant infrastructure issues. Such a cache could be per-thread/-process (IIRC it isn't in SQL Server), which is one of the reasons IDENTITY column values can seem to be out of order in high-concurrency environments (though you should never care about the actual order of values in such columns, so that is a non-issue).

      3. In some DR processes where old DBs are restored, the values may be artificially bumped after restore if used values might have been exposed to an external service (to avoid the confusion caused if a number is used twice from that service's point of view). This is an extra argument against exposing internal ID values to external systems, and in favour of having an additional UUID-based identifier for such purposes.

      4. Also, though this is the most obvious one and hardly needs stating, deleted rows won't “return” their now-unused numbers for reuse. You should scale the type based on rows expected to be inserted, not rows expected to be kept long-term.

      If you are expecting hundreds of millions of rows in the lifetime of the table then a type larger than 32-bit is worth considering despite the extra storage needed in data and index pages. I'd not criticise considering it for tens of millions if you want to be more paranoid. Otherwise 32-bit is very likely to be fine, and 64-bit overkill.
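
      Case 1 is easy to demonstrate in Postgres, for instance (table name made up):

          CREATE TABLE t (id bigserial PRIMARY KEY, v text);

          BEGIN;
          INSERT INTO t (v) VALUES ('a');  -- consumes id 1
          ROLLBACK;                        -- the row is gone, the sequence value is not

          INSERT INTO t (v) VALUES ('b');
          SELECT id FROM t;                -- returns 2: id 1 is a permanent gap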

Large keys are not a problem. See the SQLite3 docs about WITHOUT ROWID tables. Essentially any interesting multi-column key will be large, so don't worry about it. On the other hand, while INTEGER PRIMARY KEYs are nice and small, you end up with one extra index in SQLite3 if you have an integer key plus some other key that is really your primary key (or, equivalently, if you have the primary key you want and didn't use WITHOUT ROWID).

A covering index always has large keys too.

In SQLite3 the trade-off is that B-trees have either integer keys and arbitrary values, or arbitrary keys and integer values. When using WITHOUT ROWID, the whole row is the key. (IIRC)
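
A quick SQLite3 sketch of the difference (column names invented):

    -- Rowid table: the blob primary key becomes a separate index over a
    -- hidden integer-keyed B-tree, i.e. one extra tree to maintain
    CREATE TABLE t1 (id BLOB PRIMARY KEY, data TEXT);

    -- WITHOUT ROWID: the blob key *is* the B-tree key; no hidden rowid,
    -- no extra index
    CREATE TABLE t2 (id BLOB PRIMARY KEY, data TEXT) WITHOUT ROWID;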

It's very interesting to me that we have to keep telling people this, but it hasn't become part of the "hive folk knowledge" we all seem to develop. I think DB vendors have been sleeping on an opportunity to encourage better practices.

  • DB vendors haven't done enough to offer ID generation as a core part of their system. Ideally "what ID do I use for this object" shouldn't even be a consideration, because of course the database should handle it. It is the system of record, after all. Yet your options are pretty much limited to UUID or a basic incremental counter that fails to meet any real-world production constraints.

    • Basic incremental counters work for most real-world production constraints. Most people are not going to create tables with 4.2 billion rows, even with failed inserts. If you are doing that, it's an extreme: either you very much know what you are doing, or you very much do not; I have seen both in production.

  • Because we keep telling people the opposite in academic settings, where the pkey is usually some actual data field(s) you expect to be unique. There's some point to teaching 3NF this way, but it shouldn't be taken literally.

There are cases where it is really useful to have only universally unique IDs, e.g. if you have multi-tenant systems and at some point you need to move tenants to a different server/instance or merge tenants on the same DB.

  • They do mention that it's good to have both. Also, using a newer UUID version that is more temporally sortable is wise.

    > Use regular bigserial (64-bit) PKs for internal table relations and UUIDs (128-bit) for application-level identifiers and natural keys. Your database will be very happy!

Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can’t order rows on disk, so it doesn’t matter whether you use UUIDs or not; you’re SOL.

In fact, you could use a 512-bit primary key with MySQL and still, range queries would be 10x faster and take 10x less space in RAM than in Postgres. This article explains why that is.

  • Clustered tables and sequential keys have their own downsides though like lock contention on the "last" page.

    • Not really a problem in practice.

      Most clustered tables don’t have a single last page. For example, it’s common to order a table by (user_id, pk) so a user’s data is grouped together. Dropbox did this.

      For the ones that do have a single last page, it’s easy to remove: you don’t have to use sequential IDs. Use a UUID.

      Basically this problem only happens when you cluster globally by mistake. Just don’t make that mistake.
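
      A MySQL/InnoDB-flavoured sketch of that per-user clustering (names invented):

          -- InnoDB clusters rows by the primary key, so this groups each
          -- user's rows together instead of creating one global last page
          CREATE TABLE files (
              user_id BIGINT NOT NULL,
              id      BIGINT NOT NULL AUTO_INCREMENT,
              path    TEXT   NOT NULL,
              PRIMARY KEY (user_id, id),
              KEY (id)  -- InnoDB requires the AUTO_INCREMENT column to lead some index
          ) ENGINE = InnoDB;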

  • > Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can’t order rows on disk, so it doesn’t matter whether you use UUIDs or not; you’re SOL.

    Why do you believe a database should order rows on disk?

    • In Postgres, every read happens in 8 KB increments.

      If your rows are not ordered on disk, the amount of data you have to load to query 100 rows is insane.

      A 10 KB query result (100 rows × 100 bytes) can require almost 1 MB of data to be loaded; that’s 99% waste.

      Postgres is 100x slower than other DBs for range queries.

      Every single other database in existence has this feature except Postgres.
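
      (For completeness: Postgres can physically reorder a table once with CLUSTER, but unlike a real clustered index it does not maintain that order for new writes. Table and index names here are made up.)

          -- One-time physical reorder; takes an exclusive lock, and the
          -- ordering decays as new rows land wherever there is free space
          CLUSTER orders USING orders_user_id_idx;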