Ask HN: We just had an actual UUID v4 collision...

1 day ago

I know what you're thinking... and I still can't believe it, but...

This morning, our database flagged a duplicate UUID (v4). I checked, thinking it may have been a double-insert bug or something, but no.

The original UUID was from a record added in 2025 (about a year ago), and today the system inserted a new document with a fresh UUIDv4 and it came up with the exact same one:

b6133fd6-70fe-4fe3-bed6-8ca8fc9386cd

We're using this: https://www.npmjs.com/package/uuid

I thought this was technically impossible and would never happen, and since we're not modifying the UUIDs in any way, I really wonder how that.... is possible!? We're literally only calling:

import { v4 as uuidv4 } from "uuid";

const document_id = uuidv4();

... and then insert into the database, that's it.

Additionally, the database only has about 15,000 records, and now one collision. Statistically... impossible.

Has that ever happened to anyone?! What in the...

This is surprisingly common.

The security of UUIDv4 is based on the assumption of a high-quality entropy source. This assumption is invalidated by hardware defects, normal software bugs, and developers not understanding what "high-quality entropy" actually means and that it is required for UUIDv4 to work as advertised.

It is relatively expensive to detect when an entropy source is broken, so almost no one ever does. They find out when a collision happens, like you just did.

UUIDv4 is explicitly forbidden for a lot of high-assurance and high-reliability software systems for this reason.

  • This is why CloudFlare has done what they did with the lava lamp wall. Not that the wall is such a great source of entropy on its own - I'm sure it's not their only source, but you can never have too many sources of entropy - but it makes it visible in a way that can grab those who don't fully understand the concepts of RNGs and how entropy plays into that.

    The more sources of entropy, the more closely you approach "perfect" randomization. And a large chunk of those entropy sources need to be non-deterministic. Even at a small scale, local applications running on local systems, like games, can use things like the mouse coordinates, the timings between button presses, or the exact frame count since game start before the player presses Start to greatly enhance randomness while still using PRNGs under the hood.

    Yes, for the latter, that's technically deterministic (and the older the game, the more deterministic it is - see TAS runs of old games obliterating the "RNG"). But when you have fifty different parameters feeding into the initial seed, that's fifty things an attacker would have to perfectly predict or replay (and there are other ways to avoid replay attacks that can be layered on top)

    If CloudFlare had fewer than 100 different sources of entropy, I'd be disappointed. And that's assuming their algorithm for blending those entropy sources into a single seed value is good.

    • > you can never have too many sources of entropy

      This is so true. And the beauty is that with algorithms, we don't even need to know much about the entropy to be able to extract it.

      There is the Von Neumann method of generating an unbiased coin from a biased coin (sketched in code below): throw it twice, check if you got HT or TH, and completely discard all HH or TT results. It doesn't matter if the coin you are using is 20% or 80%, the result will be a true 50/50.

      There are more modern algorithms that can be even better (in that they need fewer coin tosses if you have a very unbalanced coin).

      And then there is modern cryptographic hashing. Feed it all the bits you can. Collisions end up only happening in the real world if every single one of those bits is identical. So if you have actual entropy being fed, that cannot be controlled, predicted, or replicated, modern cryptography tells you that the end result is unique.
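
      A minimal sketch of the Von Neumann trick, with `biasedBit` standing in for any 0/1 source of unknown but fixed bias:

          // Von Neumann extractor: keep only HT/TH pairs; discard HH and TT
          function unbiasedBit(biasedBit) {
            for (;;) {
              const a = biasedBit();
              const b = biasedBit();
              if (a !== b) return a; // P(1,0) === P(0,1), so this bit is fair
            }
          }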

      5 replies →

    • > This is why CloudFlare has done what they did with the lava lamp wall.

      Interesting. I wonder how true it actually is that they use it like they claim here: https://www.cloudflare.com/learning/ssl/lava-lamp-encryption.... It's in one of their lobbies, so doesn't that make it susceptible to an attack in some way? I'm not knowledgeable enough to know, but I figured if they actually used that method, they'd have a more controlled environment.

      I also don't fully understand it. A large part of that wall is static. And the camera isn't going to pick up on the stochastic properties of the lava as much as exists in the real world. So it feels like their images will be very statistically similar.

    • The lava lamps are just for show.

      You can get entropy just by plugging an oscilloscope into a pile of dirt and cranking the gain up.

      1 reply →

  • Yep - I've seen legitimate-looking dups on bad hardware, and "there are a ton of trailing zeros" is also an incredibly common duplicate mode for some UUID libraries (like earlier Go ones that didn't validate the "requested N bytes, returned 3, you must re-request to get N-3 more" return values. It doesn't happen on most hardware or OSes, so people never check it, so it just comes up in production some day with tens of thousands of collisions).

  • Thanks for the insight! Mind expanding on what alternatives are being used in high reliability systems instead of UUIDv4?

    • In high-reliability systems a criterion for identifier design is easy detection of defective identifiers. This includes buggy systems and adversarial manipulation.

      The problem with UUIDs that rely on entropy sources is that it is computationally expensive to detect if the statistical distribution of identifiers is diverging from what you would expect from a random oracle. I've written systems that can detect entropy source anomalies but you'll want to turn it off in production.

      It is pretty cheap to sanity check most non-probabilistic identifier schemes. UUIDs that use broken hash algorithms (e.g. UUIDv3/5) or leak state (e.g. UUIDv7) are exposed to adversarial exploitation.

      The identifier scheme is dependent on the use case. Does the uniqueness constraint apply to the instance of the object or the contents of the object? Is the generation of identifiers federated across untrusted nodes? How large is the potential universe of identifiers?

      The basic scheme I've seen is a 128-bit structured value that has no probabilistic component. These identifiers can be encrypted with AES-128 when exported to the public, guaranteeing uniqueness while leaking no internal state. The benefit of this scheme is that it is usually drop-in compatible with standard UUID even though it is technically not a UUID and the internal structure can carry useful metadata about the identifier if you can decrypt it.

      Federated generation across untrusted nodes requires a more complex scheme, particularly if the universe of identifiers is extremely large. These intrinsically have a collision risk regardless of how the identifiers are generated.

      All of the standardized UUIDs really weren't designed with the requirements of scalable high-reliability systems in mind. They were optimized for convenience and expedience, which is a perfectly reasonable objective. Most people don't need an identifier system engineered for extreme reliability, even though there is relatively little cost to having one.

      5 replies →

    • The latest UUID (7?) uses half random gen, half timestamp. This not only makes it sortable by creation, but would also make a collision like this impossible.

      16 replies →

  • How is UUIDv4 to blame for a broken source of entropy? Or am I misinterpreting your words?

    • I wouldn't say it's "to blame", but it is more susceptible to bad RNG.

      If the RNG is bad, you'll get more benefit from adding non-random bits than you would from additional badly RNG'd bits.

      The probability of future collisions also rises the more IDs you generate. If you incorporate non-random bits, you can alleviate that:

      - timestamps make the collision probability not grow over time as you accumulate more existing UUIDs that could collide

      - known-distinct machine IDs make the collision probability not grow as you add more machines

    • I never blamed UUIDv4 for broken entropy sources. A broken entropy source breaks UUIDv4 even if you are using it correctly.

      There is a long history of broken entropy sources showing up in real systems. No matter how hard people try to prevent this it keeps happening. Consequently, a requirement for high-quality entropy sources is correctly viewed as an unnecessary and avoidable foot-gun in high-reliability software systems.

  • For a while we’ve been fixing telemetry-reported crash bugs in the project I maintain, and now hardware bugs are showing up with some frequency. I was amazed how common they are. Sometimes data values (e.g. SP register) are corrupted, but other times even infallible operations (e.g. loads of rodata constants) crash, indicating that the instruction itself was corrupted. So, yeah, I believe you’ll eventually see UUID collisions, but not because the underlying cryptanalysis was wrong.

  • > UUIDv4 is explicitly forbidden for a lot of high-assurance and high-reliability software systems for this reason.

    Hmm. What do those systems do for cryptography? Just assume it won't work and not rely on it at all?

    • In these kinds of systems the cryptographic components often aren't even accessible from the software. It isn't a thing you need to worry about.

      This makes it easier to audit for use of entropy sources in the software since there really isn't a valid use case for it.

  • Super simple to detect and try again.

    • A collision is simple to detect but it requires you to actually check, which is expensive at scale. The entire point of UUIDv4 is that you don't have to check for collisions because it should never happen. But if you don't check and it does happen you are in UB territory which is generally very bad.

      A risk of collision before it happens is non-trivial to detect but this is really what you'd want.

      11 replies →

  • Reading the UUID spec leads me to believe that good entropy is not even a requirement for any version:

    > Implementations SHOULD utilize a cryptographically secure pseudorandom number generator (CSPRNG) to provide values that are both difficult to predict ("unguessable") and have a low likelihood of collision ("unique").

    From https://www.rfc-editor.org/rfc/rfc9562.html#unguessability

    So I don't think technically we can say entropy or random numbers at all are even "required for UUIDv4 to work as advertised."

Funny story no one will believe, but it’s true. A good friend of mine joined a startup as CTO 10 years ago, high growth phase, maybe 200 devs… In his first week he discovered the company had a microservice for generating new UUIDs. One endpoint with its own dedicated team of 3 engineers …including a database guy (the plot thickens). Other teams were instructed to call this service every time they needed a new ‘safe’ UUID. My pal asked wtf. It turned out this service had its own DB to store every previously issued UUID. Requests were handled as follows: it would generate a UUID, then ‘validate’ it by checking its own database to ensure the newly generated UUID didn’t match any previously generated UUIDs, then insert it, then return it to the client. Peace of mind I guess. The team had its own kanban board and sprints.

  • > One endpoint with its own dedicated team of 3 engineers

    > The team had its own kanban board and sprints.

    My early jobs were at startups with limited resources. Every decision to build something or hire someone was carefully made after much consideration. This story would have looked like fiction to me at the time.

    Later in my career I joined a startup like this where every new concern someone could think up turned into a new microservice with new hires to form a new team. It didn't matter how small it was, everything was a reason to hire new people and form a new team. I sat in meetings where the express goal of the quarter was communicated as growing the engineering team.

    It was a weird time. We had this same situation where there were 3-4 person teams who had their own sprints and planning sessions where they would come up with more ways to make work for themselves. Some of them moved so slowly that they could spend entire sprints on tiny changes. Others were working on the most over-engineered solutions you'd ever seen for trivial problems.

    There was one meeting where I suggested we re-assign some people on a stable project to work on something that we needed urgently, but I got shut down. That would have removed another excuse to hire more people, which would have conflicted with someone's KPIs to grow the engineering team to a specific number

      > My early jobs were at startups with limited resources. Every decision to build something or hire someone was carefully made after much consideration. This story would have looked like fiction to me at the time.

      This was pre-2015

      > Later in my career I joined a startup like this where every new concern someone could think up turned into a new microservice with new hires to form a new team. It didn't matter how small it was, everything was a reason to hire new people and form a new team. I sat in meetings where the express goal of the quarter was communicated as growing the engineering team.

      This was post-2015

      ---

      Am I right?

      You're describing exactly what I've tried to express in various comments. There was a point in the latter half of the 2010s when it became genuinely hard to find tech work where you were building useful stuff. Startups became increasingly absurd, and the focus of their engineering teams even more so.

      In 2019 I was working for a company who were so desperate to hire new engineers that at one point they decided to just start offering jobs to candidates who failed interviews. It was absolutely insane.

      2 replies →

    • > someone's KPIs to grow the engineering team to a specific number

      Sigh!

      Specific numbers!

      I believe a more common specific number is the yearly EBITDA or ARR (or whatever other acronyms in this alley I care zero about memorizing) nowadays, for the investors' sake. Like in our company. Since we were acquired - and some time before - the only talk in company meetings is EBITDA, ARR, compared to a number dreamed up by someone and to be reached in 5 years' time. Specific financial results in a specific timeframe. Our goals are specific numbers sitting above today's numbers by a chosen margin. The company talk is marketing campaigns and reach, campaign efficiency measurements, pricing strategies, subscription-centric licensing, sales strategies, churn, and other slang around customer bullying I also do not care about, also organizational streamlining - what a loaded word! -, bla bla bla, all for the specific sacred number put up on the pedestal.

      What do we have zero talk about? Functionality, engineering.

      I seriously do not understand these people. Why are they fiddling around with selling software in a niche sensitive to global economic fluctuations instead of selling ... I don't know. Shoes? Or better yet sugary water ... no, better is vitamin water ... no, the trendiest is protein water. That is something that needs no balanced functionality and engineering, which is laborious and resource intensive to achieve - and which is in the way of reaching the sacred number put up there. Engineers are in the way of our goals. We are pulling back the cart! We are a cost center now!!

      I do not stay long.

  • At some point someone optimizes the system to a global company-wide incrementing 128 bit counter. Instead of needing a costly database lookup against a growing database the microservice just fetches the current counter, increments it by one and hands out the new value. Easy, fast O(1) operation.

    This even allows you to shard the service to provide high availability and distribute the service globally to reduce latency. Just give each instance a dedicated id range it can hand out. I'd suggest reserving some of the high bits to indicate data center id, and a couple more bits for id-generator instance within that dc.

    Wait a second, this starts to look familiar ... does Twitter still do that, or did they eventually switch?
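
    A rough sketch of that bit layout (field widths are illustrative, roughly what Snowflake chose):

        // ~Snowflake-shaped 64-bit ID: 41-bit ms timestamp | 5-bit datacenter |
        // 5-bit worker | 12-bit per-ms sequence (real Snowflake subtracts a custom epoch).
        // BigInt avoids JS float precision loss above 2^53.
        function snowflakeFactory(dcId, workerId) {
          let lastMs = -1n, seq = 0n;
          return function nextId() {
            const ms = BigInt(Date.now());
            seq = ms === lastMs ? (seq + 1n) & 0xfffn : 0n; // real impls wait out the ms on wrap
            lastMs = ms;
            return (ms << 22n) | (BigInt(dcId) << 17n) | (BigInt(workerId) << 12n) | seq;
          };
        }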

    • Define a random 128 bit key that you will never change. Use that key to encrypt 128 bit integers in sequence using AES-128, each one comes out as a, for all practical purposes, unique unpredictable ID.
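
      A minimal sketch of that in Node (the key below is a placeholder; AES-128 over exactly one 16-byte block is a permutation, so distinct counters can never collide):

          import { createCipheriv } from "node:crypto";

          const KEY = Buffer.alloc(16, 7); // placeholder - use a real random secret, fixed forever

          // Encrypt a 128-bit counter into an opaque, unique, unpredictable 16-byte ID.
          function counterToId(counter /* BigInt, e.g. 1n */) {
            const block = Buffer.alloc(16);
            block.writeBigUInt64BE(counter, 8); // low 64 bits of the 128-bit input
            const cipher = createCipheriv("aes-128-ecb", KEY, null).setAutoPadding(false);
            return Buffer.concat([cipher.update(block), cipher.final()]);
          }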

      2 replies →

    • Twitter snowflakes haven't changed. Most of the bits go to the timestamp, which I guess is a global incrementing counter as you described

    • > At some point someone optimizes the system to a global company-wide incrementing 128 bit counter.

      Some UUID versions include time, so there's a bit of a counter in that.

      1 reply →

  • I've seen similar, buried deep within a major SV tech co.

    Their process was a bit more complex because the master list of in-use UUIDs was stored in an external CMDB service run by a different department. They got a daily dump of that db, so they were able to check it when generating a "provisional" id. Only once it had been properly submitted to the CMDB did it become "confirmed".

    They had guardrails in place to prevent "provisional" ids being used in production, and a process for recycling unused "confirmed" ids. Oh, and they did regular audits which were taken very seriously by management.

    Last I heard, they were 18 months into a 6 month project to move their local database cache to Zookeeper...

  • I can believe it, and I've often wondered "can I win the UUID misfortune lottery?" I wonder if this is equally common with Microsoft's flavor, aka GUIDs.

    • GUIDs and UUIDs are effectively the same thing... the issues often come down to the means of generation and storage... where UUID has versions with specific implementation details that aren't always followed, MS has internal implementations that also aren't always followed. Also worth being aware of are COMB, sequential IDs (MS-SQL), and other serialization approaches, as well as how they affect indexes in practice.

      Alternatives include sequential number generator services, or sequence services that may be entirely sequential, etc., but these may lead to out-of-order inserts in practice.

      Also, it's generally worth considering UUIDv7, assuming your storage and indexing use the time portion at the front of the index process.

  • I get the microservice to ensure this. But 3 people dedicated to it? I guarantee you they spent their days trudging dungeons, playing CoD and ping pong.

    • You need at least 3 for this. People go on vacation, turnover, can’t risk losing that critical institutional knowledge.

  • At one of my previous jobs, there was a function `createEntityWithRandomUUID` which would basically do the same thing as a light wrapper around database inserts. If a conflict occurred, it would generate a new ID and try again, up to 5 times I think. No logging to indicate whether any conflict actually ever happened.
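
    Roughly this shape, sketched with the missing logging added (`db.insert` and `isUniqueViolation` stand in for the real data layer):

        import { randomUUID } from "node:crypto";

        // `db.insert` and `isUniqueViolation` are stand-ins for the real data layer
        async function createEntityWithRandomUUID(data, maxAttempts = 5) {
          for (let attempt = 1; attempt <= maxAttempts; attempt++) {
            const id = randomUUID();
            try {
              return await db.insert({ id, ...data });
            } catch (err) {
              if (!isUniqueViolation(err)) throw err;
              console.warn(`UUID conflict on attempt ${attempt}: ${id}`); // the log line it lacked
            }
          }
          throw new Error("exhausted UUID retry attempts");
        }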

  • I'd believe it.

    What I'd find harder to believe is that it wasn't really a table with more information than just "list of assigned UUIDs". I'd be really surprised (pleasantly!) if it was only that. I'd figure most startups would make sure that table links to customer info so that they know which customer has a specific UUID, for easy searching and crossreferencing with the main db

    • That sort of table can be quite handy when every entity in the business's data stew is identified with a UUID, and there is no way of telling just from looking at an identifier what kind of entity it is. Particularly when the business has disparate databases and/or microservices with their own sets of UUIDs.

      In such businesses, inevitably, someone will ask you to run process X for widget 8dbcd950-14c1-4877-a8b0-90c081ce033c, and that particular identifier will actually be an ID of some associated data, not the widget. You can push back and say, "That isn't a widget identifier, can you please look up the widget identifier?" It's better to be able to look that ID up in your ID ⮕ entity type lookup table, and say "the ID you provided is a widget production run ID, which produced a copy of widget a84969be-137a-41ca-97c4-515497184df9. Can you confirm this is the widget you need process X done for?", with a link to the product-facing widget page.

      (Also handy for the case where some code was intended to log an ID for one entity, but actually logs the ID for an associated entity with the wrong entity type indicated.)

  • Who has the balls to form that team? Were they disbanded?

    • I will gladly assume that this team was formed after several UUID collisions. My assumption is that they had a tremendous amount of data and enough revenue to justify all of this, at least financially. I would have re-evaluated the UUID version used, or whether adopting Snowflakes would be better at some point.

  • You would think they could automate the entire process by “creating-ahead” a certain number of UUID values in the DB, storing them in memory to reduce DB latency, and then recording the assignment to the DB once a value had been handed out.

    And the microservice could easily be crafted to only accept assignment requests from other known endpoints.

This is usually caused by an insufficiently seeded PRNG.

Are you generating the UUID in the backend, or the frontend? The frontend is fundamentally unreliable for many reasons, including deliberate collisions, so in that case you'll need to handle collisions somehow. You can still engineer around common sources of collisions, though; the specifics depend on the environment.

On the other hand making a backend reliable is feasible. What kind of environment is your code running in? Historically VMs sometimes suffered from this problem, though this should be solved nowadays. Heavily sandboxed processes might still run into this, if the RNG library uses an unsafe fallback. Forking processes or VMs can cause state duplication and thus collisions.

  • I remember hearing that Segment (the analytics company) had their entire product based around UUIDs generated in web browsers. There were collisions all over the place; the product was seemingly incapable of producing useful data at a fundamental level because of it. Hopefully they've fixed that now.

This reminds me of a passage from the book "Pro Git".

<https://git-scm.com/book/en/v2>

"Here’s an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second, each one was producing code that was the equivalent of the entire Linux kernel history (6.5 million Git objects) and pushing it into one enormous Git repository, it would take roughly 2 years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. Thus, an organic SHA-1 collision is less likely than every member of your programming team being attacked and killed by wolves in unrelated incidents on the same night."

Deliberate collisions are addressed in the following paragraph.

SHA-1 hashes are not random, so the issue of poor pseudo-random number generation doesn't apply as it does to uuidv4. And SHA-1 hashes are 160 bits, vs. 128 for uuidv4.

But I love the idea of unrelated wolf attacks.

  • Reminds me of this page with an example for understanding how many permutations there are for a shuffled deck of cards: https://czep.net/weblog/52cards.html

    > So, just how large is it? Let's try to wrap our puny human brains around the magnitude of this number with a fun little theoretical exercise. Start a timer that will count down the number of seconds from 52! to 0. We're going to see how much fun we can have before the timer counts down all the way. Shall we play a game?

    > Start by picking your favorite spot on the equator. You're going to walk around the world along the equator, but take a very leisurely pace of one step every billion years. The equatorial circumference of the Earth is 40,075,017 meters. Make sure to pack a deck of playing cards, so you can get in a few trillion hands of solitaire between steps. After you complete your round the world trip, remove one drop of water from the Pacific Ocean. Now do the same thing again: walk around the world at one billion years per step, removing one drop of water from the Pacific Ocean each time you circle the globe. The Pacific Ocean contains 707.6 million cubic kilometers of water. Continue until the ocean is empty. When it is, take one sheet of paper and place it flat on the ground. Now, fill the ocean back up and start the entire process all over again, adding a sheet of paper to the stack each time you’ve emptied the ocean. Do this until the stack of paper reaches from the Earth to the Sun. Take a glance at the timer, you will see that the three left-most digits haven’t even changed. You still have 8.063e67 more seconds to go. 1 Astronomical Unit, the distance from the Earth to the Sun, is defined as 149,597,870.691 kilometers. So, take the stack of papers down and do it all over again. One thousand times more. Unfortunately, that still won’t do it. There are still more than 5.385e67 seconds remaining. You’re just about a third of the way done.

    • Damn, I got the paper stack wet with all that ocean water. Guess I'm starting again from scratch...

  • On the other hand, it turns out that collision attacks are quite feasible, and as several people who have thoughtlessly committed the collision-attack test case files to git can attest… quite problematic

What you're talking about is so extremely rare that it's much more likely that the entire Earth is destroyed by an asteroid right this inst...

  • It is not quite that rare. I calculated it to be less common than being hit by a meteorite, and added a section about that and the Birthday Paradox to Wikipedia, to the article about UUIDs. It got removed / replaced a few years ago however. (If my source was correct, there was actually a woman hit by a meteorite, but she survived, with a leg injury.)

    If you do have a UUID collision, chances are extremely high that it's either a software bug, or glitch in the computer. It could be a cosmic ray. Cosmic rays messing with the computer memory or CPU are actually relatively common.

  • About as rare as an asteroid typing an ellipsis and clicking the add comment button.

    • That’s just a result of jounce from localized gravity effects and atmospheric pressure disturbances in the moments before impact.

      Think the ultrasonic typing hacking scene in Pantheon combined with the keyboard bouncing due to rumbling.

  • It's very common if you improperly seed, as others in the thread brought up! Or in your framing, as rare as earth getting hit if it were surrounded by a sci-fi density asteroid field.

  • Well it would be statistically even rarer for that UUID collision to happen and the earth to be destroyed by an asteroid.

  • For a single database using UUIDs, yes, it's astronomically rare. But it's quite a different thing to say that no computer system on Earth has ever experienced a UUID collision. The number of systems out there is also astronomical.

Something off on how the RNG is initialized? Lack of entropy?

If the rng is not customized it will use:

    const rnds8 = new Uint8Array(16);
    export default function rng() {
        return crypto.getRandomValues(rnds8);
    }

getRandomValues doesn't specify a minimum amount of entropy.

  • It's a near certainty that something is badly wrong with the RNG, and, yes, probably in how it's seeded.

    It's probably messing up the cryptography, too.

    • But defaults should be sane and safe. RNG isn't the sort of thing you want to be messing up. Every JS dev was taught that Math.random is not safe by default, but the crypto package is.

According to the many-worlds interpretation of quantum mechanics, there's bound to be one branch of universe where every UUID is the same. Can you imagine what those guys are thinking?

  • Not only that, there's vastly more where every UUID except one is the same, but they never got to that one because they didn't ever use them.

    Or where the first two are unique, but every following one is one of the first two.

All the comments I've been able to read are missing the elephant in the room: no high-quality entropy source can turn a "should" into a "must".

If you want something that is difficult to guess, ask the cryptography guys. But if you need something that is _guaranteed_ unique, you must build it yourself.

I fully agree. It makes no sense. Yet...

The only guess I have is that we originally generated UUIDv4s on a user's phone before sending them to the database, and the UUID generated this morning that collided was created on an Ubuntu server.

I don't fully know how UUIDv4s are generated, or what (if anything) about the machine they're generated on is part of the algorithm, but that's really the only change I can think of: it used to be generated on-device by users, and for many months now it has been generated on the server.

  • You let users generate a UUID?

    To be honest, the chance that you are doing something weird is probably higher than you experiencing a real UUID conflict.

    How did your database 'flag' that conflict?

    • user-generated (as in: on the user's phone) was only at the very early stages of this product, and we've since moved to on-server. It's a cash-register type of app, where the same invoice must not be stored twice. So we used to generate a fresh invoice_id (uuidv4) on the user's device for each new invoice, and a double-send of that would automatically be flagged server-side (same id twice). This has since moved on to a server-only mechanism.

      The database flagged it simply by having a UNIQUE key on the invoice_id column. First entry was from 2025, second entry from today.

      3 replies →

    • If it's UUIDv4 and you validate that the UUID is valid and not conflicting I don't really see the issue with user-generated UUIDs. Being able to generate unique keys in an uncoordinated manner is the main selling point of UUIDs

      Sure, it's something I'd flag in any design to spend two minutes to talk about potential security implications. But usually there aren't any

      2 replies →

    • Likely a unique index... duplicate insert on a primary or 1:1 foreign key. I am currently shimming out a process that will add a tracking id for a job service, and just had my method stub return Guid.Empty... the second time I ran my local test it blew up on the duplicate key... then I switched it to null, and it blew up again... I had neglected to exclude null from the unique index on the foreign key.

      In any case, it's easy enough to do. I mostly use UUIDv7, COMB or NEWSEQUENTIALID ids myself though.

    • The smart way would be to check if the id is in use and generate a new one... Repeat a few times if you're extremely unlucky, and bail out with an error if you have the absolute worst rng. It works for locally generated ids as well.

  • If it was two on-device generated UUIDs I could see a collision happening. There have been instances of cheap end devices not properly seeding their random number generators, leading to colliding "random" values. And cases of libraries using cheap RNGs instead of a proper cryptographic RNG, making it even worse

    But on a server that shouldn't happen, especially not in 2026 (in the past, seeding the rngs of VMs used to be a bit of an issue). Even if one UUID was badly generated, a truly random UUID statistically shouldn't collide with it. You'd need an issue in both generators

    • The library is using node:crypto, but with a phone target, that's likely shimmed with a JS implementation...

  • The UUIDv4 collision is statistically extremely unlikely. What is more likely is that both systems used the same seed. That might be just a handful of bytes, increasing the chance of collision to one in billions or even millions.

    • The shim for node:crypto in the browser is likely a weaker implementation in JS than the node implementation... you can cheat and use the browser itself to get a UUIDv4...

          function uuid4() {
            var temp_url = URL.createObjectURL(new Blob());
            var uuid = temp_url.toString();
            URL.revokeObjectURL(temp_url);
            return uuid.split(/[:\/]/g).pop().toLowerCase(); // remove prefixes
          }

  • Better check what crypto.js is actually doing in your exact setup. Weak polyfills exist...

Good moment to revisit this fun article: https://jasonfantl.com/posts/Universal-Unique-IDs/

If the entire universe were turned into a giant computer and did nothing but generate uuids until its heat death, how many bits would you need for the ID space?

Are your UUIDs generated client side or server side? If it's client side, it could be due to a crawling bot. Googlebot for example executes Javascript using deterministic "randomness".

  • Yeah, the answer almost certainly has to be this, or that they were using an old version of the package which didn't use the system RNG correctly (the current version appears to do it correctly, but I didn't dive into older versions), or their project has loaded an old broken polyfill re-implementing the JS crypto API, or they were running this on a hosting setup that does something jank like resuming the same VM snapshot with its RNG state on multiple servers. This category of explanation is many orders of magnitude more likely than a true random collision.

Gotta be a seeding issue. If it's not, and you can prove it, you're about to be a little famous probably :P

It's not happening by chance, there is a bug somewhere.

From what I skimmed, the package should just call the JS runtime's crypto.randomUUID(). I think it should always be properly seeded.

I think it is extremely unlikely that the runtime has a bug here, but who knows? What js runtime do you use?

Most plausible cause: uuid package depends on some random number generator package, which has recently been compromised in order to make “random” numbers predictable. As a result, many crypto (ssl + currency) projects are compromised due to a supplychain attack.

  • Changed 3 weeks ago:

    uuid/src/rng.ts : the random array is const. Every call returns the same shared array, and each subsequent call overwrites the random bytes you got before - so if you generated something important... good luck

    The old code used to do a slice() which creates a new copy.

    Might be unintentional. Although I have no idea how this would pass any tests; you would think to test generating 2 random numbers and check that they are not the same.
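
    For reference, the old shape was roughly:

        const rnds8 = new Uint8Array(16);
        export default function rng() {
            // slice() hands each caller a fresh copy instead of the shared buffer
            return crypto.getRandomValues(rnds8).slice();
        }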

    • Didn't actually want to write a test myself... but Claude confirmed it. Pretty concerning.

      Synchronous / serial calls:

         import rng from './rng';
         
         const a = rng();
         console.log('a after first call: ', Array.from(a));
         
         const b = rng();
         console.log('a after second call:', Array.from(a));
         console.log('b after second call:', Array.from(b));
         
         console.log('a === b (same reference)?    ', a === b);
         console.log('a equals b (same contents)?  ', a.every((v, i) => v === b[i]));
      
      

      output:

         a after first call:  [
           101, 193, 125,  19, 142,
           136, 181, 140, 209, 224,
           176, 153, 179, 248, 246,
           166
         ]
         a after second call: [
             4,  29, 48, 215, 162,  60,
            64,  23, 78, 137,   2, 186,
           230, 249, 70, 224
         ]
         b after second call: [
             4,  29, 48, 215, 162,  60,
            64,  23, 78, 137,   2, 186,
           230, 249, 70, 224
         ]
         a === b (same reference)?     true
         a equals b (same contents)?   true
         
      

      and asynchronous calls:

         import rng from './rng';
         
         async function getId(label) {
            const bytes = rng();
            console.log(`${label} captured: `, Array.from(bytes));
            await new Promise(r => setTimeout(r, 0)); // yield to the event loop
            console.log(`${label} after await:`, Array.from(bytes));
            return Array.from(bytes);
         }
         
         const [id1, id2] = await Promise.all([getId('id1'), getId('id2')]);
         console.log('---');
         console.log('final id1:', id1);
         console.log('final id2:', id2);
         console.log('identical?', id1.every((v, i) => v === id2[i]));
      
      

      output:

         id1 captured:  [
            61, 116, 151,  35, 153,
            75, 105,  15,  59, 235,
           162, 215, 224, 115,  31,
           122
         ]
         id2 captured:  [
            13,  3,  84,  28, 22, 176,
           160, 70,  67, 246,  1,  37,
            38, 61, 171,  23
         ]
         id1 after await: [
            13,  3,  84,  28, 22, 176,
           160, 70,  67, 246,  1,  37,
            38, 61, 171,  23
         ]
         id2 after await: [
            13,  3,  84,  28, 22, 176,
           160, 70,  67, 246,  1,  37,
            38, 61, 171,  23
         ]
         ---
         final id1: [
            13,  3,  84,  28, 22, 176,
           160, 70,  67, 246,  1,  37,
            38, 61, 171,  23
         ]
         final id2: [
            13,  3,  84,  28, 22, 176,
           160, 70,  67, 246,  1,  37,
            38, 61, 171,  23
         ]
         identical? true

      2 replies →

I had dup uuids causing soak test failures in a Linux based distributed system. After a long investigation it turned out there was a kernel bug (race condition) that meant two processes on an MP system reading from /dev/random at the same time could (very rarely, like 1 in a million) get the same bytes when reading the device.

I'd look at rng initialisation first.

1 in 4.72 × 10²⁸

1 in 47.3 octillion.

i'd be suspecting a race condition or some other naive mistake, otherwise i'd be stocking up on lottery tickets.

(lol at the other user posting at the same time about the lottery ticket.. great minds and all that.)
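
(for reference, that figure is just the birthday bound for the ~15,000 rows upthread - quick sketch below)

    // birthday bound: chance of any collision among n random 122-bit values
    // (a v4 UUID has 122 random bits, not 128)
    const n = 15000;
    const p = (n * (n - 1)) / 2 / 2 ** 122;
    console.log(p);     // ≈ 2.1e-29
    console.log(1 / p); // ≈ 4.7e28 -> "1 in ~47 octillion"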

  • I've always looked at it the other way - being that lucky would mean you have even less chance of something else lucky happening, good time to save your money

  • The lottery ticket part makes no sense. Statistically, if such an improbable event just happened to him, then the chance of it happening again should be even more improbable.

One of the most dangerous phrases in engineering is “statistically impossible.” At enough scale, edge cases stop being theoretical and start becoming production events.

Please, do not use b6133fd6-70fe-4fe3-bed6-8ca8fc9386cd, I checked my database and I was using it already.

  • I always thought generating UUIDs at random was insane. I now only use LLMs. The prompt is: "generate a UUID. Make sure no one ever used it anywhere in their code or database. Check your work and think hard about each step. Do not output any reasoning or plain English, only the UUID itself".

    You're welcome.

    • Actually, asking ChatGPT this query led to it giving me the UUID "550e8400-e29b-41d4-a716-446655440000", which happens to be a very common example UUID

      5 replies →

  • I knew it, we're all getting the same cheap UUIDs and the good ones are reserved for the big dogs.

    • uuid.uuidv4() recently switched to "adaptive entropy" instead of "xmax entropy" in an effort to save costs on non-premium users.

  • I'm using 16b55183-1697-496e-bc8a-854eb9aae0f3 and probably some more too. I suppose if we all post our list here, then we can all check for duplicates?

    • We should all send our already-generated UUIDs to a shared database, we could just put it on Supabase with a shared username/password posted on HN, so we can all ensure that after generating a UUIDv4 locally, it's not used by anyone else. If it's in the database, we know it's taken.

      It's a super simple mechanism, check in common worldwide UUID database, if not in there, you can use it. Perhaps if we use a START TRANSACTION, we could ensure it's not taken as we insert. But that's all easy, I'll ask Claude to wire it up, no problem.

      1 reply →

Multiple times have I blamed compilers, cosmic rays, quantum effects, or at the very least an obscure kernel bug, before realizing that I was the source of a bug.

A collision at 15,000 records is so unlikely that I would first suspect something else. Duplicate processing, replayed requests, reused objects, misleading logs, or another code path reusing the identifier.

Could you share a bit more of the surrounding code so we can check?

Is the uuid generated in the frontend or backend? If frontend, I’d wager the likeliest explanation is that the client code or request was messed with to inject a previously known uuid rather than an entropy issue.

> Duplicate UUIDs (Googlebot)

> This module may generate duplicate UUIDs when run in clients with deterministic random number generators, such as Googlebot crawlers. This can cause problems for apps that expect client-generated UUIDs to always be unique. Developers should be prepared for this and have a strategy for dealing with possible collisions, such as:

> - Check for duplicate UUIDs, fail gracefully

> - Disable write operations for Googlebot clients

https://github.com/uuidjs/uuid/commit/91805f665c38b691ac2cbd...

Glad to be reading the comments here because I also had this happen to me once and thought I must have been going insane.

Ultimately it comes down to your entropy source. I always generate and insert in a loop for this reason; if there is a collision, I handle it gracefully.

> I thought this was technically impossible

No, very technically possible... though, with good randomness, very, very unlikely.

But nothing technically prevents a UUIDv4 from generating a duplicate value.

Or there is some other explanation, e.g. somebody messed with the request manually, or with the db.

Just a stupid question, but why not append the date, even just seconds as hex? It's only a few bytes, and it would guarantee that anything unique now stays unique in the future.

  • You can just use a different UUID variant which includes timestamp data instead (e.g. v1 or v7), there are also variants which include the MAC address.

  • yeah, any sort of additional semi-random data could've helped prevent this, I'm sure. That, however, is more the idea of UUIDv1/v7, which have time built in; UUIDv4 is lots of randomness and nothing else.

A check inside the generator function is the best way I've found to avoid this. Wrap uuid or whatever random generator with a check against an ID cache. If it already exists, just run the generator recursively.
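
A minimal sketch of that wrapper, assuming an in-memory Set as the ID cache:

    import { randomUUID } from "node:crypto";

    const issued = new Set(); // stand-in ID cache - a real app would consult its store

    function freshId() {
      const id = randomUUID();
      if (issued.has(id)) return freshId(); // cache hit: recurse for a fresh one
      issued.add(id);
      return id;
    }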

> I thought this was technically impossible

Actually it's not impossible, but very very improbable.

P.S. You should play a lottery/powerball ticket

P.P.S. Whenever I use the word improbable, the https://hitchhikers.fandom.com/wiki/Infinite_Improbability_D... comes in mind

  • > P.S. You should play a lottery/powerball ticket

    Actually, they should not. That collision and winning the lottery would be even rarer.

    • Assuming they are independent events, OP is no more or less likely to win the lottery now than before running into the collision. I'd actually have more questions if you claimed the events in question are NOT independent!

> I thought this was technically impossible and would never happen,

In an eternal universe, even the most unlikely of events will happen an infinite number of times.

Would UUIDv7 be more collision proof? Hard to say: it takes time into account, but that reduces the number of entropy bits, so UUIDs generated at exactly the same time draw from a much smaller random space and could collide more easily.

Thoughts?

  • UUID v7 relies on knowing what time it is.

    Speculation: The most likely scenario for a UUID v7 collision is if UUIDs are generated during a system boot sequence, before the system clock is set to the current time. It's always 1970 somewhere. There are still 62 random bits, and optionally another 12 random bits, but those too could be problematic if the system hasn't generated enough entropy yet.

Meta, but if I had a question like this, I'd likely have asked on Twitter or Reddit first. I'll keep in mind using HN as an alternative Q&A site.

Always let your db generate uuids. On Postgres this is easy: since v18 it supports UUIDv7!

There is no need to set uuids through javascript or node imo

  • There's plenty of reasons to set a unique identifier before database save, or to want a unique identifier that doesn't have a 1-to-1 relationship with your object.

    For example, in the idempotent kafka consumer pattern we set a unique ID in the header of every kafka message at the time of message publishing. We then have our consumers do a quick check of the ID against their data store to see if they have processed the message before or not. This way there is no impact if a consumer sees the same message twice. This allows us more flexibility during rebalancing events or replaying old offsets.
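
    A sketch of the consumer side, with `store` and `process` as stand-ins (the specific Kafka client doesn't matter):

        // `store` and `process` are stand-ins; any Kafka client's message callback works
        async function onMessage(msg) {
          const id = msg.headers.messageId; // set once by the publisher
          if (await store.hasProcessed(id)) return; // duplicate delivery: drop it
          await process(msg.value);
          await store.markProcessed(id); // ideally in the same transaction as the processing
        }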

Reminds me of some code I saw running in production. Every time we added a new entry, we were pulling all the UUIDs from this table, generating a new UUID, and checking for collisions up to 10 times.

Fun thing about randomness is that these things happen. UUIDv7 is less prone to this as it includes both a time component and randomness. I've been using ULID in a few projects; it has similar attributes to UUIDv7 but a more compact string encoding.

This is like one of the hardest things for people to understand. Even the best randomness guarantees fuck all. Entropy-based IDs are collision-resistant not collision-proof.

Although incredibly rare, it's not impossible, so it's probably best to just plan for collisions. A simple retry should suffice. But I agree, I feel like something else is going on somewhere ...

It's much more likely that you hit an "impossible bug" due to a bit flip somewhere.

Imagine the database having the old UUID in a memory buffer due to a recent index scan, and a bit flip somewhere in the logic basically copying the old UUID into the memory location of the new UUID; or some buffer addresses getting swapped; or the operation which allocated the new UUID receiving a memory buffer containing the old one and, due to a bit flip, the memcpy operation being skipped - something along those lines.

Facebook wrote extensively about this, stuff like "if (false) { do_x(); }" and do_x being called. For example their critical RocksDB kv store has extensive redundant protections to defend against such "impossible bugs".

Why not have a timestamp-based UUID instead?

  • How confident are you that your machines clocks are in perfect sync? What about the risk of clock drift + correction, or hardware issues?

    • Not GP, but: not confident. How confident would I be of avoiding a (slightly lower entropy) UUID collision while also avoiding a clock desync landing on the exact same logged millisecond? Very - which is how confident I was about not encountering a UUID collision before this thread, so very++ I guess.

    • I get why sync of multiple machines matters for ordering and causality, but why is it a problem for uniqueness?

This is why I prefer to use a random base32 string over a UUID (sketch below). At least you get a proper 128 bits of entropy instead of just the 122 bits of UUIDv4. That's a 64x difference in collision probability. I always thought UUIDs were a toy, not for serious use. If you control the strings, you can even make a longer ID.

Also, numerous applications that use a unique ID per record frequently need to check for ID collisions. I know I do for a short URL generator.
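
A minimal sketch of the generator, assuming a plain RFC 4648 base32 alphabet:

    import { randomBytes } from "node:crypto";

    const ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"; // RFC 4648 base32, lowercased

    // 16 random bytes -> 26 chars; all 128 bits are random, none spent on version/variant
    function randomId(numBytes = 16) {
      let bits = 0, acc = 0, out = "";
      for (const byte of randomBytes(numBytes)) {
        acc = (acc << 8) | byte;
        bits += 8;
        while (bits >= 5) {
          out += ALPHABET[(acc >>> (bits - 5)) & 31];
          bits -= 5;
        }
      }
      if (bits > 0) out += ALPHABET[(acc << (5 - bits)) & 31];
      return out;
    }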

The chance of a UUIDv4 collision is very low, but it is never zero.

If everything is done properly, then this is very likely the one and only time anyone involved in the telling or reading of this account will ever experience this.

  • Classic gambler's fallacy!

    • Ironically, one of the few comments in this thread that isn't necessarily the gambler's fallacy!

      The chance anyone involved saw or heard about the first one was near zero; now they've seen this one, the chance they see another is still near zero (i.e. unchanged).

I lost all confidence in the infallibility of software RNG when I was working on an assignment for Data Structures a million years ago (2000?). The assignment was simple: simulate a 2D random walk where you randomly go NSEW, and run 100 cases, collecting stats as to how long it takes to return to the origin.

Super easy assignment, wrote it up probably in C++ (maybe just C?), and ran it on my linux box (probably Debian potato). It finished super quick and gave me an average of like 5.6 steps to return to the origin or something. Cool!

I copied it over to my account on the department's HP-UX machines where I was supposed to run and submit it to my instructor. Compiled fine. And then it... just ran forever. I was doing rand() % 4 or something, and the HP-SUX RNG had crazy bias in its last 2 bits, and it just walked away forever, never returning to the origin. Well crap!
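
That failure mode is easy to reproduce: power-of-two-modulus LCGs (which old rand() implementations often were) have low bits with tiny periods. A sketch with classic ANSI-C-style constants (BigInt to dodge JS float precision):

    // classic power-of-two-modulus LCG; the high bits look fine, the low bits don't
    let seed = 12345n;
    const badRand = () => (seed = (seed * 1103515245n + 12345n) & 0x7fffffffn);

    console.log(Array.from({ length: 12 }, () => Number(badRand() & 3n)));
    // -> 2, 3, 0, 1, 2, 3, 0, 1, ... the low two bits are completely deterministic,
    //    so `rand() % 4` is nowhere near uniform random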

Got an A for my writeup, though!

Almost all pseudo-random number generators are absolute garbage. They need you to believe they work because the NSA needs backdoors and to foolproof ransomware attacks. This isn't surprising at all to me.

[flagged]

  • Statistically speaking, does extremely unlikely mean impossible? If it were replicable I'd raise my eyebrow, otherwise it's fair game, no?

    As someone who enjoys the interminable complaints about RNG in the video game scene, I would never trust any human's rationalization of random outcomes.

    • > Statistically speaking, does extremely unlikely mean impossible?

      No, it means extremely unlikely. Collisions can occur, as op just found out, but the chances are so abysmally small that most people don't care.

      Any application I have worked on, I always had a pre-save check to see if the UUID was already present and generate a new one if it was. Don't think it ever triggered unless a bug was introduced somewhere but good practice anyway.

  • There could be a problem with the way the system generates entropy for randomness.

  • Question to fellow HNers, do you recognize that this comment was written by AI?

    • No, to be honest. However, as soon as it was pointed out, I checked again and it made sense.

      In my opinion, these kind of intuitions have to grow over time. And every time it’s pointed out, you learn. So please, keep pointing it out :).

      I did not. Post-conditioned by your comment and the other one, I can see some signs, such as attempting to be unusually comprehensive. The 'atoms in your liver' could be an awkward human trying to be poetic about scales.

      I still don't see idiomatic markers of AI, so that's scary if your claim is correct.

      Interestingly enough, I skipped it when scrolling through the comments the first time. I think I instinctively do that with most karma-whoring comments, no matter if manual or LLM generated.

      Only noticed it because I did another pass and saw the replies talking about "AI".

    • Yes but as a feeling (hunch?) not as something my brain analysed and reached a conclusion.

      Weird how I'm already somewhat conditioned to spot it on a intuitive level.

    • Kind of. It reads a bit too much like tech support you'd get when asking one for help.

    • when it started going on about all the different cases in the second bullet point... yeah

> We're using this: https://www.npmjs.com/package/uuid

Why? There's a built-in for this.

https://nodejs.org/api/crypto.html#cryptorandomuuidoptions
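
Same one-liner, no dependency:

    import { randomUUID } from "node:crypto";

    const document_id = randomUUID(); // RFC 9562 v4, backed by the system CSPRNG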

> I thought this was technically impossible and would never happen

I always hated this meme/mindset, because if you dig into the history of them you'll see that their original purpose was to collide. They were labels to identify messages in Apollo's distributed computing architecture. UID and later UUIDs were a reversible way to mark an intersection point between two dimensions.

Any two nodes in a distributed system would generate the same UID/UUID for the same two inputs, and a recipient of an identified message could reverse the identifier back into the original components. They were designed as labels for ephemeral messages so the two dimensions were time and hardware ID (originally Apollo serial number, later 802.3 hwaddress etc).

I think a lot of the confusion can be traced to the very earliest AEGIS implementation where the Apollo engineers started using “canned” (their term, i.e. static or well-known) UIDs to identify filesystems. Over time the popular usage of UUID fully shifted from ephemeral identifiers where duplicates were intentional toward canned identifiers where duplicates were unwanted and the two dimensions were random-and-also-random.