Comment by lisper
4 days ago
This analysis is not quite fair. It takes locality (i.e. the speed of light) into account when designing UUID schemes, but not when computing the odds of a collision. Collisions only matter if the colliding UUIDs actually come into causal contact with each other after being generated, so just as you have to take locality into account when designing UUID trees, you have to take it into account when computing the odds of an actual local collision. A naive application of the birthday paradox doesn't work here, because it ignores locality, which means a fair calculation of the required size of a random UUID is going to come out a lot smaller than the ~800 bits the article arrives at. I haven't done the math, but I'd be surprised if the actual answer is more than 256 bits.
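For reference, the un-localized birthday math looks roughly like this (a quick Python sketch; the function names and probability targets are just illustrative, not from the article):

    import math

    def collision_probability(n_ids: int, bits: int) -> float:
        # Birthday approximation: p ~= 1 - exp(-n^2 / 2^(bits+1))
        return 1.0 - math.exp(-(n_ids ** 2) / 2 ** (bits + 1))

    def bits_needed(n_ids: int, max_p: float) -> int:
        # Smallest ID width keeping collision odds below max_p,
        # from the same approximation solved for the bit count.
        return math.ceil(math.log2(n_ids ** 2 / (2 * -math.log(1 - max_p))))

    # Classic result: 2^64 random 128-bit IDs give ~39% collision odds.
    print(collision_probability(2 ** 64, 128))  # ~0.39
    # Width needed to keep a trillion IDs under one-in-a-billion odds:
    print(bits_needed(10 ** 12, 1e-9))          # 109

Shrink n_ids to only the IDs that can ever share a light cone and the required width drops fast, which is the point above.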
(Gotta say here that I love HN. It's one of the very few places where a comment that geeky and pedantic can nonetheless be on point. :-)
Reminds me of a time many years ago when I received a whole case of Intel NICs all with the same MAC address.
It was an interesting couple of days before we figured it out.
How does that happen? Was it an OEM bulk kind of deal where you were expected to write a new MAC for each NIC when deploying them?
There's a fun hypothesis I read somewhere; it goes something like this:
As the universe expands, the gaps between galaxies widen until they start "disappearing" from one another, since no information can travel between them anymore. Therefore, if we assume intelligent lifeforms exist out there, it is likely they will slowly converge on the place in the universe with the highest mass density, for survival. IIRC we even know approximately where that is.
This means a sort of "grand meeting of advanced alien cultures" before the heat death. Which in turn means that previously uncollided UUIDs may start to collide.
Those damned Vogons thrashing all our stats with their gazillion documents. Why do they have a UUID for each xml tag??
It's counterintuitive, but information can still travel between places so distant that the expansion between them is faster than the speed of light. It's just extremely slow (so I still vote for going to the party at the highest-density place).
We do see light from galaxies that are receding from us faster than c. At first the photons headed our way are moving away from us, but as the universe expands they eventually find themselves in a region of space that is no longer receding faster than c, and they start approaching.
That's not exactly it. Light gets redshifted instead of slowing down, because light will be measured to be the same speed in all frames of reference. So even though we can't actually observe it yet, light traveling towards us still moves at c.
It's a different story entirely for matter. Causal and reachable are two different things.
Regardless, such extreme redshifting would make communication virtually impossible - but maybe the folks at Blargon 5 have that figured out.
I think I missed something: how does galaxies getting further away (divergence) imply that intelligent species will converge anywhere? It isn't like one galaxy getting out of range of another on the other side of the universe is going to affect things in a meaningful way…
A galaxy has enough resources to be self-reliant, there’s no need for a species to escape one that is getting too far away from another one.
You'll run out of resources eventually. Moving to the place with the most mass gives you the most time before you run out.
Well eventually there are no galaxies just a bunch of cosmic rays. Some clusters of matter will last longer.
I think for this to work, either life would have to be plentiful near the end, or you'd need FTL travel.
Social aspect. There's no need, but it's more fun to spend the end of the universe with other intelligences than each in its own place.
Assuming these are advanced enough aliens, they'll also be bringing with them all the mass they can, to accentuate the effect? I'm imagining things like Niven's ringworld star propulsion.
I think I sense a strange Battle Royale type game…
You must consider both time and locality.
From now until protons decay and matter ceases to exist is only 10^56 nanoseconds.
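(For scale: 10^56 nanoseconds is 10^47 seconds, or roughly 3 × 10^39 years.)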
If protons decay. There isn't really any reason to believe they're not stable.
And recent DESI data suggest that dark energy is not constant and that the universe will experience a big crunch at a little more than double its current age, for a total lifespan of about 33 billion years, so there's no need to get wild with orders of magnitude on years into the future. Infinite expansion toward heat death over 10^100 years is looking less likely; 10^11 years should be plenty.
https://www.sciencedaily.com/releases/2026/02/260215225537.h...
Protons can decay because the distinction between matter and energy isn't permanent.
Two quarks inside the proton interact via a massive messenger particle. The exchange flips their identities, turning the proton into a positron and a neutral pion. The pion then promptly decays into gamma rays.
Proton decayed!
Protons (and mass and energy) could also potentially be created. If this happens, the heat death could be avoided.
Conservation of mass and energy is an empirical observation; there is no theoretical basis for it. We just don't know of any process that violates it, but that doesn't mean none exists.
All of physics is "just" based on empirical observation. It's still a pretty good tool for prediction.
Conservation laws result from continuous symmetries in the laws of physics, as proven by Noether's theorem.
That's such an odd way to use units. Why would you do 10^56 * 10^-9 seconds?
This was my thought. Nanoseconds are an eternity. You want to be using Planck units for your worst-case analysis.
Nanoseconds are a natural unit for processors operating around a GHz, since a nanosecond is roughly the length of a clock cycle.
If a CPU takes 4 cycles to generate a UUID and runs at 4 GHz, it churns out one every nanosecond.
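Back of the envelope: at that rate, a single CPU running the full 10^56 nanoseconds mints 10^56 IDs, which is about 2^186, so by the birthday math even one such generator wants an ID width somewhere north of 2 × 186 = 372 bits just to keep collisions among its own output unlikely.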
I got a big laugh at the "only" part of that. I do have a sincere question about that number, though: isn't time relative? How would we know that number to be true or consistent? My incredibly naive assumption would be that with less matter, time moves faster, sort of accelerating; so as matter "evaporates", the process accelerates and converges on that number (or close to it)?
Times for things like "age of the universe" are usually given as "cosmic time" for this reason. If it's about a specific object (e.g. "how long until a day on Earth lasts 25 hours") it's usually given in "proper time" for that object. Other observers/reference frames may perceive time differently, but in the normal relativistic sense rather than a "it all needs to wind itself back up to be equal in the end" sense.
The local reference frame (which is what matters for proton decay) doesn't see the outside world moving slower or faster to any significant degree based on how much mass surrounds it, until you start packing a lot of mass very close around.
If we think of the many worlds interpretation, how many universes will we be making every time we assign a CCUID to something?
> many worlds interpretation
These are only namespaces. Many worlds can have all the same (many) random numbers and they will never conflict with each other!
We don't "make" universes in the MWI. The universal wavefunction evolves to include all reachable quantum states. It's deterministic, because it encompasses all allowed possibilities.
In that interpretation the total number of worlds does not change.
Proton decay is hypothetical.
So is the need for cosmologically unique IDs. We're having fun.
This is the right critique. The whole article is a fun thought experiment but it massively overestimates the problem by ignoring causality. In practice, UUID collisions only matter within systems that actually talk to each other, and those systems are bounded by light cones. 128 bits is already overkill for anything humans will build in the next thousand years. 256 bits is overkill for anything that could physically exist in this universe.
Ah but if we are considering near-infinitesimal probabilities, we should metagame and consider the very low probability that our understanding of cosmology is flawed and light cones aren’t actually a limiting factor on causal contact.
Sorry, your laptop was produced before FTL was invented, so its MAC address is only recognized in the Milky Way sector.
If we allow FTL information exchange, don't we run into the possibility that the FTL-accessible universe is infinite, so unique IDs are fundamentally not possible? Physics doesn't really do much with this, because the observable universe is all that 'exists' in a Russell's-teapot sense.
Would this take into account IDs generated by objects moving at relativistic speeds? It would be a right pain to travel for a year to another planet, arrive 10,000 years late, and have a bunch of ID collisions.
I have to confess I have not actually done the math.
Oh no! We should immediately commence work on a new UUID version that addresses this use case.
Maybe the definitions are shifting, but in my experience “on point” is typically an endorsement in the area of “really/precisely good” — so I think what you mean is “on topic” or similar.
Pedantry ftw.
:-)
Don't forget that today's observable universe includes places that will never be able to see us because of the expansion of the universe being faster than the speed of light. There's a smaller sphere for the portion of the universe that we can influence.
Hanson's Grabby Aliens model actually fits really well here if you're looking for some math to build on.
The answer is 42. I have it from a good source!