Comment by PunchyHamster

5 days ago

> Future proofing it by jumping straight to 128 bits instead of 64. 64 would have been fine. Even with a load factor of 1:1000 by assigning semantics to ranges of IP addresses, 64 bit addressing is still enough addresses for 10 million devices per person.
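(Back-of-the-envelope on that figure, assuming roughly 8 billion people, which is my assumption rather than the quote's; the 1:1000 load factor is from the quote. It comes out to a couple of million addresses per person, so millions per person either way:)

```python
# Rough sanity check of the "64 bits would have been fine" claim.
# Assumption (mine): ~8 billion people; the 1:1000 load factor is from the quote.
total = 2 ** 64                       # ~1.8e19 addresses
usable = total // 1000                # 1:1000 load factor -> ~1.8e16
per_person = usable // 8_000_000_000
print(f"{per_person:,} addresses per person")  # 2,305,843 -> millions per person
```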

128-bit addressing is about the least of the adoption issues, and in practice a basically meaningless difference vs 64.

But it shows weird priorities that they decided on 128 bits and then immediately spent half of it on the host part, just to get a "globally unique" host part, which isn't really all that useful a characteristic of the protocol.

IP addresses were always meant to be globally reachable. Of course, NAT has corrupted this - which is why NAT is a scourge.

> just to get a "globally unique" host part, which isn't really all that useful a characteristic of the protocol.

That's the essential ingredient of self-configured addresses in IPv6 (SLAAC), which does away with DHCP in most cases. DHCP is a stateful system that has to track every device's address individually; with IPv6 you don't need that.
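As a concrete illustration of what that buys you: with a router-advertised /64 prefix, a host can compose its own address locally, classically by deriving an interface ID from its MAC via modified EUI-64, with no server tracking leases. A minimal sketch (the prefix and MAC below are made-up documentation values; modern stacks often prefer randomized or stable-privacy interface IDs per RFC 4941/7217 over MAC-derived ones):

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Modified EUI-64: split the MAC in half, insert ff:fe, flip the U/L bit."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the universal/local bit
    return int.from_bytes(bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:]), "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix from a Router Advertisement with the interface ID."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC assumes a 64-bit prefix"
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

# Documentation prefix + a made-up MAC:
print(slaac_address("2001:db8:1::/64", "00:11:22:33:44:55"))
# -> 2001:db8:1:0:211:22ff:fe33:4455
```

No DHCP server, no lease database: the router only advertises the prefix, and duplicate address detection catches the (vanishingly unlikely) collisions.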

  • And yet DHCPv6 is pretty much the standard, because you need to push other things to the client.

    Need I remind you that the option to push a DNS server (which is a pretty fucking important option!) was only added to the IPv6 standard in 2007? (That's the RDNSS RA option; there's a byte-level sketch of it at the end of this comment.)

    Like, someone decided "yeah, have that magical stateless autoconfig thing" and didn't figure out that clients also need the basics DHCP provides: DNS, plus less common but still VERY useful things like the PXE stuff, an NTP server, routes, and a dozen others? (There are security implications too, but DHCP wasn't great there either.)

    IPv6 in its original form was a joke, and stateless configuration is a more or less pointless exercise; link-local addresses are about the only exception where stateless makes sense.
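    For reference, the RA-based DNS mechanism in question (the RDNSS option from RFC 5006, later RFC 8106) is just another Neighbor Discovery option tacked onto Router Advertisements. A minimal byte-level sketch, built by hand in Python with a documentation-prefix placeholder address:

    ```python
    import socket
    import struct

    def rdnss_option(dns_servers, lifetime=3600):
        """Raw bytes of the ND RDNSS option (type 25): type, length in
        8-octet units, reserved, lifetime in seconds, then 16 bytes per server."""
        addrs = b"".join(socket.inet_pton(socket.AF_INET6, a) for a in dns_servers)
        length = 1 + 2 * len(dns_servers)   # 8-byte header + 16 bytes per address
        return struct.pack("!BBHI", 25, length, 0, lifetime) + addrs

    opt = rdnss_option(["2001:db8::53"])
    print(opt.hex())  # type=0x19 (25), len=3, lifetime=0x0e10 (3600 s), then the address
    ```

    Nobody builds this by hand in practice; router daemons like radvd advertise it for you. The point is just that DNS-in-RA is an 8-byte header plus an address, and it still took until 2007 to standardize.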

    • The NTP server thing was especially egregious given that the transition to everything-under-TLS was already underway, and certificate validation needs a roughly correct clock.

I kinda think we could fix/save IPv6 by taking away almost everything but the 128-bit address extension.

  • The truth is nothing needed fixing, or we wouldn't still be in this position 30 years later.

    • Disagree. APNIC got screwed on the IP allocation side: it's the RIR serving the largest population, but it has a tiny amount of address space compared to ARIN. India and China have billions of people and not enough v4 space for them. If we went back and reallocated legacy blocks, maybe you could make the system work, but that would be a big fight with the legacy networks.

      v6 restores the end-to-end principle and reduces network complexity once you go v6-only. No more NAT traversal problems, no need to deal with STUN/TURN, and small networks get even simpler with no need for a stateful DHCP server.

      Sticking with only v4 space also artificially increases the cost of starting new networks and services, because you have to buy space from the entrenched IP space owners (unless we change the rules and start charging fees to legacy networks and reclaiming unused or poorly utilized space). Those higher barriers to entry hurt innovation and competition.

      So v6 solves several technical and policy issues with the Internet, and maybe that's exactly why we haven't seen speedy adoption: people have networks that exist today, some have paid a lot of money for IPv4 space, and they want to make the most of that investment.

      They don't really have an incentive to deploy v6 unless things start to break without it.

      I don't think v6 has been a failure: half of all internet traffic runs on it! It powers the major cell phone networks, and large tech companies like Meta have even gone v6-only in their data centers.
