Comment by mike_d
4 days ago
IPv4 isn't perfect, but it was designed to solve a specific set of problems.
IPv6 was designed by political process: go around the room to each engineer and solve for their pet peeve in order to rally enough support to move the proposal forward. Once a bunch of computer people realized how hard politics is, they swore never to do it again and made the address size so laughably large that it was "solved" once and for all.
I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
My personal preference would have been to open up class E space (240-255.*) and claw back the 6 /8s Amazon is hoarding, be smarter about allocations going forward, and make fees logarithmic based on the number of addresses you hold.
> IPv4 isn't perfect, but it was designed to solve a specific set of problems.
IPv4 was not designed as such, but as an academic exercise. It was an experiment. An experiment that "escaped the lab." This is per Vint Cerf:
* https://www.pcmag.com/news/north-america-exhausts-ipv4-addre...
And if you think there were no politics in IPv4, you're dead wrong:
* https://spectrum.ieee.org/vint-cerf-mistakes
> IPv6 was designed by political process.
Only if by "political process" you mean a bunch of people got together (physically and virtually) and debated the options and chose what they thought was best. The criteria for choosing IPng were documented:
* https://datatracker.ietf.org/doc/html/rfc1726
There were a number of proposals, and three finalists, with SIPP being chosen:
* https://datatracker.ietf.org/doc/html/rfc1752
> I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
The primary reason for IPng was >32 bits of address space. The only way to make them shorter is to have fewer bits, which completely defeats the purpose of the endeavour.
There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
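To make the point concrete, here is a minimal sketch (POSIX sockets; the hostname is just a placeholder) of the kind of application code that has to know about address families and socket-address structs, which is why a bigger address couldn't be introduced without touching it:

    /* Sketch: resolve a name and print each result's address family.
       An IPv4-only program hard-codes AF_INET and struct sockaddr_in; handling
       128-bit addresses means new types (AF_INET6, sockaddr_in6, AAAA records)
       and therefore new code in every application, not just in the kernel. */
    #include <stdio.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void) {
        struct addrinfo hints = {0}, *res, *p;
        hints.ai_family = AF_UNSPEC;           /* ask for both v4 and v6 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;

        for (p = res; p != NULL; p = p->ai_next) {
            if (p->ai_family == AF_INET)        /* 32-bit address, struct sockaddr_in, A record */
                printf("IPv4, sockaddr is %zu bytes\n", sizeof(struct sockaddr_in));
            else if (p->ai_family == AF_INET6)  /* 128-bit address, struct sockaddr_in6, AAAA record */
                printf("IPv6, sockaddr is %zu bytes\n", sizeof(struct sockaddr_in6));
        }
        freeaddrinfo(res);
        return 0;
    }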
This is a lot of basically sharpshooting, but I will address your last point:
> There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers that could have been used to flag that the first N bytes of the payload were an additional "IPv4.1" header carrying additional routing information. Packets would continue to transit existing networks, and "4.1"-capable boxes at the edges could read the extra information to make further routing decisions inside a network. It would effectively have used IPv4 as the core transport network, with each connected network (think ASN) having a handful of routed /32s.
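Purely for illustration, a hypothetical layout for such a "4.1" shim might look like the following; the struct name and fields are invented here and nothing like this was ever standardized:

    /* HYPOTHETICAL: one possible layout for the "4.1" shim described above.
       The reserved bit in the IPv4 header would signal that the first bytes of
       the payload are this extra header; legacy routers would ignore it and
       forward on the outer /32 as usual, while "4.1"-aware edge boxes would
       use the inner fields to route within the destination network. */
    #include <stdint.h>

    struct ipv4_1_shim {
        uint8_t  version;      /* shim format version */
        uint8_t  hdr_len;      /* length of this shim, in bytes */
        uint16_t reserved;
        uint32_t inner_src;    /* extended routing info, e.g. an address */
        uint32_t inner_dst;    /* inside the destination network (ASN)   */
        /* real payload (TCP/UDP/...) would follow */
    };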
Overlay networks are widely deployed and have very minor technical issues.
But that would have only addressed the numbering exhaustion issues. Engineers often get caught in the "well if I am changing this code anyway" trap.
An explicit goal of IPv6, considered as important as the address expansion, was simplification of the packet header: fewer fields, correctly aligned (unlike the IPv4 header), in order to enable faster hardware routing.
The scheme you describe fails to achieve this goal.
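For context, a rough side-by-side of the two fixed headers (layout per RFC 791 and RFC 8200; bit-level subfields are collapsed into whole bytes/words here, so this is illustrative rather than wire-exact):

    #include <stdint.h>

    struct ipv4_hdr {                  /* 20 bytes minimum, variable with options */
        uint8_t  version_ihl;          /* version (4 bits) + header length (4 bits) */
        uint8_t  tos;                  /* DSCP/ECN */
        uint16_t total_length;
        uint16_t identification;       /* fragmentation fields handled by routers */
        uint16_t flags_frag_offset;
        uint8_t  ttl;
        uint8_t  protocol;
        uint16_t header_checksum;      /* recomputed at every hop */
        uint32_t src;
        uint32_t dst;
        /* followed by optional, variable-length options */
    };

    struct ipv6_hdr {                  /* always 40 bytes, no options in the fixed header */
        uint32_t version_class_flow;   /* version (4) + traffic class (8) + flow label (20) */
        uint16_t payload_length;
        uint8_t  next_header;          /* extension headers live in the payload */
        uint8_t  hop_limit;
        uint8_t  src[16];
        uint8_t  dst[16];
    };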
> That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers […]
Great, there's an extra bit in the IPv4 packet header.
I was talking about the data structures in operating systems: are there any extra bits in the sockaddr structure to signal things to applications? If not, an entirely new struct needs to be deployed.
And that doesn't even get into having to deploy new DNS code everywhere.
But v6 did do what you're describing here?
They didn't use the reserved bit, because there's a field that's already meant for this purpose: the next protocol field. Set that to 0x29 (protocol 41) and it indicates that the payload is a v6 packet. Every v4 address has a /48 of v6 space tunnelled to it using this mechanism, and any two v4 addresses can talk v6 between them (including to the entire networks behind those addresses) via it.
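For illustration, the /48 mapping described here is the 6to4 scheme: the prefix is just 2002::/16 followed by the 32-bit v4 address. A minimal sketch:

    /* Sketch: derive the 6to4 /48 prefix (2002::/16 + 32-bit IPv4 address)
       for a given public v4 address. 192.0.2.1 -> 2002:c000:0201::/48 */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *v4 = "192.0.2.1";          /* example address */
        struct in_addr a;
        if (inet_pton(AF_INET, v4, &a) != 1)
            return 1;
        uint32_t ip = ntohl(a.s_addr);
        printf("%s -> 2002:%04x:%04x::/48\n", v4,
               (ip >> 16) & 0xffff, ip & 0xffff);
        return 0;
    }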
If doing basically exactly what you suggested isn't enough to stop you from complaining about v6's designers, how could they possibly have done any better?
IMO they should have just clawed 1 or 2 bits out of the IPv4 header for additional routing and called it good enough.
This would require new software and new ASICs on all hosts and routers and wouldn't be compatible with the old system. If you're going to cause all those things, might as well add 96 new bits instead of just 2 new bits, so you won't have the same problem again soon.
IPv6 is literally just IPv4 + longer addresses + really minor tweaks (like no checksum) + things you don't have to use (like SLAAC). Is that not what you wanted? What did you want?
And what's wrong with a newer version of a thing solving all the problems people had with it...?
There are more people than IPv4 addresses, so the pigeonhole principle says you can't give every person an IPv4 address, never mind when you add servers as well. Expanding the address space by 6% does absolutely nothing to solve anything, and I'm confused about why you think it would.