Comment by almosthere
5 days ago
But the desktop/laptop/phone/printer was the EASIEST thing to change in that 30 year history. And it would have been the easiest thing to demand a change req from a company for.
5 days ago
> But the desktop/laptop/phone/printer was the EASIEST thing to change in that 30 year history. And it would have been the easiest thing to demand a change req from a company for.
Yes: but the process would have been exactly the same whether for a hypothetical IPv4+ or the IPng/IPv6 that was decided on; pushing new code to every last corner of the IP universe.
How could it have been otherwise given the original network structures were all of fixed lengths of 32 bits?
The new code would have been vastly simpler. IPv6 is second system syndrome personified.
What we needed was the equivalent of ASCII->UTF8.
If we have IPv4 address 1.2.3.4, and the hypothetical IPv4+ adds 1.2.3.4.1.2.3.4 (or longer), how would an IPv4-only router handle 1.2.3.4.1.2.3.4? If an IPv4-only host or application gets a DNS response with 1.2.3.4.1.2.3.4, how is it supposed to use it?
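To make the point concrete, here is a minimal sketch using Python's standard `socket` module: a legacy IPv4 stack packs addresses into exactly 32 bits, so the hypothetical longer address cannot even be parsed, let alone routed. (The "IPv4+" address here is of course hypothetical; the fixed 4-byte behavior of `inet_aton` is real.)

```python
import socket

# A classic IPv4 stack stores addresses in a fixed 32-bit field.
# Anything wider does not fit the on-wire header or the in-memory
# data structures.
addr_v4 = socket.inet_aton("1.2.3.4")   # packs into exactly 4 bytes
assert len(addr_v4) == 4

# The hypothetical longer "IPv4+" address is rejected outright:
try:
    socket.inet_aton("1.2.3.4.1.2.3.4")
except OSError as exc:
    print("old stack cannot parse it:", exc)
```

An unmodified router or application has no slot to put the extra bits in, which is exactly why any wider address requires new code paths end to end.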
As I see it, the transition mechanism for some IPv4+ that 'only' has longer addresses is exactly the same as for IPv6: new code paths that use new data structures, with a gradual rollout with tech refreshes and code updates where hosts slowly go from IPv4-only to IPv4-and-IPv4+ at different rates in different organizations.
If you think it's somehow different, can you explain how? What available proposal (especially from when IPng was being decided on in the 1990s) would have allowed for a transition different from the one described above (a gradual, uncoördinated rollout)?
* https://datatracker.ietf.org/doc/html/rfc1726
* https://datatracker.ietf.org/doc/html/rfc1752
As someone with non-ASCII and non-Latin-1 characters in my surname, I can tell you that the ASCII->UTF-8 migration still hasn't finished.
If you hand UTF-8 text that actually uses anything UTF-8 added beyond ASCII to something that can only render ASCII, the text will be garbled. People can read garbled text okay if it's a few missing accented characters in a Western language, but it's no good for Japanese or Arabic.
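A quick demonstration of both halves of that claim in Python: pure-ASCII text survives an ASCII-only consumer untouched, but anything beyond ASCII is either rejected or garbled, depending on how forgiving the consumer is.

```python
# Backward compatibility: UTF-8 that stays within ASCII passes through
# an ASCII-only consumer unchanged.
assert "plain text".encode("utf-8").decode("ascii") == "plain text"

# But once the text uses anything UTF-8 added, an ASCII-only consumer
# rejects it outright...
data = "café".encode("utf-8")        # b'caf\xc3\xa9'
try:
    data.decode("ascii")
except UnicodeDecodeError as exc:
    print("ASCII-only consumer chokes:", exc)

# ...while a Latin-1 consumer "renders" the raw bytes instead: garbled.
print(data.decode("latin-1"))        # prints 'cafÃ©'
```

So the famed ASCII->UTF-8 compatibility only holds for text that never needed the new code points, which is the analogue of an "IPv4+" that only helps hosts that never needed the longer addresses.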
In networking terms, this is like a protocol which can reach ipv4 hosts only but loses packets to the ipv4+ hosts randomly depending on what it passes through. Who would adopt a networking technology that fails randomly?