v6 has nearly 3 billion users. How is that abysmal?
We've never done anything like the v4->v6 migration at this scale before. It's not clear what the par time for something like this is. Maybe 30 years is a normal amount of time for it to take?
HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
3 billion people sorta use IPv6, but not really, because almost all of those also rely on IPv4, and almost no host can really go IPv6-only. Meanwhile, many sites are HTTPS-only.
And because it's a layer 7 thing, it only required updating the server and client software, not the OS... and only the client and server endpoints, not the routers in between... and because we have only two browser vendors who between them can push the ecosystem around, and maybe half a dozen relevant web server daemons.
Layer 3 of the Internet is the one that requires support in all software and on all routers in the network path, and those are run by millions of people in hundreds of countries with no central entity that can force them to do anything.
HTTP->HTTPS is only similar in terms of number of users, not in terms of the deployment itself. The network effects for IP are much stronger than for HTTP.
They don't "sorta" use v6, they're properly using it, and you can certainly go v6-only. I'm posting from a machine with no v4. Also, if you want to go there: HTTPS was released before IPv6, and yet still no browser is HTTPS only, despite how much easier it is to deploy it.
> HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
HTTP->HTTPS is not equivalent in any way. The payloads in HTTP and HTTPS are exactly the same; HTTPS simply adds a wrapper (e.g., stunnel can be used with an HTTP-only web server). Further, HTTP(S) lives only on the endpoints, and specifically in the application layer: your OS, switch, firewall, CPE, ISP router(s), etc., can all be left alone.
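To make the "wrapper" point concrete, here's a minimal, illustrative stunnel configuration; the service name, ports, and certificate path are placeholders, not anything from the original comment. The plain-HTTP backend needs no changes at all:

```ini
; Illustrative stunnel.conf: terminate TLS and hand the
; unmodified HTTP payload to a plain-HTTP backend.
; Certificate path and ports are placeholders.
[https-front]
accept  = 443            ; TLS side, faces clients
connect = 127.0.0.1:80   ; existing HTTP server, untouched
cert    = /etc/stunnel/example.pem
```

This is exactly why the transition could happen server by server with no coordination: the wrapper sits beside the application rather than inside it.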
If you're not running a web browser or web server (e.g., you're running an FTP, SMTP, DNS, or database server), then there are zero changes that need to be made to any code on the system. This is not true for changing the number of bits in the address space: every piece of code that calls socket(), bind(), connect(), etc., has to be touched.
Whereas the primary purpose of IPng was to expand the address space, which means your OS, switch, firewall, CPE, ISP router(s), etc., all have to be modified to handle more address bits in the Layer 3 protocol data unit.
Plus stuff at the application layer like DNS (since A records are 32-bit only, an entirely new record type was needed): entirely new library functions had to be created (e.g., gethostbyname() replaced by getaddrinfo()).
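The shape of that API change can be sketched directly: gethostbyname() can only hand back 32-bit A-record addresses, while getaddrinfo() is address-family-agnostic. Using numeric literals (AI_NUMERICHOST, so no actual DNS lookup happens) keeps the sketch self-contained; the two addresses are just illustrative:

```c
/* Sketch: the replacement API is family-agnostic. The same
   getaddrinfo() call parses a 4-byte and a 16-byte address;
   gethostbyname() could only ever return the former. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;      /* v4 or v6, caller doesn't care */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_NUMERICHOST; /* parse literals, skip DNS */

    const char *addrs[] = { "1.1.1.1", "2606:4700:4700::1111" };
    for (int i = 0; i < 2; i++) {
        if (getaddrinfo(addrs[i], "80", &hints, &res) != 0) {
            printf("%s: failed\n", addrs[i]);
            continue;
        }
        printf("%s -> family %s\n", addrs[i],
               res->ai_family == AF_INET ? "AF_INET" : "AF_INET6");
        freeaddrinfo(res);
    }
    return 0;
}
```

Code written against getaddrinfo() in the 1990s works unchanged today for both families, which is exactly what the old API could not offer.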
I hear people say the IETF/IP Wizards of the 1990s should have "just" picked an IPng that simply had a larger address space, but they don't explain how IPv4 and a hypothetical IPv4+ would actually interoperate. Instead of 1.1.1.1, a packet comes in with 1.1.1.1.1.1.1.1: how would a non-IPv4+ router know what to do with that? How would non-updated routers and firewalls be able to handle longer addresses? How would non-updated DNS code be able to handle new record types with >32 bits?