
Comment by tguvot

7 days ago

The company where I work has deployments across the world: a few hundred thousand hardware hosts (in datacenters), VMs and containers, plus deployments in a few clouds. Also a bunch of random hardware from a multitude of vendors, multiple lines linking datacenters and clouds, and some lines to the more specific service providers that we use.

All of it is IPv4 based. IPv6 is maybe in the distant future, somewhere on the edge, in case our clients demand it.

Inside our network? Probably not going to happen.

I find this completely fine. I don't see much (if any) upside in migrating a large existing network to anything new at all, as long as the currently deployed IPv4 is an adequate solution inside it (and it obviously is).

Public-facing parts can (and should) support IPv6, and I don't see much trouble exposing your public HTTP servers (and maybe mail servers) over IPv6: most likely your hosting / cloud providers already do 99.9% of it out of the box (unless it's AWS, haha), and the rare remaining cases, like, I don't know, a custom VPN gateway, are not such a big deal to handle.
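For what it's worth, the mechanics of the public edge are usually small: publish an AAAA record and make the listener dual-stack. A minimal Python sketch, assuming the OS permits dual-stack sockets; the port number is arbitrary for the example:

    import socket

    # One IPv6 socket that also accepts IPv4 clients
    # (they show up as mapped addresses like ::ffff:203.0.113.7).
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept IPv4 too
    srv.bind(("::", 8080))
    srv.listen()
    print("listening on [::]:8080 for both IPv4 and IPv6 clients")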

  • The vast majority of our stuff is self-hosted. HTTP servers are, in a way, the least important channel for our clients to work with us.

    The amount of work to support IPv6 on the edge would be very big, and as far as I know none of our clients have asked for it.

    The only time we discussed it was when we were getting FedRAMP certification, because of this: https://www.gsa.gov/directives-library/internet-protocol-ver...

I ran the network team at an organization with hundreds of thousands of hardware hosts in tens-of-megawatts data centers, millions of VMs and containers, links between data centers, and links to ISPs and IXes. We ran out of RFC 1918 addresses around 2011-2012 and went IPv6-only. IPv4 is delivered as a service, via an overlay network, to the nodes that still require it. We intentionally simplified the network design by doing so.

This is neither hard nor expensive.
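To make "IPv4 as a service" concrete: the comment above describes an overlay, but a common off-the-shelf shape of the same idea is NAT64/DNS64 (or 464XLAT), where IPv6-only hosts reach IPv4 destinations through a translator and DNS64 synthesizes addresses from the RFC 6052 well-known prefix. A sketch of that synthesis step in Python, not a claim about how any particular network implements it:

    import ipaddress

    # RFC 6052 well-known NAT64 prefix; a real deployment may use its own /96.
    NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

    def synthesize(ipv4: str) -> ipaddress.IPv6Address:
        # Embed the 32-bit IPv4 address in the low bits of the /96 prefix,
        # which is what DNS64 does when a name only has an A record.
        return ipaddress.IPv6Address(
            int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(ipv4))
        )

    print(synthesize("192.0.2.10"))  # 64:ff9b::c000:20a

An IPv6-only host then sends traffic to the synthesized address, and the translator at the edge turns it into plain IPv4.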