Comment by sekh60
5 days ago
The amount of ignorance in these IPv6 posts is astounding (there seems to be one every two months). It isn't hard at all. I'm just a homelabber and I have a dual-stack setup for WAN access (an HE tunnel is set up on the router, since Bell [my ISP] still doesn't give IPv6 addresses/prefixes to non-mobile users), but my OpenStack and Ceph clusters are all IPv6-only, and it's easy peasy. Plus subnetting is a heck of a lot less annoying than with IPv4, not that that was difficult either.
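For the curious, here's roughly what I mean about subnetting, sketched with Python's ipaddress module (the prefix sizes are just illustrative):

    import ipaddress

    # IPv4: carving a /24 across segments means rationing host bits (VLSM).
    v4 = ipaddress.ip_network("192.0.2.0/24")
    for net in list(v4.subnets(new_prefix=26))[:2]:
        print(net, "->", net.num_addresses - 2, "usable hosts")

    # IPv6: one /48 yields 65,536 standard /64s -- one per segment,
    # every one the same size. No host-count math at all.
    v6 = ipaddress.ip_network("2001:db8::/48")
    print(sum(1 for _ in v6.subnets(new_prefix=64)), "/64s to hand out")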
“it’s easy peasy” says the guy who demonstrably already knows, and has had time to learn, a bunch of shit 99.9% of people don’t have the background or inclination to learn.
People like you talking about IPv6 have the same vibe as someone bewildered by the fact that 99.9% of people can’t explain even the most basic equation of differential or integral calculus. That bewilderment is ignorance.
These people apparently had the time and inclination to learn a bunch of shit about IPv4, though.
"Easy" is meant in that context. The people acting like the IPv4 version is easy.
So your second paragraph doesn't fit the situation at all.
"The shit about IPv4" was easy to learn and well documented and supported.
"The shit about IPv6" is a mess of approaches that even the biggest fanboys can't agree on and are even less available on equipment used by people in prod.
IPv6 has failed to achieve wide adoption in three decades. Calling it "easy" outright denies that reality and shows the obliviousness of the people pushing it while failing to see where the actual issues are.
"I already know enough to be productive, can the rest of the world please freeze and stop changing?"
This is not even that unreasonable. Sadly, the number of IP devices in the world by now far exceeds the IPv4 address space, and other folks want to do something about that. They hope the world won't freeze but will keep moving forward.
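A rough back-of-the-envelope for that claim (the device count is a guess):

    # IPv4 gives 2^32 addresses; device estimates run well past that.
    ipv4_space = 2 ** 32                    # 4,294,967,296 addresses
    devices = 20 * 10 ** 9                  # rough assumption: ~20B devices
    print(f"devices per IPv4 address: {devices / ipv4_space:.1f}")   # ~4.7
    print(f"IPv6 /64 subnets alone: {2 ** 64:,}")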
Network engineering is a profession requiring specific education. At a high level it’s no different from calculus: you learn certain things and then you learn how to apply them in real-life situations.
It’s not hard for people who get an appropriate education and put some effort into it. Your lack of education is not my ignorance.
Dude.
The difficulty of setting IPv6 up at your house vs. the needs of a multi-homed, geographically diverse enterprise couldn't be more dissimilar.
I'd lay off the judgment a bit.
I'd gladly hear about the difficulties of setting up enterprise networks! No irony intended; listening to experts is always enlightening.
BTW a homelab often tries to imitate more complex setups, in order to be a learning experience. Can these difficulties be modelled there?
The company where I work has deployments across the world: a few hundred thousand hardware hosts (in datacenters), VMs, and containers, plus deployments in a few clouds. Also a bunch of random hardware from a multitude of vendors, multiple lines linking the datacenters and clouds, and some lines to more specific service providers that we use.
All of it is IPv4-based. IPv6 is maybe in the distant future, somewhere on the edge, in case our clients demand it.
Inside our network? Probably not going to happen.
I should have been gentler and less arrogant, yes. Sincerely though, please explain how IPv6 is in any way more difficult than a properly set up IPv4 enterprise network. What tools are missing?
I left my job as an NE/architect over 15 years ago, but the showstopper back then revolved around how to handle routing with firewalling. Firewalling was the biggest roadblock, due to needing traffic symmetry. I'm doing my best to remember why we stopped at just providing v6 at the edge for site-specific Internet-hosted services and never pushed it further.
Mind you, our team discussed this numerous times over a few years and never came up with a solution that didn't look like it would require us to completely fork-lift what we were doing. The whole team was FOR getting us to v6, so there was no dogmatic opposition.
Consider this:
25k employee company. Four main datacenter hubs spread out across the USA with 200 remote offices evenly dual-homed into any two of the four.
All four of the DCs had multi-ISP Internet access, advertising their separate v4 blocks and hosting Internet services. The default route was redistributed into the IGP from only two locations, sites A and B; i.e., two of the four DCs were egress points for Internet traffic from the user population and all non-Internet-facing servers. IGP metrics were gently massaged so that both sites were used fairly equally.
All outbound traffic flowed naturally out of the eastern or western site based on IGP metrics. This afforded us tertiary failover for outbound traffic in the event that both of the Internet links into one of the two egress sites were down. E.g., if both of site A's links (say, Level 3 and AT&T) were down, the route through site A was lost, and all the egress traffic was then routed out site B (and vice versa). This worked well with IPv4 because we used NAT to masquerade all the internal v4 space as site X's public egress block, so all the return traffic was routed appropriately.
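A toy sketch of that mechanism (sites, blocks, and metrics all invented):

    # Each egress DC NATs internal sources into its own public /24, so
    # replies follow whichever block got stamped on the packet and re-enter
    # through the same firewall that watched the flow leave.
    SITES = {
        "A": {"public_block": "198.51.100.0/24", "igp_metric": 10, "up": True},
        "B": {"public_block": "203.0.113.0/24",  "igp_metric": 20, "up": True},
    }

    def egress_site():
        # Default route is heard from every live egress; best metric wins.
        live = {s: d for s, d in SITES.items() if d["up"]}
        return min(live, key=lambda s: live[s]["igp_metric"])

    def nat_block():
        # Outbound NAT masquerades the source into the egress site's block.
        return SITES[egress_site()]["public_block"]

    def return_site(block):
        # The Internet routes replies to whoever advertises that block.
        return next(s for s, d in SITES.items() if d["public_block"] == block)

    assert return_site(nat_block()) == egress_site()  # symmetric by construction

    SITES["A"]["up"] = False       # both of site A's ISP links go down
    assert egress_site() == "B"    # egress -- and the NAT block -- fail over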
BGP advertisements were either AS-path prepended or supernetted (I don't remember which) such that if site A went down, site B, C, or D would get its traffic and tunnel it via GRE to the appropriate DC hub's external segment.
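The inbound side, roughly, as toy best-path selection by AS-path length (ASNs invented):

    # Shortest AS path wins. A advertises its block plainly; the other
    # sites prepend, so they only attract A's inbound traffic once A's
    # own advertisement is gone (and then GRE it back to A's segment).
    ads = {
        "A": {"prefix": "198.51.100.0/24", "as_path": [64512]},
        "B": {"prefix": "198.51.100.0/24", "as_path": [64512] * 4},  # prepended
    }

    def ingress(ads):
        return min(ads, key=lambda s: len(ads[s]["as_path"]))

    assert ingress(ads) == "A"
    del ads["A"]                 # site A's advertisement is withdrawn
    assert ingress(ads) == "B"   # B attracts the traffic, tunnels it onward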
The difficulty was that traffic absolutely had to flow symmetrically because of the firewalls in place, and it easily could for v4 because NAT was happening at every edge.
With v6 it just didn't seem like there was any way to achieve the same routing architecture / flexibility, particularly with multi-homing into geographically disparate sites.
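The crux, in the same invented toy terms: without NAT, the host's global address pins the return path regardless of which egress the IGP picked.

    import ipaddress

    # v6, no NAT: the host keeps a global address from ONE site's block,
    # so replies are routed by addressing, not by the chosen egress.
    SITES_V6 = {
        "A": {"block": "2001:db8:a::/48", "igp_metric": 20, "up": True},
        "B": {"block": "2001:db8:b::/48", "igp_metric": 10, "up": True},
    }

    def egress_v6():
        live = {s: d for s, d in SITES_V6.items() if d["up"]}
        return min(live, key=lambda s: live[s]["igp_metric"])

    def return_v6(src):
        addr = ipaddress.ip_address(src)
        return next(s for s, d in SITES_V6.items()
                    if addr in ipaddress.ip_network(d["block"]))

    host = "2001:db8:a::42"          # numbered out of site A's block
    assert egress_v6() == "B"        # IGP says: leave through B's firewall
    assert return_v6(host) == "A"    # addressing says: replies enter via A
    # A's firewall sees a reply for a flow it never saw leave: dropped.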
I'm not sure anymore where we landed, but I remember it being effectively insurmountable. I don't think it was difficult for Internet-hosted services, but the effort seemed absolutely not worth it for everything on the inside of the network.