Comment by qazwsxedchac
19 hours ago
So a single configuration mistake in a single place wiped out external reachability of a major economy. It happened in the evening local time and should be fixable, modulo cache TTLs, by morning. This will limit the blast radius somewhat.
Still, at this level, brittle infrastructure is a political risk. The internet's famous "routing around damage" isn't quite working here. Should make for an interesting post mortem.
I am reminded of the warning that zonemaster gives about putting your domain name servers on a single AS, as is common practice for many larger providers. A lot of people do not want others to see this as a problem since a single AS is a convenient configuration for routing, but it has the downside of being a single point of failure.
Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
It's simple enough to get a secondary DNS server somewhere and put it on a $5/month VPS. I use BIND, and DNS zone replication (AXFR/IXFR) handles it.
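For anyone curious what that looks like, here is a minimal sketch of the two named.conf fragments (zone name and IPs are made up, and the directive names vary by BIND version):

```
// On the primary -- let the VPS pull the zone and notify it of changes:
zone "example.org" {
    type primary;                    // "master" before BIND 9.16
    file "zones/example.org.db";
    allow-transfer { 192.0.2.20; };  // the secondary's IP
    notify yes;                      // send NOTIFY so IXFR kicks in quickly
};

// On the $5 VPS -- pull and serve the same zone:
zone "example.org" {
    type secondary;                  // "slave" before BIND 9.16
    primaries { 192.0.2.10; };       // "masters" before BIND 9.16
    file "secondary/example.org.db";
};
```

The secondary keeps serving stale-but-valid data even if the primary disappears, which is exactly the failure mode being discussed.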
Have you ANY clue about the size of .DE's name server infrastructure?
1 reply →
As the CPU/RAM resources to run an authoritative-only slave nameserver for a few domains are extremely minimal (mine run at a unix load of 0.01), it's a very wise idea to put your ns3 or something at a totally different service provider on another continent. It costs less than a cup of coffee per month.
For a very long time, the computer club I was in operated a DNS server on a 75 MHz Pentium; after the last major hardware upgrade it had a total of 110 MB of RAM and 2 GB of disk space. It worked great, except that before the upgrade it tended to run out of RAM whenever there was a Linux kernel update, a problem we solved for good by populating all the RAM slots with the maximum the motherboard could handle, that nice 110 MB.
1 reply →
This makes sense for larger providers, but for a small/personal website there are literally zero advantages to having distributed authoritative DNS servers when the webserver is on a single host.
Ironically, DENIC still requires you to have two separate name servers with different IPs for your domain (which can be worked around by changing the IP of the registered name server afterwards, lol), a requirement that every other registry I use has dropped or never had, because enforcing such a policy at the registry level makes zero sense.
2 replies →
On Google Cloud it's always four nameservers.
It would not make any sense to run four of them if it were a single AZ. Also, they are geo-aware and route you to your nearest region.
Are you conflating autonomous system (AS) with availability zone (AZ)?
1 reply →
DNS is a centralization risk, yes. Somehow we've decided this is fine. DNSSEC isn't the only issue - your TLD's nameservers could also be offline, or censored in your country.
DNS is barely centralized. Is there an alternative global name lookup system that is less centralized without even worse downsides?
The blockchain.
The only thing a blockchain is good for is achieving decentralized consensus on what value a key points to, which is what DNS is.
An alternative way of looking at this is that acquiring domains must be somewhat expensive by definition; either you enforce it at the system level, or you make it free, but then somebody will inevitably grab all the interesting ones and re-sell them to others. A blockchain is the only way to make decentralized financial infrastructure viable.
GP said it was a risk (and it is), not that there are better alternatives. Not all risks can be eliminated easily but you should still be aware of them.
GNS is the obvious response here, in addition to the various blockchain based solutions. Nothing that enjoys widespread support or mindshare unfortunately.
Even the current centralized ICANN flavor could be substantially more resilient if it instead handed out key fingerprints and semi-permanent addresses when queried. That way it would only ever need to be used as a fallback when the previously queried information failed to resolve.
BGP, but the names in question are limited to 128 bits, of which at most 48 will be looked up, and you don't get to choose which 48 bits are assigned to you.
Normally, with caching and all, it should not have broken this completely, but that was the past...
Think about what would happen the day Let's Encrypt is broken for whatever reason, technical or political, like a hostile US administration and being located in the wrong country. Take into account the push by Let's Encrypt and the major web browsers to restrict certificate validity to short periods, like only a few days...
Let's Encrypt has to be down for days before people begin to feel the pain. DNS is very different, it breaks stuff immediately everywhere.
8 replies →
Not really? .com and .net are still up
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
Presumably, if Let's Encrypt goes down and stays down for a week, the only sites that go down with it are the ones that watch their CA go down and at no point during that week take the option of getting certs from a different CA?
5 replies →
So it seems we need something like this [1] for IT infrastructure? ;)
[1] https://outerspaceinstitute.ca/crashclock/
"The internet's famous "routing around damage" isn't quite working here."
DNS is a look up service that runs on the internet.
Internet routing of IP packets is what the internet does and that is working fine (for a given value of fine).
You remind me of someone saying "the internet is down" when they really mean "I've forgotten my wifi password".
Us non pod-people caught his drift.
What's a pod-people?
The more interesting question is, could a political adversary do this to a country on purpose, and how hard would that be?
Fail-closed protocols have introduced some brittleness. An HTTP/1.0 server from 1999 can probably still serve visitors today. An HTTPS/TLS 1.0 server from the same year couldn't.
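A quick way to see the HTTP/1.0 half of that claim, using Python's stdlib server as a stand-in for the 1999 box (the local server and the raw request are my own illustration, not anything from the thread):

```python
import http.server
import socket
import threading

# Serve the current directory on an ephemeral port; http.server speaks
# HTTP/1.0 by default, much like a server from 1999 would.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An HTTP/1.0 request needs no Host header and no TLS handshake:
# two CRLF-terminated lines over a plain TCP socket and you're done.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
    status_line = sock.makefile("rb").readline().decode()

print(status_line.strip())  # → HTTP/1.0 200 OK
server.shutdown()
```

Compare that with TLS 1.0, where modern clients refuse the handshake outright, so the same two-line exchange never even starts.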
I think I see the point you're making here and I agree.
There is designing something to be fail-closed because it needs to be secure in a physical sense (actually secure, physically protected), and then there's designing something fail-closed because it needs to be secure in an intellectual sense (gatekept, intellectually protected). While most of the internet is "open source" by nature, the complexity has been increased to the point where significant financial and technical investment must be made to even just participate. We've let the gatekeepers raise the gates so high that nobody can reach them. AI will let the gatekeepers keep raising the gates, but then even they won't be able to reach the top. Then what?
I think the point you're trying to make, put another way, is that we've compromised a lot of availability and accessibility in the name of security since the dawn of the internet. How much of that security actually benefits the internet, and how much of it hinders it? How much of it exists as a gatekeeping measure by those who can afford to write the rules?
Backwards compatibility is unfortunately not something security folk care about.
This is why I still run my blog on HTTP/1.1 only.
What no HTTP/1.0 for those of us too lazy to type the Host header into telnet???
1 reply →
You're not wrong but objecting to fail-closed in a security sensitive context is entirely missing the point.
> So a single configuration mistake in a single place wiped out external reachability of a major economy.
Real world beats sci-fi :) And isn't that why we love IT? And hate it too, because of the "people in charge"...
>So a single configuration mistake in a single place wiped out external reachability of a major economy.
And fuck nothing at all happened as a result.
Prove it? I’m sure many lifespans were lost to stress
As someone who was on call yesterday, it was a fun experience: you noticed quickly that everything .de was down, and then it was just a waiting game.
We had a short discussion about migrating to .com, but decided risk != reward as no one would know the new tld
I assume there are a couple of people working for DENIC who had a stressful night..
I have a bad feeling that the impact will be quite severe for some services, as monitoring, performance, and security services might get disrupted, and just cleaning up is a big mess.. Worst case, some OT will experience outages and/or damage. But maybe I am just overestimating the severity of this.
There is the KRITIS law (critical infrastructure law), which tries to enforce some standards to make things not as brittle.
It looks like a failed key replacement during a scheduled maintenance event. Normally this sort of thing is thoroughly tested and has multiple eyes on for detailed review and planning before changes get committed, but obviously something got missed.
Would be interesting to know how something could get missed. You'd think the system was set up so that new keys could not be published without being verified working in a staging system.
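One building block of such a staging check can be sketched in a few lines: before publishing, recompute the DS digest (RFC 4509: SHA-256 over the owner name in wire format plus the DNSKEY RDATA) from the candidate key and compare it to what the parent zone serves. The key bytes and the parent's value below are fabricated placeholders, not real .de key material:

```python
import hashlib

def wire_name(name: str) -> bytes:
    """DNS wire format of an owner name: length-prefixed labels plus root byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def ds_digest(owner: str, dnskey_rdata: bytes) -> str:
    """DS digest type 2 (SHA-256) per RFC 4509: H(owner name | DNSKEY RDATA)."""
    return hashlib.sha256(wire_name(owner.lower()) + dnskey_rdata).hexdigest()

# Hypothetical KSK RDATA: flags=257, protocol=3, algorithm=13, dummy key bytes.
new_key_rdata = bytes.fromhex("0101030d" + "aa" * 32)

# What the parent zone holds (here simulated; in reality queried from the
# parent's DS RRset before committing the rollover).
parent_ds = ds_digest("de.", new_key_rdata)

# Staging check: refuse to roll the key unless the digest computed from the
# candidate DNSKEY matches the DS record at the parent.
assert ds_digest("de.", new_key_rdata) == parent_ds, "DS mismatch: abort rollover"
print("DS digests match; safe to publish")  # prints only when the check passes
```

A real pipeline would also validate signatures against a resolver in staging, but even this one comparison catches the "published a key the parent doesn't vouch for" class of mistake.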
... wiped out external reachability of a major economy ...
internal reachability (from Germany to .de domains), too... :-)))
> The internet's famous "routing around damage"
...is only for Pentagon networks and military stuff. It's not for us normal people. (We get Cloudflare and FAANG bullshit instead.)
This is actually startlingly true.
Every FAANG company has its own fiber backbone. Why invest in the internet that everyone uses when you can invest in your own private internet and then sell that instead?
It's not like the long-haul fiber not owned by FAANG is a public utility, at least not in most places.
Traffic that goes over "the Internet" traverses some mix of your ISP's fiber, fiber belonging to some other ISP they have a deal with, fiber belonging to yet another ISP they have a deal with, and so on.
All those ISPs are being paid to provide service, they can invest in their own networks.
1 reply →