Comment by 1dom

1 month ago

> an IPv4 NAT and an IPv6 default deny rule are equivalent in security: both uphold the invariant

Yes, you're correct, on some level, they are equivalent: in both cases, packets don't reach the target machine. That is one of the few levels on which they are equivalent.

> There's no basis for claiming the two schemes differ in the level of security provided.

Yes there is: this is basic secure architecture and secure-by-design principles. If you understand these principles, you will understand that the equivalence level you're talking about above leaves space for other security issues to creep in.

> you can configure an IPv6 firewall to pass traffic and you can configure a DMZ host or port forwarding in the NAT case.

IPv4 & NAT config: it takes effort to accidentally expose things behind it. It's not even physically possible to fully expose all the ports of more than one host behind it, assuming it has only one public IP. For IPv6 and firewalls, you've just pointed out how easy it is to configure away this security property.
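For concreteness, here's a minimal nftables sketch of the two schemes being compared. Interface names and addresses are placeholders, not taken from the thread; this is an illustration of the shape of each config, not anyone's actual ruleset:

```shell
# IPv6 "default deny": drop unsolicited inbound, allow reply traffic.
nft add table ip6 filter
nft add chain ip6 filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule ip6 filter input ct state established,related accept

# IPv4 NAT: masquerade outbound traffic behind the single public IP.
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100; }'
nft add rule ip nat postrouting oifname "wan0" masquerade

# The "expose a host" step in the NAT case is one explicit DNAT rule
# (hypothetical internal host 192.168.1.10):
nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'
nft add rule ip nat prerouting iifname "wan0" tcp dport 2222 dnat to 192.168.1.10:22
```

In both cases the resting state is the same invariant (no unsolicited inbound packets reach internal hosts), and in both cases a single extra rule punches a hole in it; the argument in this thread is about how likely each hole is to appear by accident, not whether it can exist.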

I'm not arguing that IPv6 is not secure because it lacks NAT. My point was that this entire discussion is silly engagement bait: there's no clear right answer, but it's an easy topic for dogma and engagement. A holy-war topic like NAT, IPv6 and security is prime for that. The author and submitter muddy the waters further by - probably not intentionally - choosing a strawman submission title.

> Yes there is, this is basic secure architecture and secure-by-design principles

The only principles at work here are the ones of superstition and magical thinking. The existence of a "disable security" button doesn't weaken the theoretical security properties of a system when that button isn't pressed, and NAT systems and pure firewalls alike have this button.

If anything, NAT systems are sometimes worse, due to things like UPnP automating the button-pushing.

Look: I just don't accept the premise that making a system more flexible makes it less secure. If your threat model includes user error, then you have to be against user freedom to achieve security guarantees.

The amount of "effort" it takes to disable security measures has no bearing on the security of the system when properly configured, and how easy you make it to disable safeguards is a matter of UX design and the tolerance your users have for your paternalism, not something that we should put in a threat model.

  • I think most of the comments on this thread crystallise two different conceptions of security: the intended one and the effective one.

    The second one is messy to measure: it requires gathering statistics on how often NAT saved the day by accident, which is hard if not impossible.

    I personally think that statistics always win, even if they are unexplainable. My bet (zero proof) is, IPv4 is statistically (maybe by accident) more secure than IPv6, just because of NAT.

    I have seen so many horrors in terms of multiple NATs I will always prefer IPv6, also because I think the benefits outweigh by far the difference in _effective_ security.

    Summary: yes, IPv4 is more secure, but the difference is so marginal that IPv6 is still way better. Security is not the only metric in my world and theoretical discussions obsessing about a single metric are pointless.

    • I see the split too. I'll add that each camp is frustrated and feels the other is missing the point and would make information security worse if its worldview won.

      You can do some empirical analysis. Someone downthread linked to a paper claiming to be able to reach a few million vulnerable devices over IPv6 and not IPv4. This kind of analysis isn't dispositive, though, because there are all sorts of second-order effects and underlying philosophical differences. Facts seldom change minds when you can build multiple competing true stories around those facts.

      I'll call one camp the "veterans". They see security mostly as a matter of increasing the costs incurred by attackers relative to defenders, looking at the system holistically. Anything that increases attacker workload is good, even if it's an unintentional side effect of something else or interacts with software architecture in a cumbersome way. It's vibes-based: whether a given intervention is "worth it" is the output of a learned function that lives in the stomach of a seasoned security researcher who's seen shit.

      The other camp I'll call the "philosophers". (My camp.) The perspective here is to build security like Euclid's Elements, proving one invariant at a time, using earlier proofs to make progressively more capable systems, each proven secure against a class of threat so long as enumerated assumptions hold. They read security as an integral part of system architecture. Security comes from simplicity, as complexity and corner cases are the enemy of assurance.

      The veterans see the philosophers as incoherent. There's no such thing as a safe system: only one not yet compromised. You can't solve problems for good anyway, so there's no use trying to come up with axioms. Throw away the damn compass and straightedge and just draw a siege map in the dirt with a stick.

      The philosophers see the veterans as short-term-oriented defeatists who make it harder to reach levels of provable security that can solve problems once and for all so we don't have to worry about them anymore. You have to approach complex systems piece by piece or you can't understand them at all -- and worse, you'll do things in the name of security gut-feels that compromise other goals without a payoff that feels worth it to them. They say, "Without my compass and straightedge, how can I design my star fort with firing lines I know cover every possible approach?"

      The divide shows up in various projects. TLS is a philosopher project. Certificate transparency is a veteran project. Stack canaries are a veteran project. Shadow call stacks are a philosopher project. I think you get the point.

      This thread reveals a surprising split between veterans and philosophers on NAT. In retrospect, it's kinda obvious that the veterans would insist "duh, of course IPv4 NAT prevents inbound connections, and it must, because otherwise the Internet won't work", while the philosopher camp says "Hold up. One thing at a time. What's the actual goal? How can we achieve this goal minimally, without side effects on Internet routing?"

      My camp sees the NAT configuration issue as a red herring. We see "the UX makes it too easy to run unsafe" as an HCI issue distinct from the underlying network architecture. The veterans say "Well, you can't build that button if you have NAT, so we are led not into temptation."

      Both camps have something to contribute, I think, but the divide will never fully disappear.
