Comment by vngzs

2 years ago

Indeed, in CCC's "systematic evaluation of OpenBSD's mitigations"[0] the presenter explicitly calls out OpenBSD's tendency to present mitigations without specific examples of the CVEs they defeat or the exploit techniques they are known to defend against:

> Proper mitigations I think stem from proper design and threat modeling. Strong, reality-based statements like "this kills these vulnerabilities," or "this kills this CVE; it delays production of an exploit by one week." And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking.

Some of OpenBSD's mitigations are excellent and robustly defensive; others are amorphous and not particularly useful.

[0]: https://youtu.be/3E9ga-CylWQ?feature=shared&t=2770

> Proper mitigations I think stem from proper design and threat modeling. Strong, reality-based statements like "this kills these vulnerabilities," or "this kills this CVE; it delays production of an exploit by one week." And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking.

The comment seems to imply that "proper design and threat modeling" must stem from real-world CVEs and proofs of concept. That seems to me like "if nobody heard it, the tree didn't fall" kind of thinking.

I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves. And fortunately, they don't have a manager above them to whom they need to justify their billing hours.

  • >I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves

    Why? On average, programmers are not very good security engineers. And the opposite holds too: security engineers are often not good programmers. If your mitigation doesn't stop any CVE that's being exploited right now in the wild, it's an academic exercise and not particularly useful IMO.

    >And fortunately, they don't have a manager above them to whom they need to justify their billing hours.

    The point of the thread is that the mitigation cost right now may be low (the "billing hours"), but it's paid in perpetuity by everyone else downstream - in complexity, performance, unexpected bugs, etc. So having a manager or BDFL to evaluate the tradeoffs may be beneficial.

    • > If your mitigation doesn't stop any CVE that's being exploited right now in the wild, it's an academic exercise and not particularly useful IMO.

      If your only metric of security is "fixed CVEs", then you're rewarding mistakes that were rectified later, and punishing a proactive approach to security that actually makes fewer CVEs appear in the first place.

      And Theo's reputation and influence on security are evidence that what he does is more than just an "academic exercise". E.g. he created OpenSSH.

      > The point of the thread is that the mitigation cost right now may be low (the "billing hours"), but it's paid in perpetuity by everyone else downstream - in complexity, performance, unexpected bugs, etc.

      While that may or may not be the pattern in general, it is not a rule, and especially doesn't apply in OpenBSD development. OpenBSD is widely regarded as one of the cleanest and most robust (free software) codebases ever.


    • >> I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves

      > Why?

      Exactly, PoC||GTFO! :)

      But wouldn't providing such a proof-of-concept implementation immediately paint a bull's-eye on all pre-current (and/or not appropriately syspatched) boxes in the wild?


  • They famously do not. That's OK, it's a trait shared by a lot of hardening developers on other platforms, too --- all of them are better at this than I'll ever be. But the gulf of practical know-how between OS developers and exploit developers has been a continuing source of comedy for something like two decades now. Search Twitter for "trapsled", or "RETGUARD", for instance.

    • > But the gulf of practical know-how between OS developers and exploit developers has been for something like 2 decades now a continuing source of comedy

      Are you implying that OS developers are 2 decades behind exploit developers? If so, is there any proof of that claim, e.g. OpenBSD exploits?

      Or are you implying that OS developers are 2 decades ahead of exploit developers? If so, how is that a bad thing?


OpenBSD disabled hyperthreading before speculative execution attacks were seen in the wild. In the words of Greg K-H, "OpenBSD was right".

There probably is some amount of security theatre in OpenBSD but they have also mitigated attacks which weren’t even known to exist.

  • >they have also mitigated attacks which weren’t even known to exist

    Indeed, I'm reminded of some other comments that tptacek made in a recent thread, about how encrypting vulnerability disclosures "just isn't done":

    https://news.ycombinator.com/item?id=38569179

    I'll bet the NSA is very happy about this situation and is doing everything they can to keep the gravy train rolling.

    I thought the entire point of being a good security person was that you're able to anticipate and defend against attacks before they become known... Isn't that what "security mindset" is supposed to entail?

    • NSA doesn't care about your emailed vulnerability report. They're not spending their own money when they buy zero-day bug chains in platforms people actually use, and even if they were, those bug chains are so ludicrously cheap relative to their utility that any sigint (or law enforcement, for that matter) organization in the world, from Canada to El Salvador, can cheerfully afford them.

      Even if your emailed report was a complete bug chain and not, like, an X-Frame-Options redressing issue, it would be harder, and probably more expensive, for NSA to pick the bug up from email than it would be for them to simply fill out a purchase order from one of their private partners.

      As always it is helpful to remember as well that NSA's mission is to secure budget for NSA, full stop.


    • He’s not wrong, though. Security researchers typically don’t use PGP when reporting vulnerabilities.

There have been cases where OpenBSD's hypothetical mitigations have worked out well for the project. I recall a relatively recent DNS cache-poisoning attack that OpenBSD had pre-emptively mitigated, because something (I think it was the source port?) was "needlessly" random.
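The pre-emptive value of that "needless" randomness is easy to quantify: a blind off-path spoofer has to match every unpredictable field in the resolver's outstanding query. A back-of-the-envelope sketch in Python (the ephemeral port range below is an illustrative assumption, not OpenBSD's exact policy):

```python
import random

# A blind DNS spoofer must forge a reply that matches the resolver's
# outstanding query. With a fixed source port, only the 16-bit
# transaction ID (TXID) is unknown to the attacker.
TXID_SPACE = 2 ** 16                  # 65,536 possible TXIDs

# With randomized source ports, the attacker must also guess the UDP
# source port. Assumed ephemeral range for illustration: 1024-65535.
EPHEMERAL_PORTS = 65536 - 1024        # 64,512 candidate ports

fixed_port_space = TXID_SPACE
random_port_space = TXID_SPACE * EPHEMERAL_PORTS

print(fixed_port_space)               # 65536
print(random_port_space)              # 4227858432, roughly 2**32

# Picking a randomized source port for an outgoing query (sketch):
src_port = random.randrange(1024, 65536)
```

Randomizing the port multiplies the spoofer's search space from about 2^16 to about 2^32 guesses per outstanding query, which is why resolvers that already did it fared well when the attack became public.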

If a mitigation has negligible performance impact, and doesn't introduce a new attack vector, I can't imagine why it would be seen as a bad thing.

  • > If a mitigation has negligible performance impact, and doesn't introduce a new attack vector, I can't imagine why it would be seen as a bad thing.

    Because it creates confusion about your threat model, which can ultimately weaken your security.