Comment by tptacek
2 years ago
Classic thread on this stuff from Halvar Flake:
https://twitter.com/halvarflake/status/1156815950873804800
With that in mind, it'd be handy to know which exploit techniques these steps break, and whether those steps are in the current "meta" game for exploit developers.
(The specific mitigation here: the kernel formerly locked system call invocation down to the libc.so area of program text in memory; libc.so is big, so OpenBSD now locks each system call down to its specific libc stub. Further, in static binaries, the same mechanism restricts a program to only the system calls it actually uses, which effectively disables every system call not explicitly invoked by the program text.)
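Roughly, you can picture the kernel-side check like the sketch below (my own minimal C illustration; struct pin and syscall_origin_ok are invented names, not OpenBSD's actual implementation):

    /* Hypothetical per-syscall pin table: for each syscall number, the
       address range of the libc stub that is allowed to issue it. */
    #include <stddef.h>
    #include <stdint.h>

    struct pin {
        uintptr_t start;   /* start of the pinned stub for this syscall */
        size_t    len;     /* stub length; 0 = syscall never pinned     */
    };

    /* Return nonzero iff a syscall with number `code`, issued from
       `caller_pc`, originates inside its pinned libc stub. */
    static int
    syscall_origin_ok(const struct pin *table, int code, uintptr_t caller_pc)
    {
        const struct pin *p = &table[code];
        if (p->len == 0)
            return 0;                      /* syscall not pinned: deny */
        return caller_pc >= p->start && caller_pc - p->start < p->len;
    }

The interesting case is len == 0: a static binary that never references a given system call ends up with no pinned range for it, so that call is simply unreachable from the program's text.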
Indeed, in CCC's "systematic evaluation of OpenBSD's mitigations"[0] the presenter explicitly calls out OpenBSD's tendency to present mitigations without specific examples of CVEs it defeats or exploit techniques the mitigations are known to defend against:
> Proper mitigations I think stem from proper design and threat modeling. Strong, reality-based statements like "this kills these vulnerabilities," or "this kills this CVE; it delays production of an exploit by one week." And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking.
Some of OpenBSD's mitigations are excellent and robustly defensive; others are amorphous and not particularly useful.
[0]: https://youtu.be/3E9ga-CylWQ?feature=shared&t=2770
> Proper mitigations I think stem from proper design and threat modeling. Strong, reality-based statements like "this kills these vulnerabilities," or "this kills this CVE; it delays production of an exploit by one week." And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking.
The comment seems to imply that "proper design and threat modeling" must stem from real-world CVEs and proofs of concept. That seems to me like "if nobody heard it, the tree didn't fall" kind of thinking.
I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves. And fortunately, they don't have a manager above them to whom they need to justify their billing hours.
>I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves
Why? On average, programmers are not very good security engineers. And the opposite holds: security engineers are often not good programmers. If your mitigation doesn't stop any CVE that's being exploited right now in the wild, it's an academic exercise and not particularly useful IMO.
>And fortunately, they don't have a manager above them to whom they need to justify their billing hours.
The point of the thread is that the mitigation cost right now may be low (the "billing hours"), but it's paid in perpetuity by everyone else downstream - in complexity, performance, unexpected bugs, etc. So having a manager or BDFL to evaluate the tradeoffs may be beneficial.
They famously do not. That's OK; it's a trait shared by a lot of hardening developers on other platforms, too, all of whom are better at this than I'll ever be. But the gulf of practical know-how between OS developers and exploit developers has been a continuing source of comedy for something like two decades now. Search Twitter for "trapsled" or "RETGUARD", for instance.
OpenBSD disabled hyperthreading before speculative execution attacks were in the wild. In the words of Greg K-H, “OpenBSD was right”.
There probably is some amount of security theatre in OpenBSD, but they have also mitigated attacks which weren’t even known to exist.
>they have also mitigated attacks which weren’t even known to exist
Indeed, I'm reminded of some other comments that tptacek made in a recent thread, about how encrypting vulnerability disclosures "just isn't done":
https://news.ycombinator.com/item?id=38569179
I'll bet the NSA is very happy about this situation and is doing everything they can to keep the gravy train rolling.
I thought the entire point of being a good security person was that you're able to anticipate and defend against attacks before they become known... Isn't that what "security mindset" is supposed to entail?
OpenBSD doesn't even have hyperthreading? Why does anyone use this OS? The Linux developers put in a lot of effort to make hyperthreading actually work for their kernel rather than ignoring it.
There have been cases where OpenBSD's hypothetical mitigations have worked out well for the project. I recall a relatively recent DNS cache poisoning attack that OpenBSD had already pre-emptively mitigated, because something (I think it was the source port?) was "needlessly" random.
If a mitigation has negligible performance impact, and doesn't introduce a new attack vector, I can't imagine why it would be seen as a bad thing.
> If a mitigation has negligible performance impact, and doesn't introduce a new attack vector, I can't imagine why it would be seen as a bad thing.
Because it creates confusion about your threat model, which can ultimately weaken your security.
Every mitigation is code and complexity. There is always a cost.
> Classic thread on this stuff from Halvar Flake:
That's from four years ago and does not address these technical issues. Are you going to pull it out every time OpenBSD is mentioned? I think people understand that you don't like their approach, the flaws you see in it, and that OpenBSD isn't designed for your interests.
>I think people understand that you don't like their approach, etc., and the flaws you see, and that OpenBSD isn't designed for your interests.
OpenBSD isn't designed for anyone's interests.
https://isopenbsdsecu.re/mitigations/
Only if you personally get to define other people's interests. Apparently they disagree!
"That's from four years ago" is a funny rebuttal to "classic thread".
https://nitter.net/halvarflake/status/1156815950873804800
for those who don't have an X account
Is there a current meta for OpenBSD exploit developers?
What's the right way to go about hardening the system if there's no meta to observe?
My very naive take would be something like: A successful exploit depends on jumping through a number of different hoops. Each of those hoops has an estimated success probability associated with it. We can multiply all the individual probabilities together to get an estimated probability of successful exploit -- assuming that hoop probabilities are independent, which seems reasonable? The most efficient way to harden against exploits is to try and shrink whichever hoop possesses the greatest partial derivative of overall exploit success probability with respect to developer time.
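To make that concrete, here is a toy C sketch of the prioritization rule (my own illustration; the hoop names, probabilities, and the "reduction per developer-hour" cost model are all invented):

    /* Toy model: overall exploit success = product of per-hoop probabilities
       (hoops assumed independent). Pick the hoop where one developer-hour
       shrinks that product the most. All numbers here are made up. */
    #include <stdio.h>

    struct hoop {
        const char *name;
        double p;            /* chance an attacker clears this hoop       */
        double dp_per_hour;  /* guessed reduction in p per developer-hour */
    };

    int main(void) {
        struct hoop hoops[] = {
            { "info leak",           0.9, 0.01 },
            { "control-flow hijack", 0.5, 0.05 },
            { "sandbox escape",      0.3, 0.02 },
        };
        int n = sizeof hoops / sizeof hoops[0];

        double total = 1.0;
        for (int i = 0; i < n; i++)
            total *= hoops[i].p;

        /* d(total)/d(hours on hoop i) = (total / p_i) * dp_i/dhour */
        int best = 0;
        double best_gain = 0.0;
        for (int i = 0; i < n; i++) {
            double gain = (total / hoops[i].p) * hoops[i].dp_per_hour;
            if (gain > best_gain) { best_gain = gain; best = i; }
        }
        printf("overall p = %.3f; harden \"%s\" first (-%.4f per hour)\n",
               total, hoops[best].name, best_gain);
        return 0;
    }

Under the independence assumption, the sensitivity of the overall probability to work on hoop i is just (total / p_i) * dp_i/dhour, so the rule favors hoops that are both likely to be cleared and cheap to shrink.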
The meta doesn’t exist because nobody targets OpenBSD, since it’s not used. People’s analysis of it is mostly just an educated guess as to how work on other platforms would carry over.
There is also a "soft" mitigations vs. "hard" mitigations guideline, as described here [1].
It's handy for designing new mitigations when there's no meta game to observe.
[1] https://googleprojectzero.blogspot.com/2023/08/mte-as-implem...
> The most efficient way to harden against exploits is to try and shrink whichever hoop possesses the greatest partial derivative of overall exploit success probability with respect to developer time.
Depending on your definition of efficient, adding more hoops should work exponentially better.
My definition of efficient is essentially whatever decreases the number of workable exploits most rapidly per hour of developer time.
>Depending on your definition of efficient, adding more hoops should work exponentially better.
Explain?