Comment by bheadmaster
2 years ago
> Proper mitigations I think stem from proper design and threat modeling. Strong, reality-based statements like "this kills these vulnerabilities," or "this kills this CVE; it delays production of an exploit by one week." And also thorough testing by seasoned exploit writers. Anything else is relying on pure luck, superstition, and wishful thinking.
The comment seems to imply that "proper design and threat modeling" must stem from real-world CVEs and proofs of concept. That seems to me like "if nobody heard it, the tree didn't fall" thinking.
I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves. And fortunately, they don't have a manager above them to whom they need to justify their billing hours.
>I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves
Why? On average, programmers are not very good security engineers. And the opposite holds too - security engineers are often not good programmers. If your mitigation doesn't stop any CVE that's being exploited in the wild right now, it's an academic exercise and not particularly useful IMO.
>And fortunately, they don't have a manager above them to whom they need to justify their billing hours.
The point of the thread is that the mitigation cost right now may be low (the "billing hours"), but it's paid in perpetuity by everyone else downstream - in complexity, performance, unexpected bugs, etc. So having a manager or BDFL to evaluate the tradeoffs may be beneficial.
> If your mitigation doesn't stop any CVE that's being exploited right now in the wild, it's an academic exercise and not particularly useful IMO.
If your only metric of security is "fixed CVEs", then you're rewarding mistakes that were rectified later and punishing a proactive approach to security that actually makes fewer CVEs appear in the first place.
And Theo's reputation and influence on security are evidence that what he does is more than just an "academic exercise". E.g. he created OpenSSH.
> The point of the thread is that the mitigation cost right now may be low (the "billing hours"), but it's paid in perpetuity by everyone else downstream - in complexity, performance, unexpected bugs, etc.
While that may or may not be the pattern in general, it is not a rule, and it especially doesn't apply to OpenBSD development. OpenBSD is widely regarded as one of the cleanest and most robust (free software) codebases ever.
You're mischaracterizing their logic. They're saying it's a necessary but not sufficient metric. You can't then shoot it down for being not-sufficient; we all agree about that.
It's not my recollection that Theo created OpenSSH, for what it's worth. My memory of this is that it was mostly Niels and Markus who did the lifting.
You might do some digging on Theo's reputation among exploit developers. It's complicated.
> E.g. he created OpenSSH.
OpenSSH is a fork of Tatu Ylönen's SSH from when it was not proprietary.
>> I'm sure OpenBSD developers have very good intuition on what could be used in a vulnerability, without having to write one themselves
> Why?
Exactly, PoC||GTFO! :)
But wouldn't providing such a proof-of-concept implementation immediately paint a bull's eye on all pre-current (and/or not appropriately syspatched) boxes in the wild?
That’s why you invest in closing the patch gap.
I wouldn't call OpenBSD programmers average.
They famously do not. That's OK, it's a trait shared by a lot of hardening developers on other platforms, too --- all of them are better at this than I'll ever be. But the gulf of practical know-how between OS developers and exploit developers has been a continuing source of comedy for something like two decades now. Search Twitter for "trapsled" or "RETGUARD", for instance.
> But the gulf of practical know-how between OS developers and exploit developers has been for something like 2 decades now a continuing source of comedy
Are you implying that OS developers are 2 decades behind exploit developers? If so, is there any proof of that claim, e.g. OpenBSD exploits?
Or are you implying that OS developers are 2 decades ahead of exploit developers? If so, how is that a bad thing?
Neither, I'm saying that for the past 2 decades, the conventional wisdom in the space has been that OS hardening efforts were some significant quantum of time behind exploit developers, but certainly not "2 decades" worth.
It's an aggregate sentiment, right? There are some mitigations that I think legitimately did set back exploit development, but on the whole I think the sentiment has been that OS hardening mitigations have been not just reactive, but reactive to exploit development that is some significant quantum of time behind the current state of the art.
By way of example, I think people made fun of the original OpenBSD system call mitigation stuff described at the beginning of this post. I have no idea what the consensus would be on this new iteration of the idea.