From someone mostly out of the drama loop, here's my brief recollection:
Generally in the security sphere we consider it the most ethical and responsible to give vendors plenty of time to patch vulnerabilities, especially critical ones, before publishing details or anything that could lead to a working 0-day exploit.
Theo de Raadt was one of the people notified of a previous WiFi exploit, and there was a set length of time during which the vulnerability was to be kept private, so that the (inordinately slow) vendors could create and push/prepare patches. If the patches were released early, it'd be easy to determine what the original vulnerability was.
So, Theo de Raadt decided, in the interest of keeping OpenBSD secure, to push the patch early, effectively letting the whole cat out of the bag. I'm not going to get into the drama of whether that was right, wrong, foolish, wise, whatever, but because of that, he no longer receives these ahead-of-time notifications of vulnerabilities.
There are at least two things wrong with this comment. First, OpenBSD did not push the patch earlier than agreed. Second, OpenBSD did not push the patch without permission.
Mathy originally reported the vulnerability to OpenBSD on July 15 under embargo, and estimated it would be lifted by the end of August (1.5 months after disclosure). Theo argued that 1.5 months was too long, but didn’t push the patch. Then on August 14, Mathy said the final public disclosure date would be October 16 (three months after initial disclosure), but agreed to allow OpenBSD to patch early. Although he didn’t like it, and has since said he would not give such permission again, he agrees that OpenBSD did commit with his permission.
Direct quote from Mathy: “From my point of view, I sent one mail on 14 August where I mentioned the new disclosure date of 16 Oct. In that same mail I also gave the OK to quietly commit a fix.” https://marc.info/?l=openbsd-tech&m=152909822107104&w=2
People portray OpenBSD as a project that ignores embargoes, and point to KRACK as an example. But Theo didn’t ignore the KRACK embargo. Rather, Theo successfully persuaded Mathy to allow OpenBSD to patch the vulnerability a full month and a half after all vendors had been informed.
I’m commenting on this because I think simply pushing back against the length of an embargo should not be characterized as breaking an embargo.
There were a lot of vendors in the KRACK embargo. The risk of the vulnerability leaking to the black market or malicious governments is real. As the length of the embargo increases, this risk increases dramatically. Big vendors are incentivized to pressure researchers to extend the embargo as long as possible. Open source projects are forced to hold off on committing bugfixes, leaving their users potentially vulnerable. If a project pushes back against a long embargo, or through persuasion manages to finagle permission to release an unobtrusive fix “early,” that project is characterized as an untrustworthy embargo breaker and left out of future embargoes. So open source projects are incentivized to sit down, shut up, ignore the threat to their users, and let the big vendors dawdle in their bugfixes.
Thanks for the clarification/correction.
I have just one more thought on the matter. I'm still early in my career, but in the years I've spent so far working with small business-types on security, and watching my colleagues, a month and a half is practically no time at all. I have little love for the big vendors, especially for behavior like this, but the reality I've seen is they often take months to do anything, and it takes further months for customers to actually patch their systems.
So I'm a little sympathetic to the desire to have an embargo of half a year or even longer, even with the downsides mentioned. Still, Theo clearly didn't actually breach his trust with Mathy; that's my mistake.
> Generally in the security sphere we consider it the most ethical and responsible to give vendors plenty of time to patch vulnerabilities
Can you provide more context for this point? As somebody with some experience in infosec, I don’t think that’s actually so clear cut. There are people who believe coordinating with vendors is the right course, and people who believe embargoes compromise users’ ability to make safe choices. There are also people who think the right course depends on the individual vuln/system.
It's not clear-cut, at all. It'd be hard to defend any claim premised on a broad agreement in the field about how to handle disclosure.
In Vanhoef's case, though, he's bound by standards his university has for this stuff, not just his own personal preferences.
> Generally in the security sphere we consider it the most ethical and responsible
I would reword this to say
> Generally in the security sphere we consider it the most obedient
The earlier wording deprives the end-user of the opportunity to know that they are working with broken software and to find an alternative.
That's fair. It's the attitude I've seen the most of in the people I work with/around, and it's rubbed off on me a bit. There are definitely people who believe this is a disservice to the users, and I don't necessarily disagree with them.
Personally, I agree most with tptacek in another comment: this is on a continuum, and depends on the vulnerability, the situation, and who's involved. If there's a good-faith effort to develop and push a patch to a very wide install base of hardware that sysadmins are realistically ignoring (no chance of it being replaced, and impacting people using it in e.g. public places), I think it can be OK to embargo details.
https://www.krackattacks.com/
Probably referring to the internet drama related to silent patching and the disclosure embargo. There are some details here, and others on various mailing lists, including an airing of differences, if you want to look for that sort of thing after making a bowl of popcorn.