Comment by TeMPOraL

4 years ago

> Not a big loss: these professors likely hate open source.

> They are conducting research to demonstrate that it is easy to introduce bugs in open source...

That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.

(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)

> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during the review process.

Ascribing a salutary motive to sabotage is just as dangerous as assuming a pernicious motive. Suggesting that people "would" likely follow one course of action or another is also dangerous: it is the oldest form of sophistry, the eikos argument of Corax and Tisias. After all, if publishing research rules out pernicious motives, academia suddenly becomes the best possible cover for espionage and state-sanctioned sabotage designed to undermine security.

The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.

The difference between:

"Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"

and

"We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"

is the difference between white-hat and black-hat.

  • It should probably be a private email to Linus Torvalds (or someone in his near chain of patch acceptance), so that some easy-to-scan-for key can be embedded in all test patches. Then the top levels can see what actually made it through review, and in turn figure out who isn't reviewing as well as they should.

    • Yes, someone like Greg K-H. I'm not up to date on the details, but he should be one of the 5 most important people caring for the kernel tree; he would've been exactly the person to seek approval from.
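The "scannable key" protocol suggested above could be sketched roughly as follows; the marker string, helper name, and sample patch data are all hypothetical, invented for illustration:

```python
# Hypothetical sketch: each experimental patch carries a hidden marker
# agreed privately with a top-level maintainer. After the experiment,
# the merged history is scanned to see which test patches slipped
# through review. MARKER and the sample patches are made up.

MARKER = "Research-Tag: a1b2c3"  # pre-agreed secret marker

def find_marked_patches(merged_patches):
    """Return the subset of merged patch texts carrying the marker."""
    return [p for p in merged_patches if MARKER in p]

merged = [
    "fix null deref in foo_driver\n\nSigned-off-by: Alice",
    "refactor bar locking\n\n" + MARKER + "\nSigned-off-by: Mallory",
]

slipped_through = find_marked_patches(merged)
print(len(slipped_through))  # count of test patches that made it past review
```

In practice the scan would run over the actual git history (e.g. commit messages or diffs), but the principle is the same: the maintainers who are in on the experiment can identify every planted patch after the fact, without reviewers being tipped off beforehand.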

Auditability is at the core of its advantage over closed development.

Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and depend on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

  • > Auditability is at the core of its advantage over closed development.

    That's an assertion. A hypothesis is verified through observing the real world. You can do that in many ways, giving you different confidence levels in the validity of the hypothesis. Research such as the one we're discussing here is one of the ways to produce evidence for or against this hypothesis.

    > Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

    It is if there's a review process. Auditability itself is really most interesting before a patch is accepted. Sure, it's nice if vulnerabilities are found eventually, but the longer that takes, the more likely it is they were already exploited. In the case of an intentionally bad patch in particular, the window for reverting it before it does most of its damage is very small.

    In other words, the experiment wasn't testing the entire auditability hypothesis. Just the important part.

    > benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm

    Sure. But the project scope matters. The Linux kernel isn't some random OSS library on GitHub. It's core infrastructure for the planet. The assumption of benevolence works as long as the interested community is small and has little interest in being evil. With infrastructure-level OSS projects, the interested community is very large and contains a lot of malicious actors.

    > Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

    I agree, and in my books, if a legitimate researcher gets banned for such "undercover" research, that's just the flip side of doing such an experiment.

    • I will not address everything, only this point:

      Before a patch is accepted, "auditability" is the same in OSS as in proprietary development, because both pools of engineers in the review groups have similar qualifications, and approximately the same number of people are involved.

      So, the real advantage of OSS is on the auditability after the patch is integrated.
