Comment by henearkr

4 years ago

Not a big loss: these professors likely hate open source. [edit: they do not. See child comments.]

They are conducting research to demonstrate that it is easy to introduce bugs in open source...

(whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

[removed this rant, which does not apply, since they also contribute a lot to the kernel in good ways]

> Not a big loss: these professors likely hate open source.

> They are conducting research to demonstrate that it is easy to introduce bugs in open source...

That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.

(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)

> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

How do we know that? We know things by regularly testing them. That's literally what this research is: checking how likely it is that intentional vulnerabilities are caught during the review process.

  • Ascribing a salutary motive to sabotage is just as dangerous as assuming a pernicious motive. Suggesting that people "would" likely follow one course of action or another is also dangerous: it is the oldest form of sophistry, the eikos argument of Corax and Tisias. After all, if publishing research rules out pernicious motives, academia suddenly becomes the best possible cover for espionage and state-sanctioned sabotage designed to undermine security.

    The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.

  • The difference between:

    "Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"

    and

    "We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"

    is the difference between white-hat and black-hat.

    • It should probably be a private email to Linus Torvalds (or someone near him in the chain of patch acceptance), so that an easy-to-scan-for key can be introduced in all patches. Then the top levels can see what actually made it through review, and in turn figure out who isn't reviewing as well as they should.

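      A rough sketch (in Python, using an invented marker string and an assumed release range) of what that "scan for the key" step could look like once the experiment window closes:

          #!/usr/bin/env python3
          # Hypothetical sketch: assumes every experimental patch embeds an
          # agreed-upon marker string somewhere in its diff, so maintainers
          # can later list which of those patches survived review.
          import subprocess

          MARKER = "EXP-MARKER-2021"  # invented example; the real key would be agreed on privately
          RANGE = "v5.11..v5.12"      # assumed release window covering the experiment

          # `git log -S` (pickaxe) lists commits whose diffs add or remove the marker.
          landed = subprocess.run(
              ["git", "log", "--oneline", "-S", MARKER, RANGE],
              capture_output=True, text=True, check=True,
          ).stdout

          print("Experimental patches that made it through review:")
          print(landed or "(none)")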

  • Auditability is at the core of its advantage over closed development.

    Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

    To address your first criticism: benevolence, and assuming that everyone wants the best for the project, is very important in these models, because the resources are limited and depend on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

    • > Auditability is at the core of its advantage over closed development.

      That's an assertion. A hypothesis is verified by observing the real world. You can do that in many ways, each giving you a different level of confidence in the hypothesis's validity. Research such as the study we're discussing here is one way to produce evidence for or against it.

      > Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

      It is if there's a review process. Auditability itself is really most interesting before a patch is accepted. Sure, it's nice if vulnerabilities are found eventually, but the longer that takes, the more likely it is they were already exploited. In the case of an intentionally bad patch in particular, the window for reverting it before it does most of its damage is very small.

      In other words, the experiment wasn't testing the entire auditability hypothesis. Just the important part.

      > benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm

      Sure. But the project's scope matters. The Linux kernel isn't some random OSS library on GitHub; it's core infrastructure for the planet. The assumption of benevolence works as long as the interested community is small and has little interest in being evil. With infrastructure-level OSS projects, the interested community is very large and contains a lot of malicious actors.

      > Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

      I agree, and in my books, if a legitimate researcher gets banned for such "undercover" research, it's just the flip side of doing such an experiment.


> It's likely a university with professors that hate open source.

This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.

  • Seems like a reasonable default assumption to me, until the people repeatedly attempting to sabotage the open source community condescend to -- you know -- stop doing it and then explain wtf they are thinking.

  • [Edit: they seem to truly love OSS. See child comments. Sorry for my erroneous judgement. It reminded me too much of anti-open-source FUD; I'm probably having PTSD from that time...]

    I fixed my sentence.

    I still think that these professors, whether genuinely or through a lack of willingness, do not understand the mechanism by which free software achieves its greater quality compared to proprietary software (which is a fact).

    They just remind me of the good old days of FUD against open source from Microsoft and its minions...

At least in the university where I did my studies, each professor had their own way of thinking and you could not group them into any one basket.

> the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

That's not true at all. There are many internet-critical projects with tons of holes that go unfound for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc., to find bugs and security holes. Most open source projects don't even have tests.

Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
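To make that concrete, here is a minimal sketch of what "actually looking" means in practice, using property-based testing (assuming the hypothesis library and an invented parse_record function standing in for whatever code is under audit):

    # Minimal sketch of active bug-hunting via property-based testing.
    # `parse_record` is an invented stand-in for the code under audit.
    from hypothesis import given, strategies as st

    def parse_record(data: bytes) -> dict:
        # Placeholder implementation; the real target would be project code.
        return {"length": len(data)}

    @given(st.binary())
    def test_parse_record_never_crashes(data):
        # Throw arbitrary byte strings at the parser; tools like this (plus
        # fuzzers, sanitizers, and static analysis) find what casual readers won't.
        parse_record(data)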

> (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?

Sarcasm aside, because of how FOSS software is packaged on Linux, we've seen critical security bugs introduced by package maintainers into software that didn't have them!

  • You need to compare what happens with vulnerabilities in OSS vs in proprietary.

    A maintainer's package is just one more piece of open source software (thus also in need of reviews and audits)... which is why some people prefer upstream-source-based distros, such as Gentoo, Arch when you use git-based AUR packages, or LFS for the hardcore fans.

    • > You need to compare what happens with vulnerabilities in OSS vs in proprietary.

      Yes, you do need to make that comparison. Taking it as a given without analysis is the same as trusting the proprietary software vendors who claim to have robust QA on everything.

      Security is hard work and different from normal review. The number of people who hypothetically could do it is much greater than the number who actually do, especially if there isn’t an active effort to support that type of analysis.

      I’m not a huge fan of this professor’s research tactic but I would ask what the odds are that, say, an intelligence agency isn’t doing the same thing but with better concealment. Thinking about how to catch that without shutting down open-source contributions seems like an important problem.