Comment by ansible

4 years ago

I still don't get the point of this "research".

You're just testing the review ability of particular Linux kernel maintainers at a particular point in time. How does that generalize to the extent needed for it to be valid research on open source software development in general?

You would need to run this "experiment" hundreds or thousands of times across most major open source projects.

>the point of this "research".

I think it's mostly "finger pointing": you only need one exception to break a rule. If the rule is "open source is more secure than closed source because of community auditing, etc.", then with a paper demonstrating that this rule is not always true, you can write a nice Medium article for your closed-source product, quoting said paper, claiming that your closed-source product is more secure than the open competitor.

  • I don't think this is correct. The authors have contributed a large number of legitimate bugfixes to the kernel. I think they really did believe that process changes could make the kernel safer, and that by doing this research they could encourage that change and make the community better.

    They were grossly wrong, of course. The work is extremely unethical. But I don't believe that their other actions are consistent with a "we hate OSS and want to prove it is bad" ethos.

The Linux kernel is one of the largest open-source projects in existence, so my guess is that they were aiming to show that "because the Linux kernel review process doesn't protect against these attacks, most open-source projects will also be vulnerable" - "if the best can't stop it, neither will the rest".

  • But we have always known that someone with sufficient cleverness may be able to slip vulnerabilities past reviewers of whatever project.

    Exactly how clever? That varies from reviewer to reviewer.

    There will be large projects, with many people that review the code, which will not catch sufficiently clever vulnerabilities. There will be small projects with a single maintainer that will catch just about anything.

    There is a spectrum. Without conducting a wide-scale (and unethical) survey with a carefully calibrated scale of cleverness for vulnerabilities, I don't see how this is useful research.

    • > But we have always known that someone with sufficient cleverness may be able to slip vulnerabilities past reviewers of whatever project.

      ...which is why the interest of this project depends on how clever they were - which I'm not able to evaluate, but which someone would need to evaluate before they could possibly invalidate the idea.

      > (and unethical)

      How is security research unethical, exactly?
