Comment by InsomniacL

4 years ago

1) They identified vulnerabilities in a process. 2) They contributed the correct code after showing the maintainer the security vulnerability they missed. 3) Getting the consent of the people behind the process would invalidate the results.

Go hack a random organization without a vulnerability disclosure program in place and see how much goodwill you have. There is a well-established best practice for responsible disclosure, and this is far from it.

  • Also, by and large, reputation is a good first step in a security process.

    While any USB stick might have malware on it if it's ever been out of your sight, that one you found in the parking lot is a much bigger problem.

  • Propose a way to test this without invalidating the results.

    • 1) Contact a single maintainer and explore the feasibility of the study. 2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it. 3) Orchestrate it so that someone outside the knowledge group approves one or more of these patches. 4) Intervene before any further damage is done.

      Besides, are you arguing that the ends justify the means if the intent behind the research is valid?

    • In every commercial pentest I have been part of, you have one or two (usually senior) employees on the blue team who are in the know. Their job is to stop employees from going too far on defense, as well as to stop the pentesters from going too far. The rest of the team stays in the dark to test their response and observation.

      In this case, in my opinion, a small set of maintainers and Linus as "management" would have to be in the know, e.g. to stop a merge of such a patch once it was accepted by someone in the dark.

    • There doesn't have to be a way.

      Kernel maintainers are volunteering their time and effort to make Linux better, not to be entertaining test subjects for the researchers.

      Even if there is no ethical violation, they are justified in being annoyed at having their time wasted, and in taking measures to discourage and prevent such malicious behaviour in the future.

    • If you can’t run an experiment without violating ethical standards, you simply don’t run it; you can’t use the experiment as an excuse to violate those standards.

> 3) Getting the consent of the people behind the process would invalidate the results.

This has not been a valid excuse since the 1950s. Scientists are not allowed to ignore basic ethics because they want to discover something. Deliberately introducing bugs into any open source project is plainly unethical; doing so in the Linux kernel is borderline malicious.

  • We should ban A/B testing then. Google didn’t tell me they were using me to understand which link color is more profitable for them.

    There are experiments, and then there are experiments. Apart from the fact that they provided the fix right away, they didn’t do anyone any harm.

    And, by the way, it’s their job. Maintainers are supposed to approve patches only after ensuring that the patch is fine. It’s okay to make mistakes, but don’t tell me “you’re wasting my time” after I showed you that maybe there’s something wrong with the process. If anything, you should thank me and review the process.

    If your excuse is “you knew the patch was vulnerable”, then how are you going to defend the project from bad actors?

    • > they didn’t do anyone harm.

      Several of the patches are claimed to have landed in stable. Also, distributions and others (like the grsecurity people) pick up LKML patches that are not included in stable but might have security benefits. So even just publishing such a patch is harmful. Also, it seems the fixes were only provided to the maintainers privately, and unsuccessfully. Or not at all.

      > If your excuse is “you knew the patch was vulnerable”, then how are you going to defend the project from bad actors?

      Exactly the same way as without that "research".

      If you try to pry open my car door, I'll drag you to the nearest police station. "I'm just researching the security of car doors" won't help you.

    • Actually, I think participants in an A/B test should be informed of it.

      I think people should be informed when market research is being done on them.

      For situations where they are already invested, participation should be optional.

      For other situations, such as new customer acquisition, the person would have the option of simply leaving the site to avoid it.

      But either way, they should be informed.

    • > We should ban A/B testing then. Google didn’t tell me they were using me to understand which link color is more profitable for them.

      Yes please.

  • No bugs were introduced, and they didn't intend to introduce any bugs. In fact, they have resolved over 1,000 bugs in the Linux kernel.

    >> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.... "We did not introduce or intend to introduce any bug or vulnerability in the Linux kernel. All the bug-introducing patches stayed only in the email exchanges, without being adopted or merged into any Linux branch, which was explicitly confirmed by maintainers. Therefore, the bug-introducing patches in the email did not even become a Git commit in any Linux branch. None of the Linux users would be affected. The following shows the specific procedure of the experiment"

You're right, and it is depressing how negative the reaction has been here. This work is the technical equivalent of "Sokalling", and it is a good and necessary thing.

The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities. Forget the researchers for a moment - if it is this easy, you can be certain that malicious actors are also doing it. The only difference is that they are not then disclosing that they have done so!

The Linux maintainers should be grateful that researchers are doing this, and researchers should be doing it to every significant open source project.

  • > The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities

    They trusted contributors not to be malicious, and in particular trusted a university not to be wholly malicious.

    Sure, there is a possible threat model where they would need to be suspicious of entire universities.

    But in general, human projects will operate under some level of basic trust, with some sort of means to establish that trust, in order to actually get anything done; you cannot perfectly and formally review everything with finite human resources. I don't see where they went wrong with any of that here.

    There's also the very simple fact that responding to an incident is also part of the security process, and broadly banning a group wholesale will be more secure than not. So both they and you are getting what you want out of it: more of the process to research, and more security.

    If the changes didn't make it out to production systems, then it seems like the process worked? Even if some of that was due to admissions that would not happen with truly malicious actors, the patches were also only accepted because the actors were reasonably trusted.

    • The Linux project absolutely cannot trust contributors to not be malicious. If they are doing that, then this work has successfully exposed a risk.

Getting specific consent from the project leads would have been entirely doable, and would have avoided most of the concerns.

  • It really wouldn't have, and it would have meant the patches could not have passed through all levels of review.

    • How do you think social engineering audits work? You first coordinate with the top layer (in private, of course) and only after getting their agreement do you start your tests. This isn't any different.
