Comment by toxik

4 years ago

The problem here is really that they’re wasting the maintainers’ time without their approval. Any ethics board would require prior consent for this. It wouldn’t even be hard to obtain.

> The problem here is really that they’re wasting the maintainers’ time without their approval.

Not only that, but they are also running experiments on a community of people against its interests, which could be harmful by creating mistrust. Trust is a big issue; without it, it is almost impossible for people to work together meaningfully.

  • Yeah, this actually seems more like sociological research, except that since it’s in the comp sci department, the investigators don’t seem to be trained in the acceptable (and legal) standards for conducting such research on human subjects. You definitely need prior consent when doing this sort of thing. Ideally this would be escalated to a research ethics committee at UMN, because these researchers need to be trained in acceptable practices for dealing with human subjects. So to me it makes sense that the subjects “opted out” and escalated to the university.

    • Already cited in another comment:

      > We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.

      So they did think of that. Either they misrepresented their research or the IRB messed up. Either way, they can now see for themselves exactly how human a pissed-off maintainer is.


  • Besides that, if their "research" patch made it into a release, it could put thousands or millions of users at risk.

1) They identified vulnerabilities in a process. 2) They contributed the correct code after showing the maintainers the security vulnerability they had missed. 3) Getting the consent of the people behind the process would have invalidated the results.

  • Go hack a random organization that has no vulnerability disclosure program in place and see how much goodwill you get. There is a well-established best practice for responsible disclosure, and this is far from it.

    • Also, by and large, reputation is a good first step in a security process.

      While any USB stick that has ever been out of your sight might have malware on it, the one you found in the parking lot is a much bigger problem.

  • > 3) Getting the consent of the people behind the process would invalidate the results.

    This has not been a valid excuse since the 1950s. Scientists are not allowed to ignore basic ethics because they want to discover something. Deliberately introducing bugs into any open source project is plainly unethical; doing so in the Linux kernel is borderline malicious.

    • We should ban A/B testing, then. Google didn’t tell me they were using me to figure out which link color is more profitable for them.

      There are experiments, and then there are experiments. Apart from the fact that they provided the fix right away, they didn’t do anyone any harm.

      And, by the way, it’s their job. Maintainers must approve patches only after ensuring that the patch is fine. It’s okay to make mistakes, but don’t tell me “you’re wasting my time” after I’ve shown you that maybe there’s something wrong with the process. If anything, you should thank me and review the process.

      If your excuse is “you knew the patch was vulnerable”, then how are you going to defend the project from bad actors?


    • No bugs were introduced, and they didn't intend to introduce any. In fact, they have resolved more than 1,000 bugs in the Linux kernel.

      >> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.... "We did not introduce or intend to introduce any bug or vulnerability in the Linux kernel. All the bug-introducing patches stayed only in the email exchanges, without being adopted or merged into any Linux branch, which was explicitly confirmed by maintainers. Therefore, the bug-introducing patches in the email did not even become a Git commit in any Linux branch. None of the Linux users would be affected. The following shows the specific procedure of the experiment"


  • You're right, and it is depressing how negative the reaction has been here. This work is the technical equivalent of "Sokalling", and it is a good and necessary thing.

    The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities. Forget the researchers for a moment: if it is this easy, you can be certain that malicious actors are also doing it. The only difference is that they are not then disclosing that they have done so!

    The Linux maintainers should be grateful that researchers are doing this, and researchers should be doing it to every significant open source project.

    • > The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities

      They trusted contributors not to be malicious and, in particular, trusted a university not to be wholly malicious.

      Sure, there is a possible threat model where they would need to be suspicious of entire universities.

      But in general, human projects have to operate under some level of basic trust, with some means of establishing that trust, in order to actually get anything done; you cannot perfectly formally review everything with finite human resources. I don't see where they went wrong with any of that here.

      There's also the very simple fact that responding to an incident is part of the security process, and broadly banning a group wholesale will be more secure than not. So both they and you are getting what you want out of it: more of the process to research, and more security.

      If the changes didn't make it out to production systems, then it seems like the process worked? Even if some of that was due to admissions that truly malicious actors would never make, the patches were only accepted in the first place because the actors were reasonably trusted.
