Comment by bluGill

4 years ago

If the IRB is any good the professor doesn't get that. Universities are publish or perish, and the IRB should force the withdrawal of all papers they submitted. This might be enough to fire the professor for cause - including removing any tenure protection they might have - which means they get a bad reference.

I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct take time to complete correctly, and I want them to do their job correctly, so I'll give them that time. (There is the possibility that these are good-faith patches and someone in the Linux community just hates this person - that seems unlikely, but until a proper independent investigation is done I'll leave that open.)

See page 9 of the already published paper:

https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...

> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.

  • > The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.

    I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:

    The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.

    https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    • I'm a bit shocked that the IRB gave an exemption letter - are they hoping that the kernel maintainers won't take the (very reasonable) step towards legal action?

      6 replies →

  • > We send the emails to the Linux communityand seek their feedback.

    That's not really what they did.

    They sent the patches; the patches were either merged or rejected.

    And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose until they got caught and people started reverting all the patches from their university and banned the whole university.

    • This is not what happened according to them:

      https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

      > (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

      17 replies →

      > And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose...

      Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".

      Had this not been caught, it would've exposed a major flaw in the process.

      > ...until they got caught and people started reverting all the patches from their university and banned the whole university.

      Either these patches are valid fixes, in which case they should remain, or they are intentional vulnerabilities, in which case they should've already been reviewed and rejected.

      Reverting and reviewing them "at a later date" just makes me question the process. If they haven't been reviewed properly yet, it's better to do it now instead of messing around with reverts.

      11 replies →

  • The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.

    How is this not human research? They experimented on the reactions of people in a non-controlled environment.

      For the IRB, human research means humans as subjects in the research study. The subject of this study is the kernel patch review process. Yes, the review process does involve humans, but the humans (reviewers) are not the research subject. Not defending the study in any way.

      22 replies →

  • This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job it is to protect the university from legal risk, not to identify ethically dubious studies).

    It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.

    • How in the world is conducting behavioral research on kernel maintainers to see how they respond to subtly-malicious patches not "human subject research"?

      13 replies →

    • This reminds me of a few passages in the SSC post on IRBs[0].

      Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?

      All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.

      All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.

      If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and to be able to mass-revert contributions from contributors later determined to be actively malicious.
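      As a rough sketch of that last point, a mass revert can be scripted against the contributor's author email; the domain pattern and repository here are hypothetical placeholders, not what the kernel maintainers actually ran:

      ```shell
      # Sketch only: revert every commit whose author matches a suspect
      # email domain ("@example.edu" is a placeholder), oldest-first so
      # each revert applies on top of the previous one.
      git log --format=%H --author='@example.edu' --reverse |
      while read -r sha; do
          git revert --no-edit "$sha"
      done
      ```

      In practice each revert would still be reviewed individually, since some of the commits may be legitimate fixes.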

      [0] https://slatestarcodex.com/2017/08/29/my-irb-nightmare/

      1 reply →

  • Communities aren’t people? What in the actual fuck is going on with this university’s IRB?!

    • They weren't studying the community, they were studying the patching process used by that community, which a normal IRB would and should consider to be research on a process and therefore not human research. That's how they presented it to the IRB, so it got passed, even though what they were claiming was clearly bullshit.

      This research had the potential to cause harm to people despite not being human research and was therefore ethically questionable at best. Because they presented the research as not posing potential harm to real people that means they lied to the IRB, which is grounds for dismissal and potential discreditation of all participants (their post-graduate degrees could be revoked by their original school or simply treated as invalid by the educational community at large). Discreditation is unlikely, but loss of tenure for something like this is not out of the question, which would effectively end the professor's career anyway.

      8 replies →

    • In my experience with university research, correctly portraying the ethical impact is unfortunately the burden of the researchers, and given their lack of documentation of the request for IRB exemption, the most plausible explanation in my view is that they misconstrued the impact of the research.

      It seems very possible to me that an IRB wouldn't have accepted their proposed methodology if they hadn't received an exemption.

  • > The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

    Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?

This is, at the very least, worth an investigation from an ethics committee.

First of all, this is completely irresponsible: what if the patches had made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.

Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.

The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.

  • > there are many other ethical and legal requirements to academic research besides Title 45.

    Right. It's not just human subjects research. IRBs vet all kinds of research: polling, surveys, animal subjects research, genetics/embryo research (potentially even if not human/mammal), anything which could be remotely interpreted as ethically marginal.

    • If we took the case into the real world and it became "we decided to research how many supports we could remove from this major road bridge before someone noticed", I'd hope the IRB wouldn't just write it off as "not human research so we don't care".

  • I agree. I personally don't care if it meets the official definition of human subject research. It was unethical, regardless of whether it met the definition or not. I think the ban is appropriate, and I wouldn't lose any sleep if the ban were also enacted by other open-source projects and communities.

    It's a real shame, because the university probably has good, experienced people who could contribute to various OSS projects. But how can you trust any of them when the next guy might also be running an IRB-exempt security study?

  • >It's one missed email or one bad timezone mismatch away from releasing the kraken.

    I don't think code commits to the Linux kernel make it to live systems that fast?

    I do agree with the sentiment, though. It's grossly irresponsible to do that without asking at least someone in the kernel developers' group. People don't dig being used as lab rats, and now the whole uni is blocked. Well, tough shit.

    • No, but they're very high-traffic and if the "this was a deliberately bad patch" message is sent off-list, only to the maintainer, things can go south pretty easily. Off-list messages are easy to miss on inboxes whose email is in MAINTAINERS and receive a lot of spam, you can email someone right as they're going on vacation and so on. That's one of the reasons why a lot of development happens on a mailing list.

> I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time

That'd be great, yup. And the Linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.

Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.

  • Personally, I think their data points should include "...and we had to explain ourselves to the FBI."

What about IEEE and the peer reviewers who didn't object to their publications?

I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!

I'm amazed this passed IRB. Consider the analogy:

We presented students with an education protocol designed to make a blinded subset of them fail tests, then measured whether they failed the test to see if they independently learned the true meaning of the information.

Under any sane IRB you would need consent of the students. This is failure on so many levels.

(edit to fix typo)