Comment by duncaen
4 years ago
See page 9 of the already published paper:
https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.
I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:
The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
I'm a bit shocked that the IRB gave an exemption letter - are they hoping that the kernel maintainers won't take the (very reasonable) step towards legal action?
What "legal action" do you think applies here?
4 replies →
I'd guess they may not have understood what actually happened, or were leaning heavily on the IEEE reviewers having had no issues with the paper, as by that point it had already been accepted.
> We send the emails to the Linux community and seek their feedback.
That's not really what they did.
They sent the patches, and the patches were either merged or rejected.
And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose, until they got caught and people started reverting all the patches from their university and banned the whole university.
This is not what happened according to them:
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
> (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
It'd be great if they pointed to those "please don't merge" messages on the mailing list or anywhere.
Seems like there are some patches already on stable trees [1], so they're either lying, or they didn't care if those "don't merge" messages made anybody react to them.
1 - https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...
9 replies →
I was more ambivalent about their "research" until I read that "clarification." It's weaselly bullshit.
>> The work taints the relationship between academia and industry
> We are very sorry to hear this concern. This is really not what we expected, and we strongly believe it is caused by misunderstandings
Yeah, misunderstandings by the university that anyone, ever, in any line of endeavor would be happy to be purposely fucked with as long as the perpetrator eventually claims it's for a good cause. In this case the cause isn't even good, they're proving the jaw-droppingly obvious.
5 replies →
This is zero percent different from a bad actor and hopefully criminal. I think a lot of maintainers work for large corporations like Microsoft, Oracle, Ubuntu, Red Hat, etc... I think these guys really stepped in it.
> And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose...
Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".
Had this not been caught, it would've exposed a major flaw in the process.
> ...until they got caught and people started reverting all the patches from their university and banned the whole university.
Either these patches are valid fixes, in which case they should remain, or they are intentional vulnerabilities, in which case they should've already been reviewed and rejected.
Reverting and reviewing them "at a later date" just makes me question the process. If they haven't been reviewed properly yet, it's better to do it now instead of messing around with reverts.
This reminds me of that story about Go Daddy sending everyone "training phishing emails" announcing that they had received a company bonus - with the explanation that this is ok because it is a realistic pretext that real phishing may use.
While true, it's simply not acceptable to abuse trust in this way. It causes real emotional harm to real humans, and while it also may produce some benefits, those do not outweigh the harms. Just because malicious actors don't care about the harms shouldn't mean that ethical people shouldn't either.
8 replies →
> Yes, that's the whole point!
Well, in real life, you can't go punch someone in the face to teach them a "point". If you do so, you'll get punished.
> Reverting and reviewing them "at a later date" just makes me question the process.
I don't think anybody realistically thought that the kernel review process was rock solid against malicious actors anyway. What exactly does the paper expose?
> Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".
This just turns the researchers into black hats. They are just making it look like "a research paper."
The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.
How is this not human research? They experimented on the reactions of people in a non-controlled environment.
Sounds like the IRB of UMn needs some scrutiny as well.
For the IRB, "human research" means humans are the subjects of the research study. The subject of this study is the kernel patch review process. Yes, the review process does involve humans, but the humans (reviewers) are not the research subject. Not defending the study in any way.
> Yes, the review process does involve humans
It doesn’t just “involve humans” it is first and foremost the behavior of specific humans.
> but the humans (reviewers) are not the research subject.
The study is exactly studying their behavior in a particular context. They are absolutely the subjects.
21 replies →
This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job is to protect the university from legal risk, not to identify ethically dubious studies).
It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.
How in the world is conducting behavioral research on kernel maintainers to see how they respond to subtly-malicious patches not "human subject research"?
In the restricted sense of Title 45, Part 46, it's probably not quite human subject research (see https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... ).
Of course, there are other ethical and legal requirements that you're bound to, not just this one. I'm not sure which requirements IRBs in the US look into though, it's a pretty murky situation.
7 replies →
Are you expecting that science and institutions are rational? If I were on the IRB, I wouldn't have considered this, since it's not a sociological experiment on kernel maintainers; it's an experiment to inject vulnerabilities into source code. That's not what IRBs are qualified to evaluate.
4 replies →
This reminds me of a few passages in the SSC post on IRBs[0].
Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?
All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.
All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.
If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, Code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and be able to mass-revert contributions from contributors later determined to be actively malicious.
[0] https://slatestarcodex.com/2017/08/29/my-irb-nightmare/
I guess I have a different perspective. I know a fair number of world class scientists; like, the sort of people you end up reading about as having changed the textbook. One of these people, a well-known bacteriologist, brought his intended study to the IRB for his institution (UC Boulder), who said he couldn't do it because of various risks due to studying pathogenic bacteria. The bacteriologist, who knew far more about the science than the IRB, explained everything in extreme detail and batted away each attempt to shut him down.
Eventually, the IRB, unhappy at his behavior, said he couldn't do the experiment. He left for another institution (UC San Diego) immediately, having made a deal with the new dean to go through expedited review. It was a big loss for Boulder and TBH, the IRB's reasoning was not sound.
Communities aren’t people? What in the actual fuck is going on with this university’s IRB?!
They weren't studying the community, they were studying the patching process used by that community, which a normal IRB would and should consider to be research on a process and therefore not human research. That's how they presented it to the IRB, so it got passed, even if what they were claiming was clearly bullshit.
This research had the potential to cause harm to people despite not being human research and was therefore ethically questionable at best. Because they presented the research as not posing potential harm to real people that means they lied to the IRB, which is grounds for dismissal and potential discreditation of all participants (their post-graduate degrees could be revoked by their original school or simply treated as invalid by the educational community at large). Discreditation is unlikely, but loss of tenure for something like this is not out of the question, which would effectively end the professor's career anyway.
> This research had the potential to cause harm to people
I don't buy it, and you fail to back that claim up at all.
7 replies →
In my experience in university research, correctly portraying the ethical impact is unfortunately the burden of the researchers. Given their lack of documentation of the request for IRB exemption, the most plausible explanation in my view is that they misconstrued the impact of the research.
It seems very possible to me that an IRB wouldn't have accepted their proposed methodology if they hadn't received an exemption.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.
Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?