Comment by kdbg
4 years ago
I don't think there have been any recent comments from anyone at U.Mn. But back when the original research happened (last year), the following clarification was offered by Qiushi Wu and Kangjie Lu, which at least paints their research in a somewhat better light: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
That said, the current incident seems to go beyond the limits of that one and is a new incident. I just thought it would be fair to include their "side".
From their explanation:
(3). We send the incorrect minor patches to the Linux community through email to seek their feedback.
(4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
------------------------
But this shows a distinct lack of understanding of the problem:
> This is not ok, it is wasting our time, and we will have to report this,
> AGAIN, to your university...
------------------------
You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:
1. The voluntary consent of the human subject is absolutely essential.
Holy cow!! I'm a researcher and don't understand how they thought it would be okay to not do an IRB, and how an IRB would not catch this. The linked PDF by the parent post is quite illustrative. The first few paragraphs seem to be downplaying the severity of what they did (they did not introduce actual bugs into the kernel), but that is not the bloody problem. They experimented on people (maintainers) without consent and wasted their time (with maybe other effects too, e.g. making them wary of future commits from universities)! I'm appalled.
It's not _the_ problem, but it's an actual problem. If you follow the thread, it seems they did manage to get a few approved:
https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...
I agree this whole thing paints a really ugly picture, but it seems to validate the original concerns?
3 replies →
They did go to the UMN IRB per their paper and received a human subjects exempt waiver.
Edit: I am not defending the researchers who may have misled the IRB, or the IRB who likely have little understanding of what is actually happening
The irony is that the IRB process failed in the same way that the commit review process did. We're just missing the part where the researchers tell the IRB it was wrong immediately after submitting their proposal for review.
IRB review: "Looks good!"
2 replies →
If you actually read the PDF linked in this thread:
* Is this human research? This is not considered human research. This project studies some issues with the patching process instead of individual behaviors, and we did not collect any personal information. We send the emails to the Linux community and seek community feedback. The study does not blame any maintainers but reveals issues in the process. The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained).
Do IRBs typically have a process by which you can file a complaint from outside the university? Maybe they never thought they would need to even check up on computer science faculty...
> You do not experiment on people without their consent.
Exactly this. Research involving human participants is supposed to have been approved by the University's Institutional Review Board; the kernel developers can complain to it: https://research.umn.edu/units/irb/about-us/contact-us
It would be interesting to see what these researchers told the IRB they were doing (if they bothered).
Edited to add: From the link in GP: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)"
Okay so this IRB needs to be educated about this. Probably someone in the kernel team should draft an open letter to them and get everyone to sign it (rather than everyone spamming the IRB contact form)
According to their website[0]:
> IRB exempt was issued
[0]: https://www-users.cs.umn.edu/~kjlu/
6 replies →
In any university I've ever been to, this would be a gross violation of ethics with very unpleasant consequences. Informed consent is crucial when conducting experiments.
If this behaviour is tolerated by the University of Minnesota (and it appears to be so) then I suppose that's another institution on my list of unreliable research.
I do wonder what the legal consequences are. Would knowingly and willfully introducing bad code constitute a form of vandalism?
>>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.
from Lu's list of publications at https://www-users.cs.umn.edu/~kjlu/
Seems like a conference presentation at IEEE at minimum?
5 replies →
IANAL. In addition to possibly causing the research paper to be retracted due to the ethical violation, I think there is potentially civil or even criminal liability here. US law on hacking is known to be quite vague (see Aaron Swartz's case, for example).
> You do not experiment on people without their consent.
Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?
From a common sense standpoint, it seems to me this is more about medical experiments. Yesterday I put some of my kids' toys away without telling them, to see if they'd notice and still play with them. I don't think I need IRB approval.
IRB (as in Institutional Review Board) is a local (as in each research institution has one) regulatory board that ensures that any research conducted by people employed by the institution follows the federal government's common rule for human subject research. Most institutions receiving federal funding for research activities have to show that the funded work follows common rule guidelines for interaction with human subjects.
It is unlikely that a business conducting A/B testing or a parent interacting with their children is receiving federal funds to support it. Therefore, their work is not subject to IRB review.
Instead, if you are a researcher who is funded by federal funds (even if you are doing work on your own children), you have to receive IRB approval for any work involving human interaction before you start conducting it.
> wouldn’t every single A/B test done by a product team be considered unethical?
Potentially yes, actually.
I still think it should be possible to run some A/B tests, but a lot depends on the underlying motivation. The distance between such tests and malicious psychological manipulation can be very, very small.
> it seems to me this is more about medical experiments
Psychology and sociology are both subject to the IRB as well.
Regardless of their department, this feels like a psychology experiment.
2 replies →
> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?
I would argue that ordinary A/B tests, by their very nature, are not "experiments" in the sense that restriction is intended for, so there is no reason for them to be considered unethical.
The difference between an A/B test and an actual experiment that should require the subjects' consent is that either of the test conditions, A or B, could have been implemented ordinarily as part of business as usual. In other words, neither A nor B by themselves would need a prior justification as to why they were deployed, and if the reasoning behind either of them was to be disclosed to the subjects, they would find them indistinguishable from any other business decision.
Of course, this argument would not apply if the A/B test involved any sort of artificial inconvenience (e.g. mock errors or delays) applied to either of the test conditions. I only mean A/B tests designed to compare features or behaviours which could both legitimately be considered beneficial, but the business is ignorant of which.
> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?
Assuming this isn't being asked as a rhetorical question, I think that's exactly what turned the now infamous Facebook A/B test into a perceived unethical mass manipulation of human emotions. A lot of folks are now justifiably upset and skeptical of Facebook (and big tech) as a result.
So to answer your question: yes, if that test moves into territory that would feel like manipulation once the subject is aware of it. Maybe especially so because users are conceivably making a /choice/ to use said product and may switch to an alternative (or simply divest) if trust is lost.
It should be for all science done for the sake of science, not just medical work. When I did experiments that just involved people playing an existing video game I still had to get approval from IRB and warn people of all the risks that playing a game is associated with (like RSI, despite the gameplay lasting < 15 minutes).
Researchers at a company could arguably be deemed to be engaging in unethical research and barred from contributing to the scientific community because of that behavior. Even doing experiments on your kids may be deemed to cross the line.
The question I have is: when does it apply? If you research on your own kids but never publish, is it okay? Does the act of attempting to publish results retroactively make an experiment unethical? I'm not certain these things have been worked out, because of how rarely people try to publish anything that wasn't part of an official experiment.
It does seem rather unethical, but I must admit that I find the topic very interesting. They should definitely have asked for consent before starting with the "attack", but if they did manage to land security vulnerabilities despite the review process it's a very worrying result. And as far as I understand they did manage to do just that?
I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.
“Hey, we are going to submit some patches that contain vulnerabilities. All right?”
If they do so, the maintainers become more vigilant and the experiment fails. But the key to the experiment is that maintainers are not as vigilant as they should be. It's not an attack on the maintainers, though, but on the process.
18 replies →
They apparently didn't consider this "human research"
As I understand it, any "experiment" involving other people that weren't explicitly informed of the experiment before hand needs to be a lot more carefully considered than what they did here.
Makes sense considering how open source people are treated.
In this post they say the patches come from a static analyser, and they accuse the other person of slander for their criticisms:
> I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.
> These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.
( https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )
How does that fit in with your explanation?
>I sent patches on the hopes to get feedback
They did not say that they were hoping for feedback on their tool when they submitted the patch; they lied about their code doing something it does not.
>How does that fit in with your explanation?
It fits the narrative of submitting hypocrite commits to the project.
1 reply →
From GKH's response, which you linked:
> (3). We send the incorrect minor patches to the Linux community through email to seek their feedback.
Sounds like they knew exactly what they were doing.
It’s a lie, that’s how it fits.
> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:
> 1. The voluntary consent of the human subject is absolutely essential.
The Nuremberg code is explicitly about medical research, so it doesn't apply here. More generally, I think that the magnitude of the intervention is also relevant, and that an absolutist demand for informed consent in all - including the most trivial - cases is quite silly.
Now, in this specific case I would agree that wasting people's time is an intervention that's big enough to warrant some scrutiny, but the black-and-white way of some people to phrase this really irks me.
PS: I think people in these kinds of debate tend to talk past one another, so let me try to illustrate where I'm coming from with an experiment I came across recently:
To study how the amount of tips waiters get changes in various circumstances, some psychologists conducted an experiment where the waiter would randomly either give the guests some chocolate with the bill, or not (control condition).[0] This is, of course, perfectly innocuous, but an absolutist claim about research ethics ("You do not experiment on people without their consent.") would make research like this impossible, without any benefit.
[0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1816...
But this is all a lie. If you read the linked thread you will see that they refused to admit to their experiment and even sent a new, differently broken patch.
Yeah, it is a bit disrespectful to the kernel maintainers to do this without gaining their approval ahead of time.
Disrespecting some programmers on the internet is, while not nice, also not a high crime.
There is sometimes an exception for things like interviews when n is only a couple of people. This was clearly unethical, and it's certain that at least some of those involved knew that. It's common knowledge at universities.
I'm confused - how is this an experiment on humans? Which humans? As far as I can tell, this has nothing to do with humans, and everything to do with the open-source review process - and if one thinks that it counts as a human experiment because humans are involved, wouldn't that logic apply equally to pentesting?
For that matter, what's the difference between this and pentesting?
Penetration testing is only ethical when you are hired by the organization you are testing.
Also, IRB review is only for research funded by the federal government. If you’re testing your kid’s math abilities, you’re doing an experiment on humans, and you’re entirely responsible for determining whether this is ethical or not, and without the aid of an IRB as a second opinion.
Even then, successfully getting through the IRB process doesn't guarantee that your study is ethical, only that it isn't egregiously unethical. I suspect that if this researcher got IRB approval, then the IRB didn't realize that these patches could end up in a released kernel. This would adversely affect the users of billions of Linux machines worldwide. Wasting half an hour of a reviewer's time is not a concern by comparison.
Consent!
Usually, when an organization is pen-tested, it has consented to being pen-tested (and likely even requested it).
Here there was no contact with the Linux Foundation to gain consent for the experiment.
> indicating “looks good”
I wonder how many zero days have been included already, for example by nation state actors...
You could argue that they are doing the maintainers a favor. Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.
If I were on the receiving end, I'd think about checking a patch multiple times before accepting it.
I'm sure that they thought this. But this is a bit like doing unsolicited pentests or breaking the locks on somebody's home at night without their permission. If people didn't ask for it and consent, it is unethical.
And further, pretty much everybody knows that malicious actors - if they tried hard enough - would be able to sneak through hard to find vulns.
> Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.
And this is anything new?
And if I hit you over the head with a hammer while you are not expecting it, does this prove anything other than that I am a thug? Does it help you? Honestly?
>You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:
>1. The voluntary consent of the human subject is absolutely essential.
Does this also apply to scraping people's data?
> You do not experiment on people without their consent.
By this logic, e.g., resume callback studies aiming to study bias in the workforce would be impossible.
> This is in fact the very FIRST point of the Nuremberg code
Stretch Armstrong over here.
In the last year, when it came to experimental Covid-19 projections, modeling, and population-wide recommendations from major academic centers, the IRBs were silent and academics did essentially whatever they wanted, regardless of "consent" from the populations that were the subjects of their speculative hypotheses.
Meh, this means a lot of viral social experiments on Youtube violate the Nuremberg code...
Yes and?
This isn't a "gotcha" - people shouldn't do this.
2 replies →
> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:
> 1. The voluntary consent of the human subject is absolutely essential.
Which is rather useless, as for many experiments to work, participants have to either be lied to, or kept in the dark as to the nature of the experiment, so whatever “consent” they give is not informed consent. They simply consent to “participate in an experiment” without being informed as to the qualities thereof so that they truly know what they are signing up for.
Of course, it's quite common in the U.S.A. to perform practice medical checkups on patients who are going under narcosis for an unrelated operation, and they never consented to that, but the hospitals and physicians that partake in that are not sanctioned, as it's "tradition".
Know well that so-called “human rights” have always been, and shall always be, a show of air that lack substance.
> quite common in the U.S.A. to perform practice medical checkups on patients who are going under narcosis for an unrelated operation
Fascinating. Can you provide links?
1 reply →
Their first suggestion for the process is pure gold: "OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”"
Like somebody picking your locks and then suggesting, 'to stop this, one approach would be to post a "do not pick" sign'.
The sign is to remind honest people that the lock is important, and we do not appreciate game playing here.
Honest people don’t see a lock and think, “Ok, they don’t want me going in there, but I bet they would appreciate some free pentesting.”
It is ok to put up the sign. But it's not for the person who transgressed to suggest 'why don't you put up a sign'.
The fact that they took the feedback last time and decided "lets do more of that" is already a big red flag.
>>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.
from https://www-users.cs.umn.edu/~kjlu/
If the original research results in a paper and an IEEE conference presentation, why not? There are no professional consequences for this conduct, apparently.
Given that this conference hasn't happened yet, there should still be time for the affected people to report the inappropriate conduct to the organizers and possibly get the paper pulled.
4 replies →
If this is actually presented, someone present should also make the following clear: "As a result of the methods used by the presenters, the entire University of Minnesota system has been banned from the kernel development process and the kernel developers have had to waste time going back and re-evaluating all past submissions from the university system. The kernel team would also like to advise other open-source projects to carefully review all UMN submissions in case these professors have simply moved on to other projects."
I just wanted to highlight that S&P/Oakland is one of the top 3 or 4 security conferences in the security community in academia. This is a prestigious venue lending its credibility to this paper.
1 reply →
The guy is still putting the blame on the Kernel project for not having a "don't submit bugs" clause.[1]
And insists it was not human research. [1]
How can people like this be professors?
[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
This does paint their side better, but it also makes me wonder if they're being wrongly accused of this current round of patches? That clarification says that they only submitted 3 patches, and that they used a random email address when doing so (so presumably not an @umn.edu address).
These ~200 patches from UMN being reverted might have nothing to do with these researchers at all.
Hopefully someone from the university clarifies what's happening soon before the angry mob tries to eat the wrong people.
The study you’re quoting was a previous study by the same research group, from last year.
they are mentally retarded