Comment by md_
4 years ago
I'm confused. The cited paper contains this prominent section:
> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.
Are you saying that despite this, these malicious commits made it to production?
Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers—which isn't nothing by any stretch, but is a far cry from introducing bugs in production.
Are the authors lying?
>Are you saying that despite this, these malicious commits made it to production?
Vulnerable commits reached stable trees as per the maintainers in the above email exchange, though the vulnerabilities may not have been released to users yet.
The researchers themselves acknowledge in the above email exchange that the patches were accepted, so it's hard to believe that they're being honest, that they're fully aware of their ethics violations and the vulnerabilities they introduced, or that they would've prevented the patches from being released without GKH's intervention.
Ah, I must've missed that. I do see people saying patches have reached stable trees, but the researchers' own email is missing (I assume removed) from the archive. Where did you find it?
It's deleted, so I was going off of the quoted text in Greg's response, which shows their patches being submitted without any caveat along the lines of "don't let this reach stable".
I trust Greg to have not edited or misconstrued their response.
https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
The linked patch is pointless, but does not introduce a vulnerability.
Perhaps the researchers see no harm in letting that be released.
The linked one is harmless (well, it introduces a race condition, which is inherently harmful to leave in the code, but for the sake of argument we can pretend it can't lead to a vulnerability; see the sketch below for why I don't find that reassuring), but the maintainers mention vulnerabilities of various severity in other patches that managed to reach stable. If the researchers were not aware of the severity of their patches, then clearly they needed to be working with a maintainer experienced with security vulnerabilities who would help prevent harmful patches from reaching stable.
It might be less intentionally harmful if we presume they didn't know the other patches introduced vulnerabilities, but that's also why this research methodology is extremely reckless and frustrating to read about: it could have been done with guard rails where they were needed, without impacting the integrity of the results.
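To make that concrete, here is a generic userspace sketch I made up (not the code in the linked patch; the names, the strdup'd string, and the usleep are only there to make the window visible) of how a check-then-use race on a shared pointer quietly becomes a use-after-free:

    /* Made-up illustration of a race turning into a use-after-free.
     * NOT the code from the linked patch. Build with: cc -pthread race_uaf.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static char *shared;                /* deliberately unsynchronized shared pointer */

    static void *teardown(void *arg)
    {
        (void)arg;
        free(shared);                   /* one path decides the buffer is done with */
        shared = NULL;
        return NULL;
    }

    static void *reader(void *arg)
    {
        (void)arg;
        char *p = shared;               /* check... */
        if (p != NULL) {
            usleep(1000);               /* widen the race window for the demo */
            /* ...then use: if teardown() freed the buffer in between, p is
             * dangling and this read is a use-after-free. */
            printf("%s\n", p);
        }
        return NULL;
    }

    int main(void)
    {
        shared = strdup("still alive?");
        pthread_t t1, t2;
        pthread_create(&t1, NULL, reader, NULL);
        pthread_create(&t2, NULL, teardown, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Whether any particular window like this is actually exploitable depends on the surrounding code, which is why "it's only a race" isn't very reassuring on its own.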
It seems that Greg K-H has now posted a patch series of "the easy reverts" of umn.edu commits... all 190 of them. https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
The final commit in the reverted list (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally from 2018-04-30.
EDIT: Not all of the 190 reverted commits are obviously malicious:
https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandalf...
https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.su...
https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3b...
https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
What a mess these guys have caused.
They aren't lying, but their methods are still dangerous, despite their implying the contrary. Their approach requires perfection from both the submitter and the reviewer.
The submitter has to remember to send the "warning, don't apply patch" mail in the short time window between confirmation and merging. What happens if one of the students doing this work gets sick and misses some days of work, withdraws from the program, or just completely forgets to send the mail?
What if the reviewer doesn't see the mail in time or it goes to spam?
GKH, in that email thread, did find commits that made it to production; most likely the authors just weren't following up very closely.
> Are the authors lying?
In short, yes. Every attempted defense of them has operated by taking their statements at face value. Every position against them has operated by showing the actual facts.
This may be shocking, but there are some people in this world who rely on other people naively believing their version of events, no matter how much it contradicts the rest of reality.
Even if they didn't, they still wasted the community's time.
I think they are saying that it's possible that some code was branched and used elsewhere, or simply compiled into a running system by a user or developer.
Agreed on the time issue, as I noted above. I think it's still a very different kind of cost from actually allowing malicious code to make it to production, but (as you note) it's hard to be sure that this wouldn't make it into some non-standard branch as well, so there are real risks in this approach.
Anyway, my point wasn't that this is free of ethical concerns, but it seems like they put _some_ thought into how to reduce the potential harm. I'm undecided if that's enough.
> I'm undecided if that's enough.
I don't think it's anywhere close to enough and I think their behavior is rightly considered reckless and unethical.
They should have contacted the leadership of the project to announce to maintainers that anonymous researchers might experiment on the contribution process, allowed maintainers to opt out, and worked with a separate maintainer with knowledge of the project to ensure that harmful commits were tracked and reverted before they reached stable branches.
Instead, their lack of ethical consideration throughout this process has been disappointing and harmful to the scientific and open source communities, and it goes beyond the nature of the research itself: they previously received an IRB exemption by classifying this as non-human research, potentially misleading UMN about the subject matter and its impact.
This is one of the commits that went live with a "built-in bug", according to Leon:
https://github.com/torvalds/linux/commit/8e949363f017
I'm not convinced. Yes, there's a use after free (since fixed), but it's there before the patch too.
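For anyone who hasn't looked at this bug class before, a use-after-free of this general shape looks something like the following. This is a made-up userspace sketch, not the code from that commit; the struct, the helper names, and the always-failing error path are all invented for illustration:

    /* Hypothetical sketch of the use-after-free bug class.
     * NOT the code from commit 8e949363f017. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ctx {
        char *name;
    };

    /* Stand-in for whatever registration step real code might do;
     * it always fails here so the error path below runs. */
    static int register_thing(struct ctx *ctx)
    {
        (void)ctx;
        return -1;
    }

    static int do_setup(struct ctx *ctx)
    {
        if (register_thing(ctx) < 0) {
            free(ctx->name);   /* looks like a dutiful cleanup... */
            free(ctx);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct ctx *ctx = malloc(sizeof(*ctx));
        if (!ctx)
            return 1;
        ctx->name = strdup("demo");

        if (do_setup(ctx) < 0) {
            /* ...but the caller still dereferences ctx: a use-after-free,
             * and the frees below are double frees on top of it. */
            fprintf(stderr, "setup failed for %s\n", ctx->name);
            free(ctx->name);
            free(ctx);
            return 1;
        }
        free(ctx->name);
        free(ctx);
        return 0;
    }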
The particular patches being complained about seem to be subsequent work by someone on the team that wrote that paper, submitted after the paper was published, i.e., follow-up work.
"Race conditions" like this one are inherently dangerous.