Comment by st_goliath
4 years ago
I have tangentially watched this debacle unfold for a while, and this particular thread has now led to heated debates on some IRC channels I'm on.
While it may be "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is ongoing could, IMO, be described as utterly reckless at best.
Two messages down in the same thread, it more or less culminates in the university's e-mail suffix being banned from several kernel mailing lists and the associated patches being removed [1], which might be an appropriate response to discourage others from similar stunts "for science".
[1] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
I'm confused. The cited paper contains this prominent section:
Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.
Are you saying that despite this, these malicious commits made it to production?
Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers—which isn't nothing by any stretch, but is a far cry from introducing bugs in production.
Are the authors lying?
> Are you saying that despite this, these malicious commits made it to production?
Vulnerable commits reached stable trees as per the maintainers in the above email exchange, though the vulnerabilities may not have been released to users yet.
The researchers themselves acknowledge in the above email exchange that the patches were accepted, so it's hard to believe that they are being honest, that they are fully aware of their ethics violations and the vulnerabilities they introduced, or that they would have prevented the patches from being released without GKH's intervention.
Ah, I must've missed that. I do see people saying patches have reached stable trees, but the researchers' own email is missing (I assume removed) from the archive. Where did you find it?
The linked patch is pointless, but does not introduce a vulnerability.
Perhaps the researchers see no harm in letting that be released.
It seems that Greg K-H has now posted a patch series with "the easy reverts" of umn.edu commits... all 190 of them. https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
The final commit in the reverted list (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally from 2018-04-30.
EDIT: Not all of the 190 reverted commits are obviously malicious:
https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandalf...
https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.su...
https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3b...
https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
What a mess these guys have caused.
They aren't lying, but their methods are still dangerous, despite their implying the contrary. Their approach requires perfection from both the submitter and the reviewer.
The submitter has to remember to send the "warning, don't apply patch" mail in the short time window between confirmation and merging. What happens if one of the students doing this work gets sick and misses some days of work, withdraws from the program, or just completely forgets to send the mail?
What if the reviewer doesn't see the mail in time or it goes to spam?
GKH, in that email thread, did find commits that made it to production; most likely the authors just weren't following up very closely.
> Are the authors lying?
In short, yes. Every attempted defense of them has operated by taking their statements at face value. Every position against them has operated by showing the actual facts.
This may be shocking, but there are some people in this world who rely on other people naively believing their version of events, no matter how much it contradicts the rest of reality.
Even if they didn't, they wasted the community's time.
I think they are saying that it's possible that some code was branched and used elsewhere, or simply compiled into a running system by a user or developer.
Agreed on the time issue, as I noted above. I think it's still a very different class of cost from actually allowing malicious code to make it into production, but (as you note) it's hard to be sure that this would not make it into some non-standard branch as well, so there are real risks in this approach.
Anyway, my point wasn't that this is free of ethical concerns, but it seems like they put _some_ thought into how to reduce the potential harm. I'm undecided if that's enough.
This is one of the commits that went live with a "built-in bug", according to Leon:
https://github.com/torvalds/linux/commit/8e949363f017
I'm not convinced. Yes, there's a use-after-free (since fixed), but it was there before the patch too.
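For anyone unfamiliar with the bug class, here is a minimal, self-contained userspace sketch of a use-after-free. It is purely illustrative (made-up struct and field names) and is not the code from that commit:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct conn {
        char name[16];
        int  refs;
    };

    int main(void)
    {
        struct conn *c = malloc(sizeof(*c));
        if (!c)
            return 1;
        strcpy(c->name, "eth0");
        c->refs = 1;

        free(c);                 /* the object is released here...               */
        printf("%s\n", c->name); /* ...but read again afterwards: use-after-free */
        return 0;
    }

The freed memory will often still hold the old contents, which is exactly why this kind of bug can go unnoticed until someone arranges for that memory to be reused.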
The particular patches being complained about seem to be subsequent work by someone on the team that wrote that paper, but submitted after the paper was published, i.e., follow-up work.
'race conditions' like this one are inherently dangerous.
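To illustrate the general class (this is not the kernel code in question), here is a minimal userspace data race: two threads update a shared counter without any locking, so the final value is usually lower than expected. Compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static long counter;           /* shared state, no lock protecting it */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;             /* non-atomic read-modify-write: the race */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected 2000000, got %ld\n", counter);
        return 0;
    }

Whether and how badly it misbehaves depends on timing, which is part of what makes races so hard to spot in review.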
> While it may be "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is ongoing could, IMO, be described as utterly reckless at best.
I agree. I would say this is kind of a "human process" analog of your typical computer security research, and that this behavior is akin to black hats exploiting a vulnerability. Totally not OK as research, and totally reckless!
Yep. To take a physical-world analogy: would it be okay to try to prove the vulnerability of a country's water supply by intentionally introducing a "harmless" chemical into the treatment works, without the consent of the works' owners? Or would that be a go-directly-to-jail sort of experiment?
I share the researchers' intellectual curiosity about whether this would work, but I don't see how a properly-informed ethics board could ever have passed it.
https://www.theonion.com/reporters-expose-airport-security-l...
The US Navy actually did basically this with some pathogens in the 1950s: https://en.wikipedia.org/wiki/Operation_Sea-Spray ; the idea of "ethical oversight" was not something a lot of scientists operated under in those days.
> Would it be okay to try and prove the vulnerability of a country's water supply by intentionally introducing a "harmless" chemical into the treatment works, without the consent of the works owners?
If you truly want to make this comparison, the question should also be due to whose negligence they gained access to the "water supply" in the first place.
Out of interest, is there some sort of automated way to test this weak link that is human trust? (I understand how absurd this question is.)
It's awfully scary to think about how vulnerabilities might be purposely introduced into this important code base (as well as many others), only to be exploited at a later date for an intended purpose.
Edit: NM, see st_goliath's response below:
https://news.ycombinator.com/item?id=26888538
I assume that having these go into production could make the authors "hackers" according to law, no?
Haven't whitehat hackers doing unsolicited pen-testing been prosecuted in the past?
Are there any measures being discussed that could make such attacks harder in future?
Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
The whole idea of the mailing list based submission process is that it allows others on the list to review your patch sets and point out obvious problems with your changes and discuss them, before the maintainer picks the patches up from the list (if they don't see any problem either).
As I pointed out elsewhere, there are already test farms and static analysis tools in place. On some MLs you might occasionally see auto-generated mails saying that your patch set does not compile under configuration such-and-such, or that the static analysis bot found an issue. This is already a thing.
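To give a concrete, made-up example of what that automation catches: a missing NULL check like the one below is the sort of mechanical mistake that static analyzers (e.g. GCC's -fanalyzer, or the checkers run against kernel patches) will typically flag as a possible NULL dereference.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper, not from any real patch: the malloc() result is
     * used without a NULL check, so memcpy() may dereference a NULL pointer. */
    char *dup_buffer(const char *src, size_t len)
    {
        char *dst = malloc(len);
        /* missing: if (!dst) return NULL; */
        memcpy(dst, src, len);
        return dst;
    }

That sort of thing is caught mechanically. A patch that compiles cleanly, passes the bots and merely looks like a plausible fix is a different problem entirely.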
What happened here is basically a con in the patch review process. IRL, con men can scam their marks because most people assume, when they leave the house, that the majority of the people outside aren't out to get them. That works until they run into someone for whom the assumption doesn't hold, and they end up parted from their money.
For the paper, bypassing the review step worked for some of the many patches they submitted because a) humans aren't perfect, and b) reviewers work with the mindset that, most of the time, people submitting bug fixes do so in good faith.
Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default, jump to the assumption that they are evil, lying, and trying to sneak a subtle bug into your code?
Yes, I know that this review process isn't perfect, that there are problems and I'm not trying to dismiss any concerns.
But what technical measure would you propose that can effectively stop con men?
> Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
Yes, especially for critical projects?
> Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?
I don’t jump to the conclusion that the random contributor is evil. I do however think about the potential impact of the submitted patch, security or not, and I do assume a random contributor can sneak in subtle bugs, usually not intentionally, but simply due to a lack of understanding.
> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
I’m not a maintainer but naively I would have thought that the answer to this is “Yes”.
I didn’t mean any disrespect. I didn’t write “I can’t believe they haven’t implemented a perfect technical process that fully prevents these attacks”.
I just asked if there are any ideas being discussed.
Two things can be true at the same time: 1. What the “researchers” did was unethical. 2. They uncovered security flaws.
> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
Do the game theory. If you do assume that, you'll always be wrong. But if you don't assume it, you won't always be right.
Force the university to take responsibility for screening their researchers. I.e., a blanket ban, a scorched-earth approach punishing the entire university's reputation, is a good start.
People want to claim these are lone rogue researchers and that the good people at the university shouldn't be punished, but this is the only way you can rein in these kinds of rogue individuals: by putting the collective reputation of the whole university on the line, so that it polices its own people. Every action of an individual researcher must be assumed to put the reputation of the university as a whole on the line. This is the cost of letting individuals operate within the sphere of the university.
Harsh, "overreaction" punishment is the only solution.
The only real fix for this is to improve tooling and/or programming language design to make these kinds of exploits more difficult to slip past maintainers. Lots of folks are working in that space (see the recent discussion around Rust), but it's only becoming a priority now that we're seeing the impact of decades of zero consideration for security. It'll take a while to steer this ship in the right direction, and in the meantime the world continues to turn.
The University and researchers involved are now default-banned from submitting.
So yes.
If they're public IRC channels, do you mind mentioning them here? I'm trying to find the remnant. :)