Comment by tcelvis
4 years ago
Putting aside the ethical questions about the researcher's conduct, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches have gone through the most rigorous review process. If you think the risk is high that malicious patches from this person have gotten in, it means that an unknown attacker deliberately constructing a complex kernel loophole would have an even higher chance of getting patches in.
While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
> This "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.
If you had a party in your house, and a guest you didn't know, whom you invited in assuming good faith, turned out to have deliberately pooped on the rug in your spare guest room while nobody was looking... next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never leave this guest alone? Or simply deny entrance, so that you can focus on having fun with people you trust and newcomers who have not shown any malicious intent?
I know what I'd do. Life is too short for BS.
> Why should they waste their time with extra scrutiny next time?
Because well-funded malicious actors (government agencies, large corporations, etc.) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.
As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.
We don't have the full communication, and I understand that the intention is to be stealthy (why use a university email that can be linked to the previous research, then?). However, the researcher's response seems disingenuous:
> I sent patches on the hopes to get feedback. We are not experts in the Linux kernel and repeatedly making these statements is disgusting to hear.
This is after they were caught. Why continue lying instead of apologizing and explaining? Is the lying also part of the experiment?
On top of that, they played the victim card; you can see why people would be triggered by this level of dishonesty:
> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies
They should not have experimented on human subjects without consent, regardless of whether the result is considered benign.
Yes, malicious actors have a head start, because they don't care about the rules. That doesn't mean we should all abandon the rules and compete with malicious actors in this race to the bottom.
> As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.
This "attack" did not reveal anything interesting. It's not like any of this was unknown. Of course you can get backdoors in if you try hard enough. That does not surprise anybody.
Imagine somebody goes with an axe, breaks your garage door, poops on your Harley, leaves, and then calls you and tells you "Oh, btw, it was me. I did you a service by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of your property. Thank me later." And then they expect to be let in the next time you have a party.
It doesn't work that way. Of course the garage door can be broken with an axe. You don't need a "mildly sophisticated attack" to illustrate that while wasting everybody's time.
You’re completely right, except in this case it’s banning anyone who happened to live in the same house as the offender, at any point in time...
By keeping the paper, UMN is benefiting (in citations and research result count). Universities are supposed to have processes for punishing unethical research. Unless the University retracts the paper and fires the researcher involved, they have not made amends.
IP bans often result in banning an entire house.
"It was my brother on my unsecured computer" is an excuse I've heard a few times by people trying to shirk responsibility for their ban-worthy actions.
Geographic proximity to bad actors is sometimes enough to get caught in the crossfire. While it might be unfair, it might also be seen as holding a community and its leadership responsible for failing to hold members of their community accountable and keep their actions in check. And, fair or not, it might also be seen as a pragmatic option in the face of limited moderation tools and time. If you have a magic wand that bans only the bad-faith contributions by the students influenced by the professor in question, I imagine the kernel devs will be more than happy to put it to use.
Is it really just the one professor, though?
No, it's not. It's banning anyone who hides behind their UMN email address, because it has now been proven that UMN.edu commits have included bad actors.
To continue the analogy, it would be like finding out that the offender’s friends knew they were going to do that and were planning on recording the results. Banning all involved parties is reasonable.
Sounds more or less like the way punishment is handled in modern society.
> The fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone is claiming that it is perfect. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and put a higher bar on reviewing it for security.
Not all kernel reviewers are being paid by their employer to review patches. Kernel reviews are "free" to the contributor because everyone operates on the assumption that every contributor wants to make Linux better by contributing high-quality patches. In this case, multiple people from the University have decided that reviewers' time isn't valuable (so it's acceptable to waste it) and that the quality of the kernel isn't important (so it's acceptable to make it worse on purpose). A ban is a completely appropriate response to this, and reverting until you can review all the commits is an appropriate safety measure.
Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.
I guess what I am trying to get at is that this researcher's actions do have some merit. This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step toward hardening the kernel review process to prevent this kind of harm from happening again.
What I strongly disapprove of is that the researcher apparently took no steps to prevent the real-world consequences of malicious patches getting into the kernel. I think the researcher should:
- Notify the kernel community promptly once a malicious patch gets past all review processes.
- Time these actions so that malicious patches won't get into a stable branch before they can be reverted.
----------------
Edit: reading the paper linked above, it seems that they did take both of these steps. From the paper:
> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.
So, unless the kernel maintenance team has another side of the story, the questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.
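(For context on the "UAF bugs" in the quoted passage: a use-after-free happens when code keeps using memory after it has been freed. Here's a minimal, hypothetical userspace C sketch of the bug class; the names are made up and it's not taken from any of the actual patches:)

    #include <stdio.h>
    #include <stdlib.h>

    struct session { int id; };

    int main(void) {
        struct session *s = malloc(sizeof(*s));
        if (!s)
            return 1;
        s->id = 42;

        free(s);          /* one code path releases the object... */

        /* ...while another path still dereferences the stale pointer.
         * This is a use-after-free: undefined behavior, and often
         * exploitable when it happens in kernel code. */
        printf("%d\n", s->id);
        return 0;
    }

Split across two innocent-looking patches (one adding the free, one adding the later use), this pattern is exactly what makes such bugs hard to catch in review.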
That paper came out a year ago, and they got a lot of negative feedback about it, as you might expect. Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.
This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.
https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
> Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.
I'd hate to be the PhD student that wastes away half a dozen years of his/her life writing a document on how to sneak buggy code through a code review.
More than being pointless and boring, it's a total CV black hole. It's the worst of both worlds: zero professional experience to show for it, and zero academic portfolio to show for it.
We threw people off buildings to gauge how they would react, but were able to catch all 3 subjects in a net before they hit the ground.
Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.
Strangers submitting patches to the kernel is completely normal, where throwing people off is not. A better analogy would involve decades of examples of bad actors throwing people off the bridge, then being surprised when someone who appears friendly does it.
We damaged the brake cables mechanics were installing into people's cars to find out if they were really inspecting them properly prior to installation!
To add... Ideally, they should have looped in Linus or someone high up in the chain of maintainers before running an experiment like this. Their actions might have been in good faith, but the approach they undertook (including the email claiming slander) is seriously irresponsible and a surefire way to wreck relations.
Greg KH is "someone high-up in the chain." I remember submitting patches to him over 20 years ago. He is one of Linus's trusted few.
> This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community.
The limits of code review are quite well known, so it is very questionable what scientific knowledge was actually gained here. (Indeed, precisely because of the known limits, you could very likely demonstrate them without misleading people, because even reviewers who know to be suspicious are likely to miss problems, if you really wanted to run a formal study on some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process.)
> The limits of code review are quite well known
That's factually incorrect. The arguments over what constitutes a proper code review continue to this day, with few comprehensive studies about syntax, much less about code reviews - not "do you have them" or "how many people" but methodology.
> it appears very questionable what scientific knowledge is actually gained here
The knowledge isn't from the study existing, but from the analysis of the data collected.
I'm not even sure why people are upset at this, since it's a very modern approach to investigating how many projects are structured to this day. This was a daring and practical effort.
> The questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.
Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.
Slippery slope, my friend.
Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.
> Under that logic, it's ok for me to run a pen test against your computers, right?
I think the standard for an individual user should be different than that for the organization who is, in the end, responsible for the security of millions of those individual users. One annoys one person, one prevents millions from being annoyed.
Donate to your open source projects!
Does experimenting on people without their knowledge or consent pose an ethical question?
Obviously.
I wouldn't put it past them to have a second unpublished paper, for the "we didn't get caught" timeline.
It would give the University some notoriety to be able to claim "We introduced vulnerabilities in Linux". It would put them on good terms with possible proprietary software sponsors, and the military.
> the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.
The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.
By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.
Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted.
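To put rough, made-up numbers on that (purely illustrative; neither figure comes from the thread or the paper): suppose 1 in 1,000 anonymous submitters is malicious, and a standard review misses 5% of deliberately obfuscated bugs.

    chance a patch is malicious:              0.001
    chance review misses an obfuscated bug:   0.05
    chance a deliberate bug survives review:  0.001 * 0.05 = 0.00005  (1 in 20,000)

    after learning the submitter belongs to a group known to
    plant bugs, the prior might jump to 0.5:  0.5 * 0.05 = 0.025      (1 in 40)

The same review that was adequate before now leaves a risk roughly 500 times higher, so a far more thorough "proper review" is needed just to return to the original assurance level.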
> it means that an unknown attacker deliberately constructing a complex kernel loophole would have an even higher chance of getting patches in.
That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.
> The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated.
Nobody can assure that.
In a perfect world, I would agree that the work of a researcher who's not an established figure in the kernel community would be met with a relatively high level of scrutiny in review.
But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.
And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a reasonable simplification to assume that all associated contributions may be tainted.
Why? Linux is not the state. There is no entitlement to rights or presumption of innocence.
Linux is set up to benefit the Linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones, and some negative ones, banning seems the right call.
It's not about fairness; it's about whether the harms outweigh the benefits.
Not only that, good faith actors who are associated with UMN can still contribute, just not in their official capacity as UMN associates (staff, students, researchers, etc).
> Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches have gone through the most rigorous review process
I think the best way to make this expectation a reality is putting in the work. The second-best way is paying. Doing neither while holding the expectation is certainly a way to exist, but it has no impact on the outcome.
> seems to suggest a lack of confidence in the kernel review process
The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.
I mean, it's the Linux kernel. Think about what it's powering and how much risk is involved with these patches. Review processes obviously aren't perfect, but usually patches aren't constructed to sneak sketchy code through. You'd usually approach a review in good faith.
Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.
> You'd usually approach a review in good faith.
> it's the Linux kernel. Think about what it's powering and how much risk is involved with these patches
Perhaps the mindset needs to change regarding security? Actual malicious actors seem unlikely to announce themselves for you.
Doesn't this basically prove the original point, that if someone or an organization wished to compromise Linux, they could do so with crafted bugs in patches?
> I've never performed any meaningful debugging or postmortem ever in my life and might not even know how to program at all.