
Comment by tcelvis

4 years ago

I guess what I am trying to get at is that this researcher's action does have its merit. This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step to hardening the kernel review process to prevent this kind of harm from happening again.

What I strongly disapprove of is that the researcher apparently took no steps to prevent the real-world consequences of malicious patches getting into the kernel. I think the researcher should:

- Notify the kernel community promptly once a malicious patch gets past all review processes.

- Time these actions well so that malicious patches won't get into a stable branch before they can be reverted.

----------------

Edit: reading the paper provided above, it seems they did take both of those steps. From the paper:

> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

So, unless the kernel maintenance team has another side of the story, the questions of ethics can only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.
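(Aside, since the quote leans on the term: a use-after-free, or UAF, is a bug where code keeps dereferencing memory after it has been freed, typically through a second pointer to the same allocation. Below is a minimal, made-up C sketch of the pattern - only an illustration of the bug class, not code from the actual patches:)

    /* Minimal use-after-free sketch; illustrative only, not from the patches. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
        char name[32];
    };

    int main(void)
    {
        struct session *s = malloc(sizeof(*s));
        if (!s)
            return 1;
        strcpy(s->name, "alice");

        struct session *alias = s;   /* a second pointer to the same allocation */
        free(s);                     /* an innocent-looking "cleanup" free... */

        /* ...but the aliased pointer is still used afterwards. The memory may
         * already be reused, so this read is undefined behavior (the UAF). */
        printf("%s\n", alias->name);
        return 0;
    }

In a large patch series, the free and the later use can sit far apart, which is what makes such conditions easy to miss in review.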

That paper came out a year ago, and they got a lot of negative feedback about it, as you might expect. Now they appear to be doing it again. It’s a different PhD student with the same advisor as last time.

This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.

https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

  • > Now they appear to be doing it again. It’s a different PhD student with the same advisor as last time.

    I'd hate to be the PhD student who wastes half a dozen years of his/her life writing a document on how to sneak buggy code through a code review.

    More than being pointless and boring, it's a total CV black hole. It's the worst of both worlds: zero professional experience to show for it, and zero academic portfolio either.

We threw people off buildings to gauge how they would react, but were able to catch all 3 subjects in a net before they hit the ground.

Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.

  • Strangers submitting patches to the kernel is completely normal, whereas throwing people off is not. A better analogy would involve decades of examples of bad actors throwing people off the bridge, then being surprised when someone who appears friendly does it.

    • Your analogy also isn't the best because it heavily suggests the nefarious behavior is easy to identify (throwing people off a bridge). This is more akin to people helping those in need to cross a street. At first, it is just people helping people. Then, someone comes along and starts to "help" so that they can steal money (introduce vulnerabilities) from the unsuspecting targets. Now, the street-crossing community needs to introduce processes (code review) to look out for these bad actors. Then, someone who works for the city and is wearing the city uniform (University of Minnesota CS department) comes along saying they're here to help, and the community is a bit more trustful as they have dealt with other city workers before. The city worker then steals from the people in need and proclaims "Aha, see how easy it is!" No one is surprised, and everyone just thinks they are assholes.

      Sometimes, complex situations don't have simple analogies. I'm not even sure mine is 100% correct.

    • While submitting patches is normal, submitting malicious patches is abnormal and antisocial. Certainly bad actors will do it, but by that logic these researchers are bad actors.

      Just like bumping into somebody on the roof is normal, but you should always be aware that there’s a chance they might try to throw you off. A researcher highlighting this fact by doing it isn’t helping, even if they mitigate their damage.

      A much better way to show what they are attempting to demonstrate would be to review historic commits and try to find places where malicious code slipped through, and how the community responded. Or to solicit participants to follow normal processes on a fake code base for a few weeks.

    • > Strangers submitting patches to the kernel is completely normal, whereas throwing people off is not.

      Strangers submitting patches might be completely normal.

      Malicious strangers trying to sneak in vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal. At all.

      There are far more news reports of deranged people pushing strangers into traffic or in front of subways and trains than there are reports of malicious actors trying to sneak in vulnerable patches.


  • We damaged the brake cables mechanics were installing in people's cars to find out if they were really inspecting them properly prior to installation!

To add... Ideally, they should have looped in Linus, or someone high-up in the chain of maintainers, before running an experiment like this. Their actions might have been in good faith, but the approach they took (including the email claiming slander) is seriously irresponsible and a surefire way to wreck relations.

  • Greg KH is "someone high-up in the chain." I remember submitting patches to him over 20 years ago. He is one of Linus's trusted few.

    • Yes, and the crux of the problem is that they didn’t get assent/buy-in from someone like that before running the experiment.

> This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community.

The limits of code review are quite well known, so it is very questionable what scientific knowledge is actually gained here. (Indeed, especially because of the known limits, you could very likely show them without misleading people, because even people who know to be suspicious are likely to miss problems, if you really wanted to run a formal study on some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process.)

  • > The limits of code review are quite well known

    That's factually incorrect. The arguments over what constitutes a proper code review continue to this day, with few comprehensive studies about syntax, much less code reviews - not "do you have them" or "how many people", but methodology.

    > it appears very questionable what scientific knowledge is actually gained here

    The knowledge doesn't come from the study existing, but from the analysis of the data collected.

    I'm not even sure why people are upset at this, since it's a very modern approach to investigating how many projects are structured to this day. This was a daring and practical effort.

> The questions of ethics can only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.

Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.

Slippery slope, my friend.

  • Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

    > Under that logic, it's ok for me to run a pen test against your computers, right?

    I think the standard for an individual user should be different from that for the organization that is, in the end, responsible for the security of millions of those individual users. One annoys one person; the other prevents millions from being annoyed.

    Donate to your open source projects!

    • > Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

      They could discuss the idea and then perform the test months later? With the number of patches that had to be reverted as a precaution, the test would have been well hidden in the usual workload even if the maintainers knew that someone at some point in the past had mentioned the possibility of a pen test. How long can the average human stay vigilant if you tell them they will be robbed some day this year?

    • That's why for pen testing, you still warn people, but you do it high enough up the chain that individual behaviors and responses are not affected.

I wouldn't put it past them to have a second unpublished paper, for the "we didn't get caught" timeline.

It would give the University some notoriety to be able to claim "We introduced vulnerabilities into Linux". It would put them on good terms with possible proprietary software sponsors, and the military.