Comment by gnfargbl

4 years ago

From https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...,

> A lot of these have already reached the stable trees.

If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.

I tangentially followed the debacle as it unfolded for a while, and this particular thread has now led to heated debates on some IRC channels I'm on.

While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on could, IMO, be described as utterly reckless at best.

Two messages down in the same thread, it more or less culminates with the university e-mail suffix being banned from several kernel mailing lists and associated patches being removed[1], which might be an appropriate response to discourage others from similar stunts "for science".

[1] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

  • I'm confused. The cited paper contains this prominent section:

    Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.

    Are you saying that despite this, these malicious commits made it to production?

    Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers, which isn't nothing by any stretch but is a far cry from introducing bugs in production. (For a rough idea of what one of these UAF-introducing "minor patches" can look like, see the sketch at the end of this comment.)

    Are the authors lying?
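
    To make the mechanism described in the quoted passage concrete, here is a purely illustrative userspace C sketch (my own, not one of the actual submissions and not real kernel code) of how a one-line "fix" on an error path can introduce the kind of use-after-free the paper describes:

        /* Illustrative only: a userspace analogue of a "minor patch"
         * that looks like a leak fix but creates a use-after-free. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct request {
            char *buf;
            size_t len;
        };

        static int process(struct request *req)
        {
            if (req->len == 0) {
                /* The "hypocrite" hunk: freeing here looks like it plugs
                 * a leak, but the caller still owns req->buf. */
                free(req->buf);
                return -1;
            }
            return 0;
        }

        int main(void)
        {
            struct request req = { .buf = malloc(16), .len = 0 };

            if (!req.buf)
                return 1;
            strcpy(req.buf, "hello");

            if (process(&req) != 0)
                fprintf(stderr, "failed: %s\n", req.buf); /* use after free */

            free(req.buf); /* and a double free */
            return 0;
        }

    Reviewed in isolation, the error-path free reads like routine cleanup, which is exactly why hunks like this are easy to wave through.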

    • >Are you saying that despite this, these malicious commits made it to production?

      Vulnerable commits reached stable trees as per the maintainers in the above email exchange, though the vulnerabilities may not have been released to users yet.

      The researchers themselves acknowledge in the above email exchange that the patches were accepted, so it's hard to believe that they're being honest, that they're fully aware of their ethics violations and the vulnerabilities they introduced, or that they would've prevented the patches from being released without gkh's intervention.

    • It seems that Greg K-H has now posted a patch series with "the easy reverts" of umn.edu commits... all 190 of them. https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

      The final commit in the reverted list (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally from 2018-04-30.

      EDIT: Not all of the 190 reverted commits are obviously malicious:

      https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandalf...

      https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.su...

      https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3b...

      https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...

      What a mess these guys have caused.

    • They aren't lying, but their methods are still dangerous despite their claims to the contrary. Their approach requires perfection from both the submitter and the reviewer.

      The submitter has to remember to send the "warning, don't apply patch" mail in the short time window between confirmation and merging. What happens if one of the students doing this work gets sick and misses some days of work, withdraws from the program, or just completely forgets to send the mail?

      What if the reviewer doesn't see the mail in time or it goes to spam?

    • GKH, in that email thread, did find commits that made it to production; most likely the authors just weren't following up very closely.

    • > Are the authors lying?

      In short, yes. Every attempted defense of them has operated by taking their statements at face value. Every position against them has operated by showing the actual facts.

      This may be shocking, but there are some people in this world who rely on other people naively believing their version of events, no matter how much it contradicts the rest of reality.

    • Even if they didn't, they wasted the community's time.

      I think they are saying that it's possible that some code was branched and used elsewhere, or simply compiled into a running system by a user or developer.

    • The particular patches being complained about seem to be subsequent work by someone on the team that wrote that paper, submitted after the paper was published, i.e., follow-up work.

  • > While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on could, IMO, be described as utterly reckless at best.

    I agree. I would say this is kind of a "human process" analog of your typical computer security research, and that this behavior is akin to black hats exploiting a vulnerability. Totally not OK as research, and totally reckless!

    • Yep. To take a physical-world analogy: Would it be okay to try and prove the vulnerability of a country's water supply by intentionally introducing a "harmless" chemical into the treatment works, without the consent of the works' owners? Or would that be a go-directly-to-jail sort of experiment?

      I share the researchers' intellectual curiosity about whether this would work, but I don't see how a properly-informed ethics board could ever have passed it.

    • Out of interest, is there some sort of automated way to test this weak link that is human trust? (I understand how absurd this question is.)

      It's awfully scary to think about how vulnerabilities might be purposely introduced into this important code base (as well as many other) only to be exploited at a later date for an intended purpose.

      Edit: NM, see st_goliath's response below

      https://news.ycombinator.com/item?id=26888538

  • I assume that having these go into production could make the authors "hackers" according to law, no?

    Haven't whitehat hackers doing unsolicited pen-testing been prosecuted in the past?

  • Are there any measures being discussed that could make such attacks harder in future?

    • Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

      The whole idea of the mailing list based submission process is that it allows others on the list to review your patch sets and point out obvious problems with your changes and discuss them, before the maintainer picks the patches up from the list (if they don't see any problem either).

      As I pointed out elsewhere, there are already test farms and static analysis tools in place. On some MLs you might occasionally see auto-generated mails telling you that your patch set does not compile under configuration such-and-such, or that the static analysis bot found an issue. This is already a thing.

      What happened here is basically a con in the patch review process. IRL con men can scam their marks because most people assume, when they leave the house, that the majority of the others outside aren't out there to get them. Except sometimes they run into one for whom the assumption doesn't hold, and end up parted from their money.

      For the paper, bypassing the review step worked for some of the many patches they submitted because a) humans aren't perfect, and b) reviewers have the mindset that, most of the time, people submitting bug fixes do so in good faith.

      Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default, jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?

      Yes, I know that this review process isn't perfect, that there are problems and I'm not trying to dismiss any concerns.

      But what technical measure would you propose that can effectively stop con men?

    • Force the university to take responsibility for screening their researchers. I.e., a blanket-ban, scorched-earth approach punishing the entire university's reputation is a good start.

      People want to claim these are lone rogue researchers and that good people at the university shouldn't be punished, but this is the only way you can rein in these types of rogue individuals: by putting the collective reputation of the whole university on the line so that it polices its own people. Every action of individual researchers must be assumed to put the reputation of the university as a whole on the line. This is the cost of letting individuals operate within the sphere of the university.

      Harsh, "over reaction" punishment is the only solution.

    • The only real fix for this is to improve tooling and/or programming language design to make these kinds of exploits more difficult to slip past maintainers. Lots of folks are working in that space (see recent discussion around Rust), but it’s only becoming a priority now that we’re seeing the impact of decades of zero consideration for security. It’ll take a while to steer this ship in the right direction, and in the meantime the world continues to turn.

    • The University and researchers involved are now default-banned from submitting.

      So yes.

  • If they're public IRC channels, do you mind mentioning them here? I'm trying to find the remnant. :)

There’s no research going on here. Everyone knows buggy patches can get into a project. Submitting intentionally bad patches adds nothing beyond grandstanding. They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

From FOSDEM 2014, NSA operation ORCHESTRA annual status report. It’s pretty entertaining and illustrates that this is nothing new.

https://archive.fosdem.org/2014/schedule/event/nsa_operation... https://www.youtube.com/watch?v=3jQoAYRKqhg

  • > They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

    Very good point.

It may be unethical from an academic perspective, but I like that they did this. It shows there is a problem with the review process if it is not catching 100% of this garbage. Actual malicious actors are certainly already doing worse and maybe succeeding.

In a roundabout way, this researcher has achieved their goal, and I hope they publish their results. Certainly more meaningful than most of the drivel in the academic paper mill.

  • It rather shows a very serious problem with the incentives present in scientific research, and a poisonous culture which apparently rewards malicious behavior. Science enjoys a lot of freedom and trust from citizens, but this trust must not be misused. If some children threw fireworks under your car, or mixed sugar into the gas tank, just to see how you react, that would have negative community effects, too. Adult scientists should be fully aware of that.

    In effect, this will lead to even valuable contributions from universities being viewed with more suspicion, which will be very damaging in the long run.

  • >It shows there is a problem with the review process if it is not catching 100% of this garbage

    What review process catches 100% of garbage? It's a mechanism to catch 99% of garbage -- otherwise the Linux kernel would have no bugs.

    • It does raise questions though. Should there be a more formal scrutiny process for less trusted developers? Some kind of background check process?

      Runs counter to how open source is ideally written, but for such a core project, perhaps stronger checks are needed.

  • I'm not sure what we learned. Were we under the impression that it's impossible to introduce new (security) bugs in Linux?

    • > Were we under the impression that it's impossible to introduce new (security) bugs in Linux?

      I've heard many times that they're thoroughly reviewed and that back doors are very unlikely. So yes, some people were under that impression.

  • The paper indicates that the goal is to prove that OSS in particular is vulnerable to this attack, but it seems that any software development ecosystem shares the same weaknesses. The choice of an OSS target seems to be one of convenience as the results can be publicly reviewed and this approach probably avoids serious consequences like arrests or lawsuits. In that light, their conclusions are misleading, even if the attack is technically feasible. They might get more credibility if they back off the OSS angle.

    • Not really. You can't introduce bugs like this into my company's code base, because the code is protected from random people on the internet accessing it. So your first step would be to find an exploitable bug in GitHub, but then you are bypassing peer review as well to get in. (Actually, I think we would notice that, but that is more because of a process we happen to have that most don't.)

  • > It shows there is a problem with the review process if it is not catching 100% of this garbage.

    Does that add anything new to what we know since the creation of the "obfuscated C contest" in 1984?

  • > It shows there is a problem with the review process if it is not catching 100% of this garbage.

    It shows nothing of the sort. No review process is 100% foolproof, and open source means that everything can be audited if it is important to you.

    The other option is to closed-source everything, and I can guarantee that those review processes let stuff through too, even if it's only "to meet deadlines", and you will likely not be able to audit it.

  • Unable to follow the kernel thread (stuck in an age between twitter and newsgroups, sorry), but...

    did these "researchers" in any way demonstrate that they were going to come clean about what they had done before their "research" made it anywhere close to release/GA?

  • By your logic, you would allow recording people without their consent, experimenting on PTSD by inducing PTSD without people's consent, or medical experimentation without the subject's consent.

    Try to break into the White House and, when you get caught, tell them "I was just testing your security procedures".

I think that the patches that hit stable were actually OK, based on the apparent intent to ‘test’ the maintainers, then notify them of the bug and submit the valid patch afterwards, but the maintainers' thought process is:

"if they are attempting to test us by first submitting malicious patches as an experiment, we can't accept what we have accepted as not being malicious and so it's safer to remove them than to keep them".

my 2c.

  • The earlier patches could in theory be OK, but they might also combine with other, later patches to introduce bugs more stealthily. Bugs can be very subtle; see the sketch at the end of this comment for how that can happen.

    Obviously, trust should not be the only thing that maintainers rely on, but it is a social endeavour, and trust always matters in such endeavours. Doing business with people you can't trust makes no sense. Without trust, I fully agree that it is not worth the maintainer's time to accept anything from such people, or from that university.

    And the fact that one can do damage with malicious code is nothing new at all. It is well known that bad code can ultimately kill people. It is also more than obvious that I could ring my neighbor's doorbell, ask him or her for a cup of sugar, and then hit them over the head with a hammer. Or people can go to a school and shoot children. Does anyone in their right mind have to do such damage in order to prove something? No. Does it prove anything? No. Does the fact that some people do things like that "prove" that society is wrong and that trust and collaboration are wrong? Of course not; what idiocy!
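
    As a purely illustrative sketch of the "combining patches" concern mentioned above (userspace C, hypothetical, not taken from the actual submissions): each hunk below looks reasonable on its own, yet together they form a use-after-free.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct conn {
            char *name;
        };

        /* "Patch 1" (plausible alone): reject overly long names and clean
         * up the allocation locally on that error path. */
        static int conn_setup(struct conn *c, const char *name)
        {
            c->name = strdup(name);
            if (!c->name)
                return -1;
            if (strlen(name) > 32) {
                free(c->name);              /* added by "patch 1" */
                return -1;
            }
            return 0;
        }

        /* "Patch 2" (also plausible alone): log the name when setup fails.
         * Combined with patch 1, this reads freed memory. */
        static void conn_open(struct conn *c, const char *name)
        {
            if (conn_setup(c, name) < 0)
                fprintf(stderr, "setup failed for %s\n", c->name); /* UAF */
        }

        int main(void)
        {
            struct conn c;

            /* A name longer than 32 characters triggers the combined bug. */
            conn_open(&c, "a-deliberately-very-long-connection-name");
            return 0;
        }

    Neither hunk raises an obvious red flag in review; the bug only exists once both are merged, which is why trust in the submitter matters so much.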

It is worrying to consider that, in all likelihood, some people with actually malicious motives, rather than clinical academic curiosity, have already introduced serious security bugs into popular FOSS projects such as the Linux kernel.

Before this study came out, I'm pretty sure there were already known examples of this happening, and it would have been reasonable to assume that some such vulnerabilities existed. But now we have even more reason to worry, given that they succeeded in doing this multiple times as a two-person team without real institutional backing. Imagine what a state-level actor could do.

  • The same can be said about any software, really. It’s all too easy for a single malicious dev to introduce security bugs in pretty much any project they are involved in.

I wonder whether they broke any laws by intentionally putting bugs in software that is critical to national security.