Comment by gregkh

4 years ago

Thanks for the support.

I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

Just wanted to say thanks for your work!

As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".

Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently think it's fine to experiment on humans without consent.

So thanks. Keep it up.

  • How could resilience be verified after asking for consent?

    • Tell someone upstream - in this case Greg KH - what you want to do and agree on a protocol. Inform him of each patch you submit. He's then the backstop against anything in the experiment actually causing harm.

A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?

From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...

> A lot of these have already reached the stable trees.

Apologies in advance if my questions are off the mark, but what does this mean in practice?

1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?

2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?

3. Will there be a post-mortem for this attack/attempted attack?

  • I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.

    Specifically, I think the three malicious patches described in the paper are:

    - UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability, because pci_disable_device() doesn't appear to free the struct pci_dev (a simplified sketch of the error path in question follows this list).

    - UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.

    - UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
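
    For anyone trying to follow case 1 without digging through the mailing list, here is a minimal sketch of the error-path pattern being discussed. This is hypothetical and simplified, not the actual nitrox driver code; the function name and exact call ordering are illustrative only:

        #include <linux/pci.h>

        /* Hypothetical probe-style error path, sketching the pattern from case 1. */
        static int example_probe(struct pci_dev *pdev)
        {
            int err;

            err = pci_enable_device(pdev);
            if (err)
                return err;

            err = pci_request_mem_regions(pdev, "example");
            if (err) {
                pci_disable_device(pdev);
                /*
                 * pci_disable_device() disables the device but does not free
                 * the struct pci_dev, so using pdev here (e.g. for dev_err())
                 * is not a use-after-free. Printing the message before the
                 * disable call, as the author later suggested, is tidier but
                 * not a correctness fix.
                 */
                dev_err(&pdev->dev, "Failed to request mem regions!\n");
                return err;
            }

            return 0;
        }

    A real use-after-free would require the error path to free (or drop the last reference to) the device and then touch it afterwards, which is not what the merged patch appears to do.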

    This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.

    edit: wording tweak for clarity

    • > the paper is deliberately misleading in an attempt to overstate its contributions.

      Welcome to academia, where a large number of students are doing it just for the credentials.

      10 replies →

    • Thank you.

      Question for legal experts,

      Hypothetically, if these patches had been accepted and were exploited in the wild, and one could prove the exploits relied on the vulnerabilities these patches introduced, could the university / professor be sued for damages and lose in a U.S. court? Or would they get away under an education/research/academia cover, if any such thing exists?

      3 replies →

  • I wonder about this too.

    To me, this seems to indicate that a nation-state-backed hacker group (maybe posing as an individual) could place its own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write a hard-to-notice exploitable bug. And then, everyone thinks it was obviously just an ordinary bug.

    Maybe they can pose as 10 different people, in case some of them get banned.

    • You're still in a better position with open source. The same thing happens in closed source companies.

      See: https://www.reuters.com/article/us-usa-security-siliconvalle...

      "As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."

      Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.

      2 replies →

    • Isn't what you've described pretty much the very definition of advanced persistent threat?

      It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.

      1 reply →

  • I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

    How to solve this "issue" without putting too much process around it? That's the challenge.

    • What's next, will they prove how easy it is to break into kernel developers' houses and rob them? Or prove how easy it is to physically assault kernel developers by punching them in the face at conferences? Or prove how easy it is to manipulate kernel developers to lose their life savings investing in cryptocurrency? You can count me out of those...

      Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.

      7 replies →

    • They proved nothing that wasn't already obvious. A malicious actor can get vulnerabilities in the same way a careless programmer can. Quick, call the press!

      And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done or rejected a thousand times over.

      26 replies →

    • > However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

      What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?

      Especially when code is pushed in bad faith?

      I mean, think about that for a minute. There are official competitive events to sneak malicious code that are already decades old and going strong[1]. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?

      [1] https://en.wikipedia.org/wiki/Underhanded_C_Contest

      3 replies →

  • What would be the security implications of these things:

    * a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.

    * a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.

    * another teenager pointing a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.

    * an organic chemist is demonstrating that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.

    * a secret service subverting software of a big industrial automation company in order to destroy uranium enrichment plants in another country.

    * somebody hacking a car's control software in order to kill its driver

    What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware how vulnerable all modern infrastructure is?

    And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?

    The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency which is good, and known to enhance security. But their project is neither a lab in order to experiment with humans, nor a computer war game. I guess if companies want to have even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.

  • I agree with the sentiment. For a project of this magnitude, maybe it comes down to developing some kind of static analysis, along with refactoring the code to make that possible.

    That would target the attack surface described in the paper (section IV), since the acceptance process (section III) is a manpower issue.

As an alum of the University of Minnesota's program, I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alum, and I am deeply sorry for the harm this caused.

  • I am wondering if UMN will now get a bad name in open source, and whether any contribution from their email addresses will require extra care.

    And if this escalates to mainstream media, it might also damage future employment prospects for UMN CS students.

    Edit: Looks like they made a statement. https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...

    • > Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel.

      - Signed by “Loren Terveen, Associate Department Head”, who was a co-author on numerous papers about experimenting on Wikipedia, as pointed out by: https://news.ycombinator.com/item?id=26895969

      2 replies →

    • It should. Ethics begins at the top, and if the university has shown itself to be this untrustworthy then no trust can be placed in them or in any students they implicitly endorse.

      As far as I'm concerned this university and all of its alumni are radioactive.

      34 replies →

  • Based on my time in a university department you might want to cc whoever chairs the IRB or at least oversees its decisions for the CS department. Seems like multiple incentives and controls failed here, good on you for applying the leverage available to you.

    • I'm genuinely curious how this was positioned to the IRB and if they were clear that what they were actually trying to accomplish was social engineering/manipulation.

      Being a public university, I hope at some point they address this publicly as well as list the steps they are (hopefully) taking to ensure something like this doesn't happen again. I'm also not sure how they can continue to employ the prof in question and expect the open source community to ever trust them to act in good faith going forward.

      7 replies →

I hope they take this bad publicity and stop (rather than escalating the stupidity by using non-university emails).

What a joke - not sure how they can rationalize this as valuable behavior.

  • It was a real world penetration test that showed some serious security holes in the code analysis/review process. Penetration tests are always only as valuable as your response to them. If they chose to do nothing about their code review/analysis process, with these vulnerabilities that made it in (intentional or not), then yes, the exercise probably wasn't valuable.

    Personally, I think all contributors should be considered "bad actors" in open source software. NSA, some university mail address, etc. I consider myself a bad actor whenever I write code with security in mind. This is why I use fuzzing and code analysis tools.

    Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.

    • I agree. They should take this as a learning opportunity and see what can be done to improve security and detect malicious code being introduced into the project. What's done is done; all that matters is how you proceed from here. Banning all future commits from UMN was the right call. I mean, it seems like they're still running follow-up studies on the topic.

      However, I'd also like to note that for a real-world penetration test on an unwitting, non-consenting company, you also get sent to jail.

      Everybody wins! The team get valuable insight on the security of the current system and unethical researchers get punished!

      1 reply →

    • The result is to make sure not to accept anything with the risk of introducing issues.

      Any patch coming from somebody having intentionally introduced an issue falls into this category.

      So, banning their organization from contributing is exactly the lesson to be learned.

      1 reply →

    • Next time you rob a bank, try telling the judge it was a real world pentest. See how well that works out for you.

    • > It was a real world penetration test that showed some serious security holes in the code analysis/review process.

      So you admit it was a malicious breach? Of course it isn't a perfect process. Everyone knows it isn't absolutely perfect. What kind of test is that?

I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and that organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.

  • I'd disagree. Organizations are collections of actors, some of which may have malicious intents. As long as the organization itself does not condone this type of behavior, has mechanisms in place to prevent such behavior, and has actual consequences for malicious actors, then the blame should be placed on the individual, not the organization.

    In the case of research, universities are required to have an ethics board that reviews research proposals before actual research is conducted. Conducting research without an approval or misrepresenting the research project to the ethics board are pretty serious offenses.

    Typically, research that involves people requires a consent form signed by the participants, alongside a reminder that they can withdraw that consent at any time without any penalty. It's pretty interesting that in this case there seems to have been no real consent at all, and it would be interesting to know whether there was an oversight by the ethics board or a misrepresentation of the research by the researchers.

    It will be interesting to see whether the university applies a penalty to the professor (removal of tenure, termination, suspension, etc.) or not. The latter would imply that they're okay with unethical or misrepresented research being associated with their university, which would be pretty surprising.

    In any case, it's a good thing that the Linux kernel maintainers decided that experimenting on them isn't acceptable and is disrespectful of their contributions. Subjecting participants to experiments without their consent is a severe breach of ethical duty, and I hope that the university will apply the correct sanctions to the researchers and instigators.

    • Good points. I should have qualified my statement by saying that IMO the ban should stay in place for at least five years. A prison sentence, if you will, for the offense that was committed by their organization. I completely agree with you though that no organization can have absolute control over the humans working for them, especially your point about misrepresenting intentions. However, I believe that by handing out heavy penalties like this, not only will it make organizations think twice before approving questionable research, it will also help prevent malicious researchers from engaging in this type of activity. I don't imagine it's going to look great being the person who got an entire university banned from committing to the Linux kernel.

      Of course, in a few years this will all be forgotten. It raises the question... how effective is it to ban entire organizations due to the actions of a few people? Part of me thinks that it would be very good to have something like this happen every five years (because it puts the maintainers on guard), but another part of me recognizes that these maintainers are working for free, and they didn't sign up to be gaslit; they signed up to make the world a better place. It's not an easy problem.

      1 reply →

    • It turns out that the Associate Department Head was engaged in similar "research" on Wikipedia over a dozen years ago, and that also caused problems. The fact that they are here again suggests a broader institutional problem.

I have to ask: were they not properly reviewed when they were first merged?

Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.

Thanks for your important work, Greg!

I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that were modified (with a fuzzer that is memory/binary aware).

Would a project like this be unfeasible due to the sheer amount of commits/day?

> should be aware that future submissions from anyone with a umn.edu address should be by default-rejected

Are you not concerned these malicious "researchers" will simply start using throwaway gmail addresses?

  • That’s not likely to work after a high profile incident like this, in the short term or the long term. Publication is, by design, a de-anonymizing process.

Putting the ethical questions about the researcher aside, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

Since this researcher is apparently not an established figure in the kernel community, my expectation is that these patches went through the most rigorous review process. If you think there is a high risk that malicious patches from this person got in, it means that an unknown attacker deliberately crafting complex kernel loopholes would have an even higher chance of getting patches in.

While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.

  • > This "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.

    Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.

    If you had a party in your house and some guest you don't know, whom you invited in assuming good faith, turned out to have deliberately pooped on the rug in your spare guest room while nobody was looking... next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never leave this guest unattended? Or simply deny them entrance, so that you can focus on having fun with people you trust and newcomers who have not shown any malicious intent?

    I know what I'd do. Life is too short for BS.

    • > Why should they waste their time with extra scrutiny next time?

      Because well funded malicious actors (government agencies, large corporations, etc) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.

      As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.

      9 replies →

    • You’re completely right, except in this case it’s banning anyone who happened to live in the same house as the offender, at any point in time...

      7 replies →

  • > The fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

    Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone is claiming that it is perfect. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and put a higher bar on reviewing it for security.

  • Not all kernel reviewers are being paid by their employer to review patches. Kernel reviews are "free" to the contributor because everyone operates on the assumption that every contributor wants to make Linux better by contributing high-quality patches. In this case, multiple people from the University have decided that reviewers' time isn't valuable (so it's acceptable to waste it) and that the quality of the Kernel isn't important (so it's acceptable to make it worse on purpose). A ban is a completely appropriate response to this, and reverting until you can review all the commits is an appropriate safety measure.

    Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.

  • I guess what I am trying to get at is that this researcher's actions do have some merit. This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step to hardening the kernel review process to prevent this kind of harm from happening again.

    What I strongly disapprove of is that the researcher apparently took no steps to prevent real-world consequences of malicious patches getting into the kernel. I think the researcher should:

    - Notify the kernel community promptly once malicious patches got past all review processes.

    - Time these actions well such that malicious patches won't get into a stable branch before they can be reverted.

    ----------------

    Edit: reading the paper provided above, it seems that they did do both actions above. From the paper:

    > Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

    So, unless the kernel maintenance team has another side of the story, the questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.

    • That paper came out a year ago, and they got a lot of negative feedback about it, as you might expect. Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.

      This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.

      https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

      2 replies →

    • We threw people off buildings to gauge how they would react, but were able to catch all 3 subjects in a net before they hit the ground.

      Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.

      6 replies →

    • To add... Ideally, they should have looped in Linus, or someone high up in the chain of maintainers, before running an experiment like this. Their actions might have been in good faith, but the approach they undertook (including the email claiming slander) is seriously irresponsible and a surefire way to wreck relations.

      2 replies →

    • > This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community.

      The limits of code review are quite well known, so it appears very questionable what scientific knowledge is actually gained here. (Indeed, especially because of the known limits, you could very likely show them without misleading people, because even people who know to be suspicious are likely to miss problems, if you really wanted to run a formal study of some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process.)

      1 reply →

    • > The questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.

      Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.

      Slippery slope, my friend.

      3 replies →

    • I wouldn't put it past them to have a second unpublished paper, for the "we didn't get caught" timeline.

      It would give the University some notoriety to be able to claim "We introduced vulnerabilities in Linux". It would put them on good terms with possible proprietary software sponsors, and the military.

  • > the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

    I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.

    The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.

    By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.

    Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted. (A rough sketch of this in Bayesian terms follows at the end of this comment.)

    > it means that an unknown attacker deliberately crafting complex kernel loopholes would have an even higher chance of getting patches in.

    That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.
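
    To write the Bayesian argument above down explicitly (purely illustrative, no real numbers implied):

        \[
        P(\text{flawed} \mid \text{approved})
          = \frac{P(\text{approved} \mid \text{flawed}) \, P(\text{flawed})}{P(\text{approved})}
        \]

    Learning that a patch comes from a group known to submit obfuscated bugs raises the prior P(flawed). To keep the posterior P(flawed | approved) at the same acceptable level, review has to push P(approved | flawed) down, i.e. become stricter; that is essentially what "properly review them at a later point in time" amounts to.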

    • > The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated.

      Nobody can assure that.

  • In a perfect world, I would agree that the work of a researcher who's not an established figure in the kernel community would be met with a relatively high level of scrutiny in review.

    But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.

    And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a reasonable simplification to assume that all associated contributions may be tainted.

  • Why? Linux is not the state. There is no entitlement to rights or presumption of innocence.

    Linux is set up to benefit the Linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones, and some negative ones, banning seems the right call.

    It's not about fairness; it's about whether the harms outweigh the benefits.

    • Not only that, good faith actors who are associated with UMN can still contribute, just not in their official capacity as UMN associates (staff, students, researchers, etc).

  • > Since this researcher is apparently not an established figure in the kernel community, my expectation is the patches have gone through the most rigorous review process

    I think the best way to make this expectation a reality is putting in the work. The second best way is paying. Doing neither and holding the expectation is certainly a way to exist, but it has no impact on the outcome.

  • > seems to suggest a lack of confidence in the kernel review process

    The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.

  • I mean, it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches. Review processes obviously aren't perfect, but usually patches aren't constructed to sneak sketchy code through. You'd usually approach a review in good faith.

    Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.

    • > You'd usually approach a review in good faith.

      > it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches

      Perhaps the mindset needs to change regarding security? Actual malicious actors seem unlikely to announce themselves for you.

    • Doesn't this basically prove the original point that if someone or an organization wished to compromise linux, they could do so with crafted bugs in patches?

  • > I've never performed any meaningful debugging or postmortem ever in my life and might not even know how to program at all.

This might not be on purpose. If you look at their article, they're studying how to introduce bugs that are hard to detect, not ones that are easy to detect.

> Thanks for the support.

THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!

My deepest thanks for all your work, as well as for keeping the standards high and the integrity of the project intact!

Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.

Just reverting those patches (which may well be correct) makes no sense. You and/or other maintainers need to properly review them after your previous abject failure to do so, determine whether they are actually correct or not, and, if they aren't, work out how they got merged anyway and how you will stop this from happening again.

Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.

  • On the contrary, it would be the easy, lazy way out for a maintainer to say “well this incident was a shame now let’s forget about it.” The extra work the kernel devs are putting in here should be commended.

    In general, it is the wrong attitude to say, oh we had a security problem. What a fiasco! Everyone involved should be fired! With a culture like that, all you guarantee is that people cover up the security issues that inevitably occur.

    Perhaps this incident actually does indicate that kernel code review procedures should be changed in some way. I don’t know, I’m not a kernel expert. But the right way to do that is with a calm postmortem after appropriate immediate actions are taken. Rolling back changes made by malicious actors is a very reasonable immediate action to take. After emotions have cooled, then it’s the right time to figure out if any processes should be changed in the future. And kernel devs putting in extra work to handle security incidents should be appreciated, not criticized for their imperfection.

  • Greg explicitly stated "Because of this, all submissions from this group must be reverted from the kernel tree and will need to be re-reviewed again to determine if they actually are a valid fix....I will be working with some other kernel developers to determine if any of these reverts were actually valid changes, and if so, will resubmit them properly later. For now, it's better to be safe."