Comment by Cpoll

4 years ago

A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?

From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...

> A lot of these have already reached the stable trees.

Apologies in advance if my questions are off the mark, but what does this mean in practice?

1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?

2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?

3. Will there be a post-mortem for this attack/attempted attack?

I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.

Specifically, I think the three malicious patches described in the paper are:

- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability, because pci_disable_device() doesn't appear to free the struct pci_dev (see the sketch after this list).

- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.

- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
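
To make case 1 concrete, here is a minimal sketch of the error-path pattern in question. This is illustrative only, not the actual cavium/nitrox code: the probe function, region name, and message are made up. The point is that a use-after-free would only exist if the preceding cleanup call freed the structure that dev_err() later dereferences, and pci_disable_device() doesn't do that to the struct pci_dev.

    #include <linux/pci.h>

    /* Illustrative sketch only; not the actual cavium/nitrox code. */
    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int err;

        err = pci_enable_device(pdev);
        if (err)
            return err;

        err = pci_request_mem_regions(pdev, "example");
        if (err) {
            pci_disable_device(pdev);
            /* pci_disable_device() disables the device but does not free
             * pdev, so this dev_err() is not a use-after-free. It would
             * only become one if the call above released the struct pci_dev. */
            dev_err(&pdev->dev, "Failed to request mem regions\n");
            return err;
        }

        return 0;
    }

If pci_disable_device() did free the device structure, moving dev_err() before it (as the author later suggested) would be the fix; since it doesn't appear to, the merged code is merely an odd ordering rather than a vulnerability.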

This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.

edit: wording tweak for clarity

  • > the paper is deliberately misleading in an attempt to overstate its contributions.

    Welcome to academia, where a large number of students are doing it just for the credentials.

    • What else do you expect? The incentive structure in academia pushes students to do this.

      Immigrant graduate students with uncertain future if they fail? Check.

      Vulnerable students whose livelihood is at mercy of their advisor? Check.

      Advisor whose career depends on a large number of publication bullet points in their CV? Check.

      Students who cheat their way through to publish? Duh.

  • Thank you.

    Question for legal experts:

    Hypothetically, if these patches had been accepted and exploited in the wild, and one could prove that the exploitation happened through the vulnerabilities these patches introduced, could the university or the professor be sued for damages and lose in a U.S. court, or would they get away under some education/research/academia cover, if any such protection exists?

    • Not an attorney, but the kernel is likely shielded from liability by its license. Maybe the kernel project could sue the contributors for damaging the project, but I don't think the end user could.

I wonder about this too.

To me, this seems to indicate that a nation-state-supported evil hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploit bug. And then everyone thinks that obviously it was just an ordinary bug.

Maybe they can pose as 10 different people, in case some of them get banned.

  • You're still in a better position with open source. The same thing happens in closed source companies.

    See: https://www.reuters.com/article/us-usa-security-siliconvalle...

    "As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."

    Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.

    • Plus, wouldn't it be much easier to do this under the guise of equality with some quickly thought up trash contract enforced on all developers?

      One might even say that while this useless attack is taking place, actual people with lifelong commitment to open source software and user freedom get taken out by the "NaN" flavour "NaN" koolaid of the week.

      Soon all that is left that is legal to say is whatever is approved by the "NaN" board. Eventually the number 0 will be found to be exclusionary or accused of "NaN" and we will all be stuck coding unary again.

  • Isn't what you've described pretty much the very definition of an advanced persistent threat?

    It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.

    • The fundamental tension is between efficiency and security. Trust permits efficiency, at the cost of security (if that trust is found to be misplaced).

      A perfectly secure system is only realized by a perfectly inefficient development process.

      We can get better at lessening the efficiency tax of a given security level (through tooling, tests, audits, etc), but for a given state of tooling, there's still a trade-off.

      Different release trains seem the sanest solution to this problem.

      If you want bleeding-edge, you're going to pull in less-tested (and also less-audited) code. If you want maximum security, you're going to have to deal with 4.4.

I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

How to solve this "issue" without putting too much process around it? That's the challenge.

  • What's next, will they prove how easy it is to break into kernel developers' houses and rob them? Or prove how easy it is to physically assault kernel developers by punching them in the face at conferences? Or prove how easy it is to manipulate kernel developers to lose their life savings investing in cryptocurrency? You can count me out of those...

    Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.

    • Just playing devil's advocate here: the surprise factor does play into it. No bad actor will ever give you a heads-up.

      If the researchers had sent these patches under a different identity, that would be just like how malicious contributions appear. The maintainers wouldn't assume malice, would waste a bunch of time communicating with the bad actor, and might NOT revert their previous, potentially harmful contribution.

  • They proved nothing that wasn't already obvious. A malicious actor can get vulnerabilities in the same way a careless programmer can. Quick, call the press!

    And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done, or rejected a thousand times over.

    • Agreed. So many security vulnerabilities have been created not by malicious actors, but by people who just weren't up to the task. Buggy software and exhausted maintainers are nothing new.

      • So you are just fine knowing that any random guy can sneak any code into the Linux kernel? Honestly, I was personally expecting a higher level of review and attention to such things, considering how widely used the product is. I don't want to look like the guy who doesn't appreciate what the OSS and FSF communities do every day, even unpaid. However, this is unrelated. And this is probably what the researchers tried to prove (with unethical and wrong behavior).

  • > However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

    What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?

    Especially when code is pushed in bad faith?

    I mean, think about that for a minute. There are official competitive events for sneaking in malicious code, and they are already decades old and going strong [1]. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?

    [1] https://en.wikipedia.org/wiki/Underhanded_C_Contest

    • Bug bounties are a different beast. Here we are talking about a bunch of guys who deliberately put stuff into your next kernel release because they come from an important university, or for whatever other reason. One of the reviewers in the thread admitted that they need to pay more attention to code reviews. That sounds to me like a good first step towards solving this issue. Is that enough, though? It's an unsolvable problem, but is the current solution enough?

What would be the security implications of these things:

* a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.

* a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.

* another teenager points a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.

* an organic chemist demonstrates that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.

* a secret service subverts the software of a big industrial automation company in order to destroy uranium enrichment plants in another country.

* somebody hacks a car's control software in order to kill its driver.

What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware of how vulnerable all modern infrastructure is?

And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?

The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency, which is good and known to enhance security. But their project is neither a lab for experiments on humans nor a computer war game. I guess if companies want even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.

I agree with the sentiment. For a project of this magnitude, maybe it comes down to developing some kind of static analysis, along with refactoring the code to make that analysis possible.

That is, addressing the attack surface described in the paper (section IV), because section III (the acceptance process) is a manpower issue.

  • Ironically, one of their attempts was submitting changes that were allegedly recommended by a static analysis tool.

    • It's possible that they are developing a static analysis tool that is designed to find places where vulnerabilities can be inserted without looking suspicious. That's kind of scary.

      Have they submitted patches to any projects other than the kernel?
