Comment by neoflame
4 years ago
I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.
Specifically, I think the three malicious patches described in the paper are:
- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability because pci_disable_device() doesn't appear to free the struct pci_dev.
- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.
- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.
edit: wording tweak for clarity
> the paper is deliberately misleading in an attempt to overstate its contributions.
Welcome to academia, where a large number of students are doing it just for the credentials.
What else do you expect? The incentive structure in academia pushes students to do this.
Immigrant graduate students with uncertain future if they fail? Check.
Vulnerable students whose livelihood is at mercy of their advisor? Check.
Advisor whose career depends on a large number of publication bullet points in their CV? Check.
Students who cheat their way through to publish? Duh.
The ethics in big-lab science are as dire as you say, but I've generally got the impression that the publication imperative has not been driving so much unethical behaviour in computer science. I regard this as particularly cynical behaviour by the standards of the field and I think the chances are good that the article will get retracted.
Can I cite your comment in exchange for a future citation?
Feigning surprise isn't helpful.
It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.
Thank you.
Question for legal experts:
Hypothetically, if these patches had been accepted and exploited in the wild, and one could prove the exploits stemmed from the vulnerabilities introduced by these patches, could the university/professor be sued for damages and lose in a U.S. court? Or would they get off under some education/research/academia cover, if such a thing exists?
Not an attorney, but the kernel is likely shielded from liability by its license. Maybe the kernel project could sue the contributors for damaging the project, but I don't think an end user could.
Malicious intent or personal gain negate that sort of thing in civil torts.
Also, 18 U.S. Code § 1030(a)(5)(A) does not care about the software license. Any intentional vulnerability added to code counts. Federal cybercrime laws are not known for being terribly understanding…
License is a great catch, thank you. Does the kernel project enter into a separate contract with its contributors?
I literally LOL'd at "James Louise Bond"