Comment by incrudible
4 years ago
This isn't some employer-employee trust relationship. The whole point of the test is that you can't trust a patch just because it's from some university or some major company.
The vast majority of patches are not malicious. Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action. Sending a buggy patch that creates a vulnerability by accident is not a malicious action.
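To make that distinction concrete, here is a hypothetical sketch (invented for illustration - not one of the patches actually submitted) of how an innocuous-looking one-line "fix" can knowingly plant a use-after-free; struct my_device and add_to_list() are made-up names:

    /* Hypothetical illustration, not an actual kernel patch.
     * The one-line "memory leak fix" frees dev on the error path,
     * while a pre-existing caller still touches dev after the call
     * fails - a use-after-free that reads like routine cleanup. */
    #include <linux/slab.h>
    #include <linux/list.h>

    struct my_device {                        /* made-up example type */
            int state;
            struct list_head entry;
    };

    int add_to_list(struct my_device *dev);   /* assumed helper */

    static int register_device(struct my_device *dev)
    {
            int err = add_to_list(dev);

            if (err) {
                    kfree(dev);               /* the patch: "fix leak on failure" */
                    return err;
            }
            return 0;
    }

    /* Pre-existing caller elsewhere in the subsystem: */
    void probe_device(struct my_device *dev)
    {
            if (register_device(dev))
                    dev->state = -1;          /* now a use-after-free */
    }

In isolation the added kfree() looks like a textbook leak fix; the vulnerability only exists in combination with the distant caller, which is exactly why it is hard to spot in review.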
Given the completely unavoidable limitations of the review and bug-testing process, a maintainer has to react very differently once they have determined that a patch is malicious - all previous patches from that same source (person or even organization) have to be either re-reviewed to a much higher standard or reverted indiscriminately; and any future patches have to be rejected outright.
This puts a heavy burden on a maintainer, so intentionally creating this type of burden is a malicious action regardless of intent. Especially given that the experiment was pointless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
> The vast majority of patches are not malicious.
The vast majority of drunk drivers never kill anyone.
> Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action.
I disagree that it's malicious in this context, but that's really irrelevant. If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken. That's what we're trying to figure out here, and there's no better way to do it than to replicate the same conditions under which such patches would ordinarily be reviewed.
> Especially given that the experiment was pointless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.
Yes, everyone knows that patches can introduce vulnerabilities. What we want to know is whether those vulnerabilities are found! If they are not, we need to figure out how they slipped by and how to prevent that from happening in the future.
Since humanity still hasn't fixed the problem of drunk drivers, I guess I'll start driving drunk on the weekends to illustrate the flaws of the system.
> If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken.
That is a complete misunderstanding of the Linux dev process. No one expects the first reviewer of a patch (the person the researchers were experimenting on) to catch every bug. The dev process has many safeguards - several reviewers, testing, static analysis tools, security research, distribution testing, beta testers, early adopters - that are expected to catch bugs in the kernel at various stages.
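To give one concrete example of those later stages: sparse, the kernel's own static analyzer (run with "make C=1"), mechanically flags entire bug classes no matter what the first reviewer noticed. A minimal sketch of one such class - directly dereferencing a userspace pointer:

    /* Sketch of a bug class sparse catches regardless of review:
     * dereferencing a __user pointer directly triggers
     * "warning: dereference of noderef expression". */
    #include <linux/uaccess.h>

    static long get_value_buggy(int __user *arg)
    {
            return *arg;                      /* sparse warns here */
    }

    static long get_value(int __user *arg)
    {
            int val;

            if (copy_from_user(&val, arg, sizeof(val)))
                    return -EFAULT;           /* proper userspace access */
            return val;
    }

That is not to claim automation would have caught the specific patches from the experiment, just that first review is one net among several.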
Trying to deceive early reviewers into accepting malicious patches for research purposes is both useless as research and hurtful to the developers.
Open source runs on trust, of both individuals and institutions. There’s no alternative. Processes like code review can supplement but not replace it.
So my question is, what is a kernel? Is it a security project? Should security products rely on trust, or assume malicious intent?
Open source products rely on trust. There is no way to build a trustless open source product. Of course, the old mantra of "trust, but verify" is very important as well.
But the Linux kernel is NOT a security product - it is a kernel. It can be used in entirely disconnected devices that couldn't care less about security, as well as in highly secure infrastructure that powers the world. The ultimate responsibility for delivering a secure product based on Linux lies with the people delivering that product. The kernel is essentially a library, not a product. If someone assumes they can build a secure product by trusting Linux to be "secure", then they are simply wrong, and no amount of change in the Linux dev process will fix their assumption.
Of course, you want the kernel to be as secure as possible, but you also want many other things from the kernel (it should be featureful, it should be backwards compatible with userspace, it should run on as many architectures as needed, it should be fast and efficient, it should be easy to read and maintain, etc.).