Comment by endisneigh

4 years ago

Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any way to do it other than actually introducing bugs, per their proposal.

That being said, I think it would've made more sense for them to have created a dummy but complex project for a class and have, say, 80% of the class introduce "good code", 10% of the class review all code, and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without potentially breaking legit code that's in use.

I say this since the crux of what they're trying to discover is:

1. In OSS anyone can commit.

2. Though people are incentivized to reject bad code, the complexity of modern projects makes 100% rejection of bad code unlikely, if not impossible.

3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door).

They could have contacted a core maintainer and explained what they planned to do. That core maintainer could then have spoken to other senior core maintainers in confidence (including Greg and Linus) to decide whether this type of pentest was in the best interest of Linux and the OSS community at large. That decision would need to weigh the benefit of testing and hardening Linux's security review process against possible reputational damage, as well as the risk of alienating contributors who might quite rightly feel they'd been publicly duped.

If leadership was on board, they could have then proceeded with the test under the supervision of those core maintainers, who would ensure the introduced security holes never found their way into stable. The insiders themselves would abstain from reviewing those patches, to see whether review by others caught them.

If leadership was not on board, they should have respected the wishes of the Linux team and found another high-visibility open-source project that was more amenable to the idea. There are lots of big open-source projects to choose from; the kernel simply happens to be high-profile.

  • Exactly. A test could have been conducted with the knowledge of Linus and Greg K-H, but not of the other maintainers. If the proposed patch made it all the way through, it could be blocked at the last stage from making it into an actual release or release candidate. But it should be up to the people in charge of the project whether they want to be experimented on.

  • I don't disagree, but the point of the research is more to point out a flaw in how OSS is supposedly conducted, not to actually introduce bugs. If you agree with what they were researching (and I don't), any sort of pre-emptive disclosure would basically contradict the point of their research.

    I still think the best thing for them would be to simply create their own project and force their own students to commit, but they probably felt that doing that would be too contrived.

    • Pentesting has widely accepted standards and protocols.

      You don't test a bank or Fortune 500 security system without buy-in of leadership ahead of time.


> Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any way to do it other than actually introducing bugs, per their proposal.

They could've done the much harder work of studying all of the incoming patches looking for bugs, and then simply not reporting their findings until the kernel team accepted a patch.

The kernel has a steady stream of incoming patches, and surely a number of bugs in them to work with.

Yeah, it would've cost more, but it would've also generated significant value for the kernel.

  • The point of the research isn't to study bugs; it's to study hypocrite commits. Given that a hypocrite commit requires intent, there's no way to study one except by submitting the commits yourself, since only the submitter knows their own intention.