Comment by henearkr

4 years ago

Auditability is at the core of its advantage over closed development.

Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

> Auditability is at the core of its advantage over closed development.

That's an assertion. A hypothesis is verified by observing the real world. You can do that in many ways, each giving you a different level of confidence in the validity of the hypothesis. Research such as the study we're discussing here is one way to produce evidence for or against this hypothesis.

> Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

It is if there's a review process. Auditability itself is most interesting before a patch is accepted. Sure, it's nice if vulnerabilities are found eventually, but the longer that takes, the more likely it is that they were already exploited. In the case of an intentionally bad patch in particular, the window for reverting it before it does most of its damage is very small.

In other words, the experiment wasn't testing the entire auditability hypothesis. Just the important part.

> benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm

Sure. But the project's scope matters. The Linux kernel isn't some random OSS library on GitHub. It's core infrastructure for the planet. The assumption of benevolence works as long as the interested community is small and has little interest in being evil. With infrastructure-level OSS projects, the interested community is very large and contains a lot of malicious actors.

> Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

I agree, and in my book, if a legitimate researcher gets banned for such "undercover" research, that's just the flip side of doing such an experiment.

  • I will not address everything but only this point:

    Before a patch is accepted, "auditability" is the same in OSS as in proprietary development, because both pools of engineers in the review groups have similar qualifications and approximately the same number of people are involved.

    So, the real advantage of OSS is on the auditability after the patch is integrated.

    • > So, the real advantage of OSS is on the auditability after the patch is integrated.

      If that's the claim, then the research work discussed here is indeed not relevant to it.

      But also, if that's the claim, then it's easy to point out that the "advantage" here is hypothetical and not too important in practice. Most people and companies using OSS rely on release versions being stable and tested, and don't bother doing their own audit. On the other hand, intentional vulnerability submission is a unique threat vector that OSS has and which proprietary software doesn't.

      It is therefore the window between patch submission and its inclusion in a stable release (which may involve accepting the patch into a development/pre-release tree) that is of critical importance for OSS - if vulnerabilities that are already known to some parties (whether the malicious authors or evil onlookers) are not caught in that window, the threat vector becomes real, and from a risk analysis perspective it negates some of the other benefits of using OSS components.

      Nowhere here am I implying OSS is worse or better than proprietary. As a community/industry, we want an accurate, multi-dimensional understanding of the risks and benefits of various development models (especially when applied to core infrastructure projects that the whole modern economy runs on). That kind of research definitely helps here.


If the model assumes benevolence, how can it possibly be viable long-term?

  • Like this: malevolent actors are banned as soon as they are detected.

    • What do you suppose is the ratio of undetected to detected bad actors? If it is anything other than zero, I think the original point holds.

  • Most everything boils down to trust at some point. That human society exists is proof that people are, or act, mostly, "good", over the long term.

    • > That human society exists is proof that people are, or act, mostly, "good", over the long term.

      That's very true. It's worth noting that the various legal and security tools deployed by society help us understand what the real limits to "mostly" are.

      So, for example, the cryptocurrency crowd is very misguided in its pursuit of replacing trust with math - trust is the trick, the big performance hack, that allowed us to form functioning societies without burning ridiculous amounts of energy to achieve consensus. On the other hand, projects like the Linux kernel, which play a core role in the modern economy, cannot rely on the assumption of benevolence alone - the incentives for malicious parties to try to mess with them are too great to ignore.