Comment by goodpoint

4 years ago

> it’s commonly argued that open source is better for security because there are more eyes on it.

> What this research demonstrates is that you can quite easily slip back doors into an open contribution

To make a fair comparison, you should contrast it with companies or employees placing backdoors into their own closed-source software.

It's extremely easy to do and equally difficult to spot for end users.

Recruiting a rogue employee is orders of magnitude harder than receiving ostensibly benign patches in emails from Internet randos.

Rogue companies/employees are really a different security problem that's not directly comparable to drive-by patches (the closest comparison is a rogue open source maintainer).

  • Maybe for employees, but usually it is a contractor of a contractor in some outsourced department replacing your employees. I'd argue that in such common situations, you are worse off than with randos on the internet sending patches, because no one will ever review what those contractors commit.

    Or you have a closed-source component you bought from someone who pinky-swears to be following secure coding practices and that their code is of course bug-free...

  • The reward for implanting a rogue employee is orders of magnitude higher, with the ability to plant backdoors or weaken security for decades.

    And that's why nation-state attackers do it routinely.

    • Yes, it’s a different problem that’s way less likely to happen and potentially more impactful, hence not comparable. And entities with enough resources can do the same to open source, except with more risk; how much more is very hard to say.


To make it a fair comparison you should contrast... an inside job with an outside job?

  • This is an arbitrary definition of inside vs. outside. You are implying that employees are trusted and benign while other contributors are high-risk, ignoring that an "outside" contributor might be improving security with bug reports and patches.

    For the end user, the threat model is about the presence of a malicious function in some binary.

    Regardless of whether the developers are an informal community, a company, a group of companies, or an NGO, they are all "outside" to the end user.

    Closed-source software (e.g. phone apps) breaches users' trust constantly, e.g. with privacy-breaching telemetry, weak security, and so on.

    If Microsoft weakens encryption under pressure from the NSA, is it "inside" or "outside"? What matters to end users is the end result.

    • The insiders are the maintainers. The outsiders are everyone else. If this is an arbitrary definition to you I... don't know what to tell you.

      There's absolutely no reason everyone's threat model has to equate insiders with outsiders. If a stranger on the street gives you candy, you'll probably check it twice or toss it away out of caution. If a friend or family member does the same thing, you'll probably trust them and eat it. Obviously at the end of the day, your concern is the same: you not getting poisoned. That doesn't mean you can (or should...) treat your loved ones like they're strangers. It's outright insane for most people to live in that manner.

      Same thing applies to other things in life, including computers. Most people have some root of trust, and that usually includes their vendors. There's no reason they have to trust you and (say) Microsoft employees/Apple employees/Linux maintainers equally. Most people, in fact, should not do so. (And this should not be a controversial position...)
