
Comment by dopamean

7 years ago

I have a completely honest question here that I'm hoping some people can answer. Is open source really more secure? My default answer would be yes absolutely but when I think about it I'm not sure I understand why.

If something is open source then bugs and security problems can be found more easily and then fixed. This sounds great to me and I'm sure that works out just fine most of the time. This makes me wonder, though: are there really fewer intrusions into production systems that are built entirely on open source software than there are in ones built with lots of proprietary, closed source software? What does the data look like about this stuff?

I can't speak to the data analysis part, though I do believe some people have looked into it, and hopefully they can add their thoughts.

From my experience, the answer is: it depends very much on the community the project has.

First, the obvious positives: you could have lots of people with lots of different kinds of experience looking at the code, finding and fixing things.

This is how I got involved in Firebug back in the day. But I also noticed that while millions of developers used it daily, the number who made it all the way to the issue reporter was small, and the number who posted fixes in an issue was minimal (I got to know them by name). Only once do I remember a security issue being reported, even though extensions had such broad, unrestricted access back then.

So, if a project does not invite that kind of community, open source can be a net negative, with only blackhats having a reason to inspect the code. Or you have a social problem within the community (also common): people assume that with such a large community, surely someone has looked at X. Everyone thinks that, so no one looks at X. Years later someone does, and finds surprising things in code that had withstood the test of time.

That said, I think the case of UEFI would be different. It might be a good candidate for shared source at least, if it isn't already.

I guess it's the principle of "many eyes make all bugs shallow".

If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.

You can't know what bugs are in code for which you do not have the source, and the pool of people reading it is likely to be much smaller.

  • >If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.

    How many years did Heartbleed go unnoticed? How many exploits in open source software get reported here?

    It's not true that someone reads all of the open source code every day. The truth is, few people ever read any of it, and fewer still have the domain expertise necessary to be able to spot and patch any obvious bug, much less subtle ones. And yet this metaphysical belief in the "many eyes" persists.

    Sure, the effect exists, but there are supposed to be eyes on the proprietary code as well, and it is probably smaller than people think: for most open source projects, no one outside the maintainers ever actually studies the code.

    • I'd like to add one thing to this: Heartbleed also went unnoticed because the OpenSSL code and build process were in such a state that simply reading the code, let alone building it, cost an enormous amount of effort.

      So if you truly want to benefit from open source firmware, it also needs to come with at least some minimal level of quality. Good build documentation, automated builds in CI, and low-effort setup for development builds are often missing from software all of us deem critical.

      It is much more inviting to use a project, contribute to it, and submit improvements when the barriers to entry are low.
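      The low barrier the comment describes can be as small as one CI workflow that builds the project on every push. A minimal sketch in GitHub Actions syntax (the repo layout, the `./build.sh` entry point, and the `test` target are all assumptions for illustration, not from any real firmware project):

      ```yaml
      # Hypothetical CI workflow: anyone forking the repo gets automated
      # builds for free, lowering the barrier to inspecting and patching.
      name: build
      on: [push, pull_request]
      jobs:
        build:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - name: Build
              run: ./build.sh        # assumed single-command build script
            - name: Run tests
              run: ./build.sh test   # assumed test target
      ```

      The point is less the specific CI vendor than that a would-be reviewer can get from `git clone` to a working build without archaeology.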

Open source software is just software. That is to say it is just as secure, or insecure, as any other available software.

The open source model, however, allows for incremental improvements, patching, security updates and auditing from the community that the typical closed source model neglects to provide.

> Is open source really more secure?

I think the trend now is to believe that closed software that is actively maintained by a well resourced party is more secure than open software that is barely maintained by whoever contributes.

Binary blobs for hardware that has long shipped don't really fall into the "actively maintained" category. At least not reliably.

It seems like it depends on your threat model. If what your company is doing is valuable enough and you have a large enough organization, a motivated attacker will have access to the system’s source to run their offline analysis of it, regardless.

Background checks and interviews aren’t much of a barrier…

> Is open source really more secure?

Probably not, but ...

The issue is that open source can generally be patched by a sufficiently motivated individual when the security hole is found. If you have a proprietary firmware blob, that isn't going to happen unless there is monetary incentive for the manufacturer to do so.

As long as you remember to update.
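That "patch it yourself" path looks roughly like this. A toy sketch (the file name, the one-line "vulnerability", and the patch contents are all made up for illustration; a real fix would come from the project's repo or a CVE advisory):

```shell
# Illustrative only: hand-applying a published security fix to a local copy.
# We fabricate a one-line "vulnerable" file and a unified-diff fix for it.
mkdir -p demo && cd demo
printf 'bounds_check = false\n' > config.c

cat > fix.patch <<'EOF'
--- a/config.c
+++ b/config.c
@@ -1 +1 @@
-bounds_check = false
+bounds_check = true
EOF

patch -p1 < fix.patch    # strip the a/ prefix, apply the hunk
cat config.c             # the fix is now in the local source
```

With a proprietary blob there is no equivalent of this last step: you wait for the vendor, or you don't get the fix.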

Let's not forget that each security fix made to Open Source software is also a recipe on how to pwn people who didn't update to that fix yet. A project changelog is in part a list of holes that can be exploited.