
Comment by lmm

2 years ago

> By this definition, bugs like Heartbleed were indeed deep: they were not randomly found by a million eyeballs in a few days; they were found by months of careful scrutiny by experts.

> The fact that a bug can be fixed with a one-line change doesn't mean that it's a shallow bug. A one-line bug can induce rare, hard-to-observe behavior in very specific corner cases. Even if users are hitting it, the bug reports can be entirely useless at face value: they can look like a crash that someone saw once, with no idea what was special about that one time (I bet you would find OpenSSL bug reports like this that were symptoms of Heartbleed going back much longer).

> This is why you need a complex and time-consuming QA process to even identify the long tail of deep bugs, which no amount of casual eyeballs will replace.

This is the point in dispute. As far as I can see, Heartbleed did not require any special knowledge of the codebase; a drive-by reviewer taking a look at that single file had just as much chance of finding the bug as a dedicated maintainer familiar with the specific codebase. The fact that it was discovered independently by two different teams, at least one of which was doing a general security audit rather than specifically targeting OpenSSL, supports that.
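
To see why a drive-by reviewer could plausibly have caught it, here is a minimal C sketch of the class of bug involved - an attacker-controlled length field trusted when echoing a payload back. This is illustrative only, not OpenSSL's actual code; all names are made up:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the Heartbleed class of bug (illustrative names, not
 * OpenSSL's actual code). `claimed_len` comes off the wire and is
 * attacker-controlled; `received_len` is how many payload bytes
 * actually arrived in the record. */
static int echo_heartbeat(const unsigned char *payload, size_t claimed_len,
                          size_t received_len, unsigned char *out)
{
    /* The kind of one-line bounds check whose absence made Heartbleed
     * possible: without it, memcpy reads past the received data and
     * leaks adjacent process memory back to the peer. */
    if (claimed_len > received_len)
        return -1;
    memcpy(out, payload, claimed_len);
    return 0;
}

int main(void)
{
    unsigned char request[4] = { 'p', 'i', 'n', 'g' };
    unsigned char reply[65536];

    /* A malicious heartbeat: 4 bytes actually sent, 65535 claimed. */
    if (echo_heartbeat(request, 65535, sizeof request, reply) != 0)
        puts("rejected: claimed length exceeds received length");
    return 0;
}
```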

The fact that it was only found 2 years after being introduced (by non-attackers at least), in one of the most used pieces of software in the world, suggests that it wasn't actually shallow by any definition.

I don't think it's relevant that it could have been found by anyone. We know empirically that it just wasn't. It was found by security auditors, which is just about as far from a random eyeball as you can get.

Edit: An even more egregious example is, of course, the ~20-year-old Shellshock family of bugs in bash.

  • > The fact that it was only found 2 years after being introduced (by non-attackers at least), in one of the most used pieces of software in the world, suggests that it wasn't actually shallow by any definition.

    Or that few people were looking.

    > I don't think it's relevant that it could have been found by anyone. We know empirically that it just wasn't. It was found by security auditors, which is just about as far from a random eyeball as you can get.

    It was found by security people with security skills. But those were not people closely associated with the OpenSSL project; in fact, as far as I can see, they weren't prior contributors or project members at all. That very much supports ESR's argument.

    • > It was found by security people with security skills. But those were not people closely associated with the OpenSSL project; in fact, as far as I can see, they weren't prior contributors or project members at all. That very much supports ESR's argument.

      It doesn't. ESR's argument suggests OpenSSL should not hire security researchers to look for bugs, since all bugs are shallow and people will quickly find them - the Bazaar approach.

      What Heartbleed has shown is that the OpenSSL project would be of much higher quality if it took a more Cathedral-like approach, actively recruiting security researchers to work on it and making them part of its release process. Because it didn't, it shipped with a critical security vulnerability for more than a year (and there are very possibly many others).

      Especially in security, it's clear that some bugs are deep. Any project that cares about security has to follow a Cathedral-like approach to looking for them. Releasing security-critical code early only makes the problem worse, not better.


    • If "few people were looking" at OpenSSL, one of the most widely used pieces of open source software in the entire industry, then Eric Raymond's point is refuted.
