Comment by nomel

4 years ago

It was a real-world penetration test that exposed some serious security holes in the code analysis/review process. Penetration tests are only as valuable as your response to them. If they chose to do nothing about their code review/analysis process after these vulnerabilities made it in (intentional or not), then yes, the exercise probably wasn't valuable.

Personally, I think all contributors to open source software should be treated as "bad actors": the NSA, some university mail address, whoever. I treat even myself as a bad actor whenever I write code with security in mind, which is why I use fuzzing and code-analysis tools.
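For what it's worth, the fuzzing mentioned above can be as simple as throwing random bytes at a function and logging anything that blows up. A minimal sketch of that idea (the `parse_header` target is a hypothetical stand-in for real code under review, not anything from the kernel):

```python
import random

def parse_header(raw: bytes) -> dict:
    # Hypothetical target: a naive "key: value" parser of the kind
    # a reviewer might wave through without much thought.
    key, _, value = raw.decode("utf-8").partition(":")
    return {key.strip(): value.strip()}

def fuzz(target, iterations=10_000, max_len=64, seed=0):
    # Dumb random-byte fuzzer: feed junk input to the target and
    # collect every input that raises an unexpected exception.
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:
            failures.append((data, exc))
    return failures

failures = fuzz(parse_header)
# Random bytes are usually not valid UTF-8, so .decode() raises
# UnicodeDecodeError and the fuzzer finds failing inputs quickly.
print(f"{len(failures)} crashing inputs found")
```

Real fuzzers (AFL, libFuzzer, syzkaller for the kernel) are coverage-guided rather than purely random, but the adversarial mindset is the same.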

Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.

I agree. They should take this as a learning opportunity and see what can be done to improve security and to detect malicious code being introduced into the project. What's done is done; all that matters is how you proceed from here. Banning all future commits from UMN was the right call. And it seems they're still running follow-up studies on the topic.

However, I'd also note that if you run a real-world penetration test on an unwitting, non-consenting company, you get sent to jail.

Everybody wins! The team gets valuable insight into the security of the current system, and the unethical researchers get punished!

  • A non-consensual pentest is called a "breach". At that point it's no longer testing, just like smashing a window and entering your neighbour's house is not a test of their home security system but just breaking and entering.

A real world penetration test is coordinated with the entity being tested.

  • Yeah - and usually stops short of causing actual damage.

    You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

    In this case they merged actual bugs, and now the maintainers have to revert that work, which, depending on how entangled those commits are with other changes, could cost a lot of time.

    If they were doing this in good faith, they could have stopped short of actually letting the PRs merge (even then it's rude to waste their time this way).

    This just comes across to me as an unethical academic with no real valuable work to do.

    • > You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

      Yeah, there’s a reason the US response to 9/11 wasn’t to name Osama bin Laden “Airline security researcher of the Millennium”, and it isn’t that “2001 was too early to make that judgement”.

    • But bad people don’t follow some mythical ethical framework and announce they’re going to rob the bank before doing it. There absolutely are pen tests conducted where only a single person out of hundreds is looped in. Is it unethical for supervisors to subject their employees, and possibly their users, to such environments? Since you can’t prevent this behavior at large, I take solace that it happened in a relatively benign way rather than being done by a truly malicious actor. No civilians were harmed in the demonstration of the vulnerability.

      The security community doesn’t get to have its cake and eat it too. All this responsible-disclosure “ethics” is nonsense. This is full disclosure; it’s how the world actually works. The maintainers’ response indicates to me that they are frustrated at the perceived waste of their time, but this seems like a justified use of human resources to draw attention to a real problem that high-profile open source projects face. If you break my trust I’m not going to be happy either, and I will justifiably not trust you in the future, but trying to apply some ethical framework to how “good bad actors” are supposed to behave is just silly IMO.

      And the “ban the institution” move feels more like an “I don’t have time for this” retaliation than an “I want to effectively prevent this behavior in the future” response that addresses the reality. For all we know, Linus and Greg could have been, and still might be, on board with the research, and we’re just seeing the social elements of the system tested now.

      My main point: maybe do a little more observing and a little less condemning. I find the whole event a fascinating test of one of the known vulnerabilities large open source efforts face.


The takeaway is to make sure not to accept anything that carries a risk of introducing issues.

Any patch coming from somebody having intentionally introduced an issue falls into this category.

So, banning their organization from contributing is exactly the lesson to be learned.

  • I agree, but I would say the better result, most likely unachievable now, would be to fix the holes where security relied on trusting a human's good intentions. Maybe some shift in that direction can come out of this.

Next time you rob a bank, try telling the judge it was a real world pentest. See how well that works out for you.

> It was a real world penetration test that showed some serious security holes in the code analysis/review process.

So you admit it was a malicious breach? Of course the process isn't perfect; everyone knows it isn't absolutely perfect. What kind of test is that?