Comment by mk89
4 years ago
I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.
How to solve this "issue" without putting too much process around it? That's the challenge.
What's next, will they prove how easy it is to break into kernel developers' houses and rob them? Or prove how easy it is to physically assault kernel developers by punching them in the face at conferences? Or prove how easy it is to manipulate kernel developers to lose their life savings investing in cryptocurrency? You can count me out of those...
Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.
Just playing devil's advocate here: the surprise factor does play into it. No bad actor will ever give you a heads-up.
If the researchers had sent these patches under a different identity, that is exactly how malicious contributions would appear. The maintainers wouldn't assume malice, would waste a bunch of time communicating with the bad actor, and might NOT revert their previous, potentially harmful contributions.
> the surprising factor does play into it. No bad actor will ever give you heads-up.
I too thought like this till yesterday. Then someone made me realize that's not how getting consent works in these situations. You get consent from higher up the chain, not from the people doing the work. So Greg Kroah-Hartman could have been consulted, as he would not be personally reviewing this stuff. This would also give you a chance to understand how the release process works. You also have an advantage over the bad actors, because they would be studying the process from the outside.
3 replies →
> No bad actor will ever give you heads-up.
Yes, and if you do it without a heads-up as well, that makes you a bad actor. This university is a disgrace, and that's what the problem is and should remain.
C'est la vie. There are many things that it would be interesting to know, but the ethics of it wouldn't play out. It would be interesting to see how well Greg Kroah-Hartman resists under torture, but that does not mean it is acceptable to torture him to see if he would commit malicious patches that way.
To take a more realistic example, we could quickly learn a lot more than today about language acquisition if we could separate a few children from any human contact to study how they learn from controlled stimuli. Still, we don't do this research and look for much more complicated and lossy, but more humane, methods to study the same.
They proved nothing that wasn't already obvious. A malicious actor can get in vulnerabilities the same way a careless programmer can. Quick, call the press!
And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done or rejected a thousand times over.
Agreed. So many security vulnerabilities have been created not by malicious actors, but by people who just weren't up to the task. Buggy software and exhausted maintainers are nothing new.
What this proves to me is that perhaps lightweight contributions to the kernel should be done in memory-safe languages, and with tooling that actively highlights memory-safety issues like use-after-free. Broader Rust adoption in the kernel can't come soon enough.
I also consider Greg’s response just as much a test of UMN’s internal processes as the researcher’s attempt at testing kernel development processes. Hopefully there will be lessons learned on both sides and this benign incident makes the world better. Nobody was hurt here.
16 replies →
To me this is unrelated, and it even minimizes the issue here a little.
The purpose of the research was probably to show how easy it is to manipulate the Linux kernel in bad faith. And they did it. What are they gonna do about it besides banning the university?
1 reply →
So you are just fine knowing that any random guy can sneak any code into the Linux kernel? Honestly, I was personally expecting a higher level of review and attention to such things, considering how widely used the product is. I don't want to look like the guy who doesn't appreciate what the OSS and FSF communities do every day, even unpaid. However, this is unrelated. And probably this is what the researchers tried to prove (with unethical and wrong behavior).
I'm not fine with it. But those researchers are not helping at all.
And also, if I had to pick between a somewhat inclusive mode of work where some rando can get code included at the slightly increased risk of including malicious code, and a tightly knit cabal of developers mistrusting all outsiders per default: I would pick the more open community.
If you want more paranoia, go with OpenBSD. But even there some rando can get code submitted at times.
If you've ever done code review on a complex product, it should be quite obvious that the options are either to accept that sometimes bugs will make it in, or to commit once per week or so (not per person, one commit per week to the Linux kernel overall), once every possible test has been run on that commit.
1 reply →
> So you are just fine knowing that any random guy can sneak any code in the Linux kernel?
I mean, it is no surprise. It is even worse with proprietary software, because you are much less likely to be aware of what your own colleague/employer is doing.
Hell, seeing that the actual impact is overblown in the paper, I think the percentage caught is really good, to be honest, assuming good faith from the contributor.
> However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.
What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?
Especially when code is pushed in bad faith?
I mean, think about that for a minute. There are official competitive events to sneak malicious code that are already decades old and going strong[1]. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?
[1] https://en.wikipedia.org/wiki/Underhanded_C_Contest
Bug bounties are a different beast. Here we are talking about a bunch of guys who deliberately put stuff into your next kernel release because they come from an important university, or whatever other reason. One of the reviewers in the thread admitted that they need to pay more attention to code reviews. That sounds to me like a good first step towards solving this issue. Is that enough, though? It's an unsolvable problem, but is the current solution enough?
> Bug bounties are a different beast.
Bug bounties are more than a different beast: they are a strawman.
Sneaking vulnerabilities through a code review is even a competitive sport, and it has zero to do with bug bounties.
1 reply →