Comment by d110af5ccf

4 years ago

> Why should they waste their time with extra scrutiny next time?

Because well funded malicious actors (government agencies, large corporations, etc) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.

As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.

We don't have the full communication, and I understand that the intention was to be stealthy (then why use a university email that can be linked to the previous research?). However, the researcher's response seems disingenuous:

> I sent patches on the hopes to get feedback. We are not experts in the Linux kernel and repeatedly making these statements is disgusting to hear.

This is after they were caught. Why continue lying instead of apologizing and explaining? Is the lying also part of the experiment?

On top of that, they played the victim card; you can see why people would be triggered by this level of dishonesty:

> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies

  • From reading other comments about the context surrounding these events, it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake and was then caught up in the controversy surrounding his advisor's past research.

    Or perhaps it really is a second attempt by his advisor at an evil plot to sneak more buggy patches into the kernel for research purposes? Either way, the response by the maintainers seems rather disproportionate to me. And either way, I'm ultimately grateful for the (apparently unwanted?) attention being drawn to the (apparent lack of) security surrounding the Linux kernel patch review process.

    • > it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake

      Who then replies with a request for "cease and desist"? Not sure that's the right move for a humble newbie.

They should not have experimented on human subjects without consent, regardless of whether the result is considered benign.

Yes, malicious actors have a head start, because they don't care about the rules. That doesn't mean we should all abandon the rules and compete with malicious actors in this race to the bottom.

  • I'm not aware of any law requiring consent in cases such as this, only conventions enforced by IRBs and journal submission requirements.

    I also don't view unannounced penetration testing of an open source project as immoral, provided it doesn't consume an inordinate amount of resources or actually result in any breakage (ie it's absolutely essential that such attempts not result in defects making it into production).

    When the Matrix servers were (repeatedly) breached and the details published, I viewed it as a Good Thing. Similarly, I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing given how widely deployed it is. Frankly I don't care about the sensibilities of you or anyone else - at the end of the day I want my devices to be secure and at this point they are all running Linux.

I don’t see where I claimed that this is a legal matter. There are many things not prohibited by law that you can do to a fellow human being that are immoral and might result in them blacklisting you forever.

Whether you care about something also seems irrelevant, unless you are part of either the research team or the kernel maintainers. It’s not about your or my emotional inclinations.

Acquiring consent before experimenting on human subjects is an ethical requirement for research, regardless of whether it is a hurdle for the researchers. There is a reason IRBs exist.

Not to mention that they literally proved nothing, other than that vulnerable patches can be merged into the kernel. But did anybody believe such a threat was impossible anyway? The kernel has vulnerabilities and it will continue to have them. We already knew that.

    • >I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing...

So what else do you consider it appropriate to do without acquiring consent, based on some perceived justification of ubiquity? It's a slippery slope all the way down, and there is a reason for all the ceremony and hoopla involved in this type of thing. If you cannot demonstrate mastery of doing research on human subjects and processes the right way, and show you've done your footwork to consider the impact of not doing it that way (i.e. IRB fully engaged, you've gone out of your way to make sure they understand, and you've at least reached out to one person in the group under test (like Linus) to give a surreptitious heads-up), you have no business playing it fast and loose, and you absolutely deserve censure.

No points awarded for half-assing. Asking forgiveness may often be easier than asking permission, but in many areas the impact goes far beyond mere inconvenience to the researcher in the costs it can exact.

      >at the end of the day I want my devices to be secure and at this point they are all running Linux.

That is orthogonal to the outcome of the research that was being done, since by definition running Linux would include running with a newly injected vulnerability. What you really want is to know that your device is doing what you want it to, and none of what you don't. Screwing with kernel developers does precious little to accomplish that. The same logic applies to any other kind of bug injection or intentional software breakage.

    • > I'm not aware of any law requiring consent in cases such as this

In the same way, there is no law requiring Linux kernel maintainers to review patches sent by this university.

      "it was not literally illegal" is not a good reasoning for why someone should not be banned.

> As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.

This "attack" did not reveal anything interesting. It's not like any of this was unknown. Of course you can get backdoors in if you try hard enough. That does not surprise anybody.

Imagine somebody goes at your garage door with an axe, breaks it, poops on your Harley, leaves, and then calls you and says, "Oh, btw, it was me. I did you a service by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of your property. Thank me later." And then they expect to be let in when you have a party.

It doesn't work that way. Of course the garage door can be broken with an axe. You don't need a "mildly sophisticated attack" to illustrate that while wasting everybody's time.