
Comment by alpaca128

4 years ago

> having the gall to brag about it is a new low

Even worse: They bragged about it, then sent a new wave of buggy patches to see if the "test subjects" would fall for it once again, and then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

This is thinly veiled and potentially dangerous bullying.

> This is thinly veiled and potentially dangerous bullying.

Which itself could be the basis of a follow-up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.

There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved. First they did so nonchalantly; then, after getting called out and rejected, they framed the rejection as bullying by the maintainers.

If patches end up getting approved, everything about the situation is ripe for another paper: the initial rejection, the attempt to frame it as bullying by the maintainers (which, ironically, is thinly veiled bullying itself), the impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).

Hell, even if the attempt isn't successful you could probably turn it into another paper anyway. Wouldn't be as splashy, but would still be an interesting meta-analysis of techniques bad actors can use to exploit the human nature of the open source process.

  • Yep, while the downside is that it wastes maintainers’ time and they are rightfully annoyed, I find the overall topic fascinating, not repulsive. This is a real-world red-team pen test on one of the highest-profile software projects. There is a lot to learn here all around! Hope the UMN people didn't burn goodwill by being too annoying, though. Sounds like they may not be the best red team after all...

    • A good red-team pentest would have been to just stop after the first round of patches, not to try again and then cry foul when the patches were rightfully rejected. Unless, of course, social denunciation is part of the attack (and yes, it's admittedly a pretty good side channel), but that's a rather grisly social-engineering attack, wouldn't you agree?

    • A real world red team?

      Wouldn't the correct term for that be: malicious threat actor?

      Red team penetration testing doesn't involve the element of surprise, and is pre-arranged.

      Intentionally wasting people's time, and then going further to claim you weren't, is a malicious act, as it intends to do harm.

      I agree though, it's fascinating but only in the true crime sense.


  • I agree. If it quacks like a duck and waddles like a duck, then it is a duck. Anyone secretly introducing exploitable bugs in a project is a malicious threat actor. It doesn't matter if it is a "respectable" university or a teenager; it matters what they _do_.

    • They did not secretly introduce exploitable bugs:

      Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

      https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

      > If it quacks like a duck and waddles like a duck, then it is a duck.

      A lot of horrible things have happened on the Internet by following that philosophy. I think it's imperative to learn the rigorous facts and the different interpretations of them, or we will continue to do great harm and be easily manipulated.

  • > Which itself could be the basis of a follow-up research paper.

    Seems more like low-grade journalism to me.

  • But the first paper is a Software Engineering paper (social-exploit-vector vulnerability research), while the hypothetical second paper would be a Sociology paper about the culture of FOSS. Kind of out-of-discipline for the people who wrote the first paper.

    • There's certainly a sociology aspect to the whole thing, but the hypothetical second paper is just as much social-exploit-vector vulnerability research as the first one. The only change is the status of the actor involved.

      The existing paper researched the feasibility of unknown actors introducing vulnerable code. The hypothetical second paper has the same basis, but from the vantage point of a known bad actor.

      Reading through the mailing list (as best I can), the maintainer's response to the latest buggy patches seemed pretty civil[1] in general, and even more so considering the prior behavior. And the submitter's response to that (quoted here[2]) went to the extreme end of defensiveness. Instead of addressing or acknowledging anything in the maintainer's message, the submitter:

      - Rejected the concerns of the maintainer as "wild accusations bordering on slander"

      - Stated their naivety about the kernel code, establishing themselves as a newbie

      - Called out the unfriendliness of the maintainers toward newbies and non-experts

      - Accused the maintainer of having preconceived biases

      An empathetic reading of their response is that they really are a newbie trying to be helpful who got defensive after feeling attacked. But a cynical reading is that they're attempting to exploit high-visibility social issues to pressure or coerce the maintainers into accepting patches from a known bad actor.

      The cynical interpretation is as much social-exploit-vector vulnerability research as what they did before. Considering how they deflected the maintainer's concerns stemming from their prior behavior and immediately pulled a whole bunch of hot-button social issues into the conversation, the cynical interpretation seems at least plausible.

      [1] https://lore.kernel.org/linux-nfs/YH5%2Fi7OvsjSmqADv@kroah.c...

      [2] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

And they tried to blow the "preconceived biases" dog whistle. I read that as a threat.

It isn't even bullying. It is just dumb?

Fortunately, the episode also suggests that the kernel-development immune system is fully operational.

  • Not sure. From what I read, they successfully introduced a vulnerability in their first attempt. Would anyone have noticed if they hadn't called more attention to their activities?

    • Can you point to this, please? From my reading, it appears that their earlier patches were merged, but there is no mention of them being actual vulnerabilities. The LKML thread does mention they want to revert these patches, just in case.


There are some activities that should be "intimidating to newbies" though, shouldn't there? I can think of a lot of specific examples, but in general, anything where significant preparation is helpful in avoiding expensive (or dangerous) accidents. Or where lack of preparation (or intentional "mistakes" like in this case) would shift the burden of work unfairly onto someone else. Also, a "newbie" in the context of Linux system programming would still imply reasonable experience and skill in writing code, and in checking and testing your work.

I'm gonna go against the grain here and say I don't think this is a continuation of the original research. It'd be a strange change in methodology. The first paper used temporary email addresses; why switch to a single real one? The first paper alerted maintainers as soon as patches were approved; why switch to allowing them to make it through to stable? The first paper focused on a few subtle changes; why switch to random scattershot patches? Sure, this person's advisor is listed as a co-author of the first paper, but that really doesn't imply the level of coordination that people are assuming here.

  • It doesn't really matter that he/they changed their MO, because they've already shown themselves to be untrustworthy. You can only get the benefit of the doubt once.

    I'm not saying people or institutions can't change. But the burden of proof is now on them to show that they have. A good first step would be to acknowledge that there IS a good reason for doubt, and certainly not to whine about 'preconceived bias'.

  • They had already done it once without asking for consent. At least in my eyes, that makes them (everyone on the team) lose their credibility. Notifying the kernel maintainers afterwards is irrelevant.

    It is not the job of the kernel maintainers to justify the team's new nonsense patches. If the team has stopped bullshitting, they should defend the merit of their own patches. They have failed to do so, instead trying to deflect with recriminations, and now they are banned.

At this point, how do you even tell the difference between their genuine behavior and the behavior that is part of the research?

  • I would say that, from the point of view of the kernel maintainers, that question is irrelevant, as they never agreed to take part in any research. Therefore, from their perspective, all the behaviour is genuinely malevolent regardless of the individual intentions of each UMN researcher.

    • It does prevent anyone with a umn.edu email address, whether a student or a professor, from submitting patches of _any kind,_ even if they're not part of research at all. A professor might genuinely just find a bug in the Linux kernel running on their machines, fix it, and be unable to submit the fix.

      To be clear, I don't think what the kernel maintainers did is wrong; it's just sad that all past and future potentially genuine contributions to the kernel from the university have been caught in the crossfire.


  • It would be hard to show this wasn't genuine behaviour but rather a malicious attempt to infect the Linux kernel. That still doesn't give them a pass, though. Academia is full of copycat “scholars”. Kernel maintainers would end up wasting significant chunks of their time fending off this type of “research”.

    • The kernel maintainers don't need to show or prove anything, nor do they owe anyone an explanation. The university's staff and students are banned, and their work will be undone within a few days.

      The reputational damage will be lasting, both for the researchers, and for UMN.

  • One could probably write a paper about evil universities doing stupid things. Anyway, evil actions are evil regardless of the context: research 100 years ago was intentionally evil without being questioned; today, ethics should filter what research should and should not be done.

>then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

As soon as I read that, all sympathy for this clown went out the window. He knows exactly what he's doing.

Why not just call it what it is: fraud. They tried to deceive the maintainers into incorporating buggy code under false pretenses. They lied (yes, let's use that word) about it, then doubled down on the lie when caught.

This looks like a very cynical attempt to leverage PC language to manipulate people. Basically a social-engineering attack. They will surely try to present it as a pentest, but IMHO it should be treated as an attack.

I don't see any sense in which this is bullying.

  • I come to your car, cut your brakes, and tell you just before you drive off that it's just research and I will repair them. What would you call a person like that?