
Comment by beshrkayali

4 years ago

This seems like a pretty scummy way to do "research". I mean, I understand that people in academia are becoming increasingly disconnected from the real world, but wow, this is low. It's not just that they're doing this (I'm sure they're not the first to think of it, for research or malicious reasons), but having the gall to brag about it is a new low.

> having the gall to brag about it is a new low

Even worse: they bragged about it, then sent a new wave of buggy patches to see if the "test subjects" would fall for it once again, and then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

This is thinly veiled and potentially dangerous bullying.

  • > This is thinly veiled and potentially dangerous bullying.

    Which itself could be the basis of a follow-up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.

    There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved: first nonchalantly, and then, after getting called out and rejected, by framing the rejection as bullying by the maintainers.

    If patches end up getting approved, everything about the situation is ripe for another paper: the initial rejection, the attempt to frame it as bullying by the maintainers (which, ironically, is thinly veiled bullying itself), the impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).

    Hell, even if the attempt isn't successful, you could probably turn it into another paper anyway. It wouldn't be as splashy, but it would still be an interesting meta-analysis of the techniques bad actors can use to exploit the human element of the open source process.

    • Yep. While the downside is that it wastes maintainers' time and they are rightfully annoyed, I find the overall topic fascinating, not repulsive. This is a real-world red-team pen test on one of the highest-profile software projects. There is a lot to learn here all around! Hope the UMN people didn't burn goodwill by being too annoying, though. Sounds like they may not be the best red team after all...

      4 replies →

    • I agree. If it quacks like a duck and waddles like a duck, then it is a duck. Anyone secretly introducing exploitable bugs into a project is a malicious threat actor. It doesn't matter whether it is a "respectable" university or a teenager; what matters is what they _do_.

      1 reply →

    • > Which itself could be the basis of a follow-up research paper.

      Seems more like low-grade journalism to me.

    • But the first paper is a Software Engineering paper (social-exploit-vector vulnerability research), while the hypothetical second paper would be a Sociology paper about the culture of FOSS. Kind of out-of-discipline for the people who wrote the first paper.

      1 reply →

  • It isn't even bullying. It is just dumb?

    Fortunately, the episode also suggests that the kernel-development immune system is fully operational.

    • Not sure. From what I read, they successfully introduced a vulnerability on their first attempt. Would anyone have noticed if they hadn't called more attention to their activities?

      7 replies →

  • There are some activities that should be "intimidating to newbies" though, aren't there? I can think of a lot of specific examples, but in general: anything where significant preparation is helpful in avoiding expensive (or dangerous) accidents, or where lack of preparation (or intentional "mistakes", as in this case) would shift the burden of work unfairly onto someone else. Also, a "newbie" in the context of Linux system programming would still imply reasonable experience and skill in writing code, and in checking and testing one's work.

  • I'm gonna go against the grain here and say I don't think this is a continuation of the original research. It'd be a strange change in methodology. The first paper used temporary email addresses; why switch to a single real one? The first paper alerted maintainers as soon as patches were approved; why switch to allowing them to make it through to stable? The first paper focused on a few subtle changes; why switch to random scattershot patches? Sure, this person's advisor is listed as a co-author of the first paper, but that really doesn't imply the level of coordination that people are assuming here.

    • It doesn't really matter that he/they changed MO, because they've already shown themselves to be untrustworthy. You only get the benefit of the doubt once.

      I'm not saying people or institutions can't change. But the burden of proof is on them now to show that they have. A good first step would be to acknowledge that there IS a good reason for doubt, and certainly not to whine about 'preconceived bias'.

    • They had already done it once without asking for consent. At least in my eyes, that makes them (everyone on the team) lose their credibility. Notifying the kernel maintainers afterwards is irrelevant.

      It is not the job of the kernel maintainers to justify the team's new nonsense patches. If the team has stopped bullshitting, they should defend the merit of their own patches. They have failed to do so, and instead tried to deflect with recriminations, and now they are banned.

  • At this point, how do you even tell the difference between their genuine behavior and the behavior that is part of the research?

    • I would say that, from the point of view of the kernel maintainers, that question is irrelevant, as they never agreed to take part in any research. Therefore, from their perspective, all the behaviour is genuinely malevolent, regardless of the individual intentions of each UMN researcher.

      31 replies →

    • It would be hard to show that this wasn't genuine behaviour but rather a malicious attempt to infect the Linux kernel. That still doesn't give them a pass, though. Academia is full of copycat "scholars". Kernel maintainers would end up wasting significant chunks of their time fending off this type of "research".

      1 reply →

    • One could probably do a paper about evil universities doing stupid things... Anyway, evil actions are evil regardless of the context. Research 100 years ago was intentionally evil without being questioned; today, ethics should filter what research gets done.

  • > then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

    As soon as I read that, all sympathy for this clown went out the window. He knows exactly what he's doing.

  • Why not just call it what it is: fraud. They tried to deceive the maintainers into incorporating buggy code under false pretenses. They lied (yes, let's use that word) about it, then doubled down on the lie when caught.

  • This looks like a very cynical attempt to leverage PC language to manipulate people. Basically a social engineering attack. They will surely try to present it as a pen test, but IMHO it should be treated as an attack.

  • I don't see any sense in which this is bullying.

    • I come to your car, cut your brakes, and tell you just before you go for a ride that it's just research and I will repair them. What would you call a person like that?

      1 reply →

> I mean, I understand that people in academia are becoming increasingly disconnected from the real world, but wow, this is low.

I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving toward higher ethical standards in research. That doesn't mean all academics follow this trend, but it does mean that unethical research is more likely to get your paper rejected from conferences.

Submitting bugs to an open source project is the sort of stunt hackers would have done in 1990 and then presented at a defcon talk.

  • > I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations.

    IEEE seems to have no problem with this paper though.

    >>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland '21). Virtual conference, May 2021.

    from https://www-users.cs.umn.edu/~kjlu/

    • Section IV.A:

      > We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

      It seems that the research in this paper has been done properly.

      EDIT: since several comments come to the same point, I paste here an observation.

      They answer to these objections as well. Same section:

      > Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

      And, coming to ethics:

      > The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

      31 replies →

    • > IEEE seems to have no problem with this paper though.

      IEEE is just the publishing organisation and doesn't review research. That's handled by the program committee that each IEEE conference has. These committees consist of several dozen researchers from various institutions that review each paper submission. A typical paper is reviewed by 2-5 people and the idea is that these reviewers can catch ethical problems. As you may expect, there's wide variance in how well this works.

      While problematic research still slips through the cracks, the field as a whole is getting more sensitive to ethical issues. Part of the problem is that we don't yet have well-defined processes and expectations for how to deal with these issues. People often expect IRBs to make a judgement call on ethics, but many (if not most) IRBs don't have computer scientists who are able to understand the nuances of a given research project and are therefore ill-equipped to reason about the implications.

    • The IEEE Symposium on Security and Privacy should remove this paper at once for gross ethics violations. The message should be strong and unequivocal that this type of behavior is not tolerated.

Yup, it's basically stating the obvious: any system based on an assumption of good faith is vulnerable to bad-faith actors. The kernel devs are probably on the lookout for someone trying to introduce backdoors, but simply introducing a bug for the sake of introducing a bug (without knowing if it can be exploited) is obviously much easier to do stealthily. But why would anyone do that? Except for "academic research", of course...

  • > why would anyone do that?

    I can think of a whole lot of three-letter agencies with reasons to do that, most of which recruit directly from universities.

  • Academic research, cyberwarfare, a rival operating system architecture attempting to diminish the quality of an alternative to the system they're developing, the lulz of knowing one has damaged something... The reasons for bad-faith action are myriad, as diverse as human creativity.

  • In theory, wouldn't it be possible to introduce bugs that are seemingly innocuous when reviewed independently but that, when combined, form an exploit?

    Could a number of seemingly unrelated individuals introduce a number of bugs over time to form an exploit without being detected?

    • Yes, of course, and I'm fairly certain it's happened before, or at least there have been suspicions of it happening. That's why trust is important, and why I'm glad kernel development is not very friendly.

      Doing code review at work, I am constantly catching blatantly obvious security bugs. Most developers are so happy to get the thing to work that they don't even consider security. This is in high-level languages, with a fairly small team, only internal users, and a pretty simple code base. I can't imagine trying to do it for something as high-stakes and complicated as the kernel. Not to mention how subtle bugs can be in C. I suspect it is impossible to distinguish incompetence from malice. So aggressively weeding out incompetence, and then forming layers of trust, is the only real defense.
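
      To make that concrete, here is a purely hypothetical C sketch (not from the kernel or the paper; all names and structures are invented for illustration) of how two individually plausible-looking "fixes" can combine into a use-after-free:

          /* Hypothetical illustration only -- not real kernel code. */
          #include <stdio.h>
          #include <stdlib.h>

          struct buf { size_t len; char data[64]; };
          struct ctx { struct buf *buf; };

          /* "Patch A": free the buffer on the error path to "plug a leak". */
          static int process(struct ctx *ctx)
          {
              if (ctx->buf->len == 0) {
                  free(ctx->buf);
                  return -1;
              }
              return 0;
          }

          int main(void)
          {
              struct ctx ctx = { .buf = calloc(1, sizeof(struct buf)) };
              if (!ctx.buf)
                  return 1;

              if (process(&ctx)) {
                  /* "Patch B": a "helpful" diagnostic on failure. Harmless on
                   * its own, but after Patch A it reads freed memory. */
                  fprintf(stderr, "process failed, len=%zu\n", ctx.buf->len);
                  return 1;
              }

              free(ctx.buf);
              return 0;
          }

      Each patch passes review in isolation; only the combination is dangerous, which is exactly why this class of change is so hard to catch.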

    • Yes. binfmt and some other parts of systemd are one such example; they introduce vulnerabilities that existed in Windows 95. Not going into detail because it still needs to be fixed, assuming it was not intentional.

I believe this violates research ethics hard, very hard. It reminds me of someone aiming to research children's mental development by studying the effects of deliberately inflicted mental damage. The subjects and the likely damage are not similar, but the approach and mentality are uncomfortably so.

  • Yep, the first thing I thought was: how did this get through the research ethics panel? (All research at my university has to get approval.)

    • What I don't understand is how this is ethical, but the Sokal hoax was deemed unethical. I assume it's because in Sokal's case academia was humiliated, whereas here the target is outside academia.

To me, this seems like a convoluted way to hide malicious actions as research (not the other way around). This smells of intentional vulnerability introduction under the guise of academic investigation. There are millions of other, less critical, open source projects this "research" could have targeted. I believe this was an intentional targeted attack, and it should be treated as such.

The "scientific" question answered by the mentioned paper is basically:

"Can open-source maintainers make a mistake by accepting faulty commits?"

In addition to being scummy, this research seems utterly pointless to me. Of course mistakes can happen; we are all human, even the Linux maintainers.

This observation may very well get downvoted to oblivion: what UMN pulled is the Linux kernel development version of the Sokal Hoax.

Both are unethical, disruptive, and prove nothing about the integrity of the organizations they target.

  • Except that Linux is actively running on 99% of all servers on the planet. Vulnerabilities in Linux can literally kill people, open holes for hackers, spies, etc.

    Submitting a fake paper to a journal read by a few dozen academics is a threat to someone's ego. It is not in the same ballpark as a threat to IT infrastructure everywhere.

Agreed. Plus, I find the "oh, we didn't know what we were doing, you're not an inviting community" social engineering response completely slimy and off-putting.

Technically, this is analogous to pen testing, except that it wasn't done at the behest of the target, as legal pen testing is. Hence it is indistinguishable from, and must be considered, a malicious attack.

Unfortunately, we cannot be sure this counts as low for today's academia. So many people work there with nothing useful to do other than flooding the conferences and journals with papers. They are desperate for anything that could be published. Plus, they know that the standards are low, because they see the other publications.

Devil's advocate, but why? How is this different from any other white/gray-hat pen test? They tried to submit buggy patches; once those were approved, they immediately let the maintainers know not to merge them. Then they published a paper with their findings, which weak parts of the process they think are responsible, and which steps they recommend be taken to mitigate this.

  • Very easy: if it's not authorized, it's not a pen test or red-team operation.

    Any pentester or red team considers their profession an ethical one.

    Judging by the response of the Linux Foundation, this was clearly not authorized, nor does it fall under any bug bounty rules/framework they would offer. Social engineering attacks are often out of bounds for bug bounties, and even authorized engagements need to follow strict rules and procedures.

    Wonder if there are even legal steps that could be taken by the Linux Foundation.

  • You can read the (relatively short) email chains for yourself, but to try and answer your question: as I understood it, the problem wasn't entirely the patches submitted for the paper; it was the follow-up bad patches and the ridiculous defense. Essentially, they sent patches that were purportedly the result of static analysis but did nothing, broke social convention by failing to signal that the patches were the result of a tool, and it was deemed indistinguishable from further attempts to send bad code and run experiments on the Linux maintainers.

There is no separate real world distinct from academia. Saying that scientists and researchers whose job it is to understand and improve the world are somehow becoming "increasingly disconnected from the real world" is a pretty cheap shot. Especially without any proof or even a suggestion of how you would quantify that.

How is this different from black hats contributing to general awareness of web security practices? Considering open source secure just because it's up on GitHub is no different from considering plaintext HTTP GET params secure just because "who the hell will read your params in the browser", which would still be the status quo if some hackers hadn't done the "lowest of the low" and shown the world this lesson.

LKML should consider not just banning @umn.edu addresses at the SMTP level but sinkholing the whole University of MN network address space. Demand a public apology and payment for compute for the next 3 years, or get yeeted.