Comment by EthanHeilman

4 years ago

>I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low.

I don't have data to back this up, but I've been around a while and I can tell you that papers do get rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving toward higher ethical standards in research. That doesn't mean all academics follow this trend, but it does mean unethical research is more likely to get your paper rejected from a conference.

Intentionally submitting bugs to an open source project is the sort of stunt hackers would have pulled in 1990 and then presented in a DEF CON talk.

> I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations.

IEEE seems to have no problem with this paper though.

>>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

from https://www-users.cs.umn.edu/~kjlu/

  • Section IV.A:

    > We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

    It seems that the research in this paper has been done properly.

    EDIT: since several comments come to the same point, I paste here an observation.

    They answer these objections as well, in the same section:

    > Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

    And, coming to ethics:

    > The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

    • I'm surprised that the IRB determined this to be not human subjects research.

      When I fill out the NIH's "is this human research" tool with my understanding of what the study did, it tells me it IS human subjects research, and is not exempt. There was an interaction with humans for the collection of data (observation of behavior), and the subjects haven't prospectively agreed to the intervention, and none of the other very narrow exceptions apply.

      https://grants.nih.gov/policy/humansubjects/hs-decision.htm

    • > It seems that the research in this paper has been done properly.

      How is wasting the time of the maintainers of one of the most popular open source projects "done properly"?

      Also, someone correct me if I'm wrong, but I think if you do experiments that involve other humans, you need to have their consent _before_ starting the experiment, otherwise you're breaking a bunch of rules around ethics.


    • Depends on your notion of "properly". IMO "ask for forgiveness instead of permission" is not an acceptable way to experiment on people. The "proper" way to do this would've been to request permission from the higher echelons of Linux devs beforehand, instead of blindly wasting the time of everyone involved just so you can write a research paper.


    • This points to a serious disconnect between research communities and development communities.

      I would have reacted the same way Greg did - I don't care what credentials someone has or what their hidden purpose is: if you intentionally submit malicious code, I will ban you and shame you.

      If particular researchers continue to use methods like this, I think they will find their post-graduate careers limited by the reputation they're already establishing for themselves.

    • Saying something is ethical because a committee approved it is dangerously circular reasoning (you can't justify unethical behavior just because someone, at some point, said it was ethical!).

      We can independently conclude that this kind of research has put open source projects in danger by introducing vulnerabilities that could carry serious real-world consequences. I can imagine many other ways of carrying out this experiment without the consequences it appears to have had: perhaps inviting developers to a private repository and keeping the patch from going public, or collaborating with maintainers to set up a more controlled experiment without the risks.

      By all appearances, this was unilateral and egoistic behavior, without much thought given to its real-world consequences.

      Hopefully researchers learn from it and it doesn't discourage future ethical kernel research.

    • The goal of ethical research wouldn't be to protect the Linux kernel; it would be to protect the rights and wellbeing of the people being studied.

      Even if none of the patches made it into the kernel (which doesn't seem to be true, according to other accounts), it's still possible to do permanent damage to the community of kernel maintainers.

    • Not really done properly: they were testing the integrity of the system, and that includes the process by which they notified the maintainers not to go ahead. What if that step had failed and the maintainers had missed that message?

      Essentially, the researchers were not in control to stop the experiment if it deviated from expectations. They were relying on the exact system they were testing to trigger its halt.

      We also don't know what details they gave the IRB. The study may have passed through due to the IRB's naivete on this point: the experiment had a high human component, because it was humans making many of the decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility and had laid the protocol out in those terms, I doubt the IRB would have approved it.

    • In my admittedly limited interaction with human subjects research approval, I would guess that this would not have been considered a proper setup. For one thing, there was no informed consent from any of the test subjects.


    • In their "clarifications" [1], they say:

      "In the past several years, we devote most of our time to improving the Linux kernel, and we have found and fixed more than one thousand kernel bugs"

      But someone upthread posted that this group has a total of about 280 commits in the kernel tree. That doesn't seem like anywhere near enough to fix more than a thousand bugs.

      Also, the clarification then says:

      "the extensive bug finding and fixing experience also allowed us to observe issues with the patching process and motivated us to improve it"

      And the way you do that is to tell the Linux kernel maintainers about the issues you observed and discuss with them ways to fix them. But of course that's not at all what this group did. So no, I don't agree that this research was done "properly". It shouldn't have been done at all, not the way it was done.

      [1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

    • But still, this kind of research puts undue pressure on the kernel maintainers, who have to review patches that were not submitted in good faith (where "good faith" means the author of the patch was trying to improve the kernel).


    • "in all the three cases" is mildly interesting, as 232 commits have been reverted from these three actors. To my reading this means they either have a legitimate history of contributions with three red herrings, or they have a different understanding of the word "all" than I do.

  • > IEEE seems to have no problem with this paper though.

    IEEE is just the publishing organisation and doesn't review research. That's handled by the program committee that each IEEE conference has. These committees consist of several dozen researchers from various institutions who review each paper submission. A typical paper is reviewed by 2-5 people, and the idea is that these reviewers can catch ethical problems. As you might expect, there's wide variance in how well this works.

    While problematic research still slips through the cracks, the field as a whole is getting more sensitive to ethical issues. Part of the problem is that we don't yet have well-defined processes and expectations for how to deal with these issues. People often expect IRBs to make a judgement call on ethics, but many (if not most) IRBs don't have computer scientists who can understand the nuances of a given research project, and they are therefore ill-equipped to reason about the implications.

  • The IEEE Symposium on Security and Privacy should remove this paper at once for gross ethics violations. The message should be strong and unequivocal that this type of behavior is not tolerated.

  • "To appear"

    • "To appear" has a technical meaning in academia, though—it doesn't mean "I hope"; it means "it's been formally accepted but hasn't actually been put in 'print' yet."

      That doesn't stop someone from lying about it, but it's not a casual claim, and doing so would probably bring community censure (as well as being easily disproven in time).


    • I'm not holding my breath. I don't think they will pull that paper.

      Security research is not always the most ethical branch of computer science, to put it mildly. These are the people selling exploits to oppressive regimes, or letting companies sit on "responsibly reported" bugs for years while hand-wringing that "that wasn't in the attacker model; sorry, the 'secure whatever' we sold is practically useless". Of course the overall community isn't like that, but the bad apples spoil the bunch. And the aforementioned unethical behaviour even seems widely accepted.

    • What are you trying to suggest? It's an accepted paper; the event just hasn't happened yet.