Comment by random5634

4 years ago

How does something like this get through IRB - I always felt IRB was over the top - and then they approve something like this?

UMN looks pretty shoddy - the response from the researcher saying these were automated by a tool looks like a potential lie.

They obtained an "IRB-exempt letter" because their IRB found that this was not human research. It's quite likely that the IRB made this finding based on a misrepresentation of the research during that initial stage; once they had an exemption letter the IRB wouldn't be looking any closer.

  • Not necessarily. And conflating IRB exemption with "not human subjects research" is not exactly correct.[0]

    Each institution, and each IRB is made up of people and a set of policies. One does not have to meaningfully misrepresent things to IRBs for them to be misunderstood. Further, exempt from IRB review and 'not human subjects research' are not actually the same thing. I've run into this problem personally - IRB declines to review the research plan because it does not meet their definition of human subjects research, however the journal will not accept the article without IRB review. Catch-22.

    Further, research that involves deception is also considered a perfectly valid form of research in certain fields (e.g., Psychology). The IRB may not have responded simply because they see the complaint as invalid. Their mandate is protecting human beings from harm, not random individuals who email them out of annoyance. They don't have in their framework protecting the Linux kernel from harm any more than they have protecting a jet engine from harm (sorry if that sounds callous). Someone not liking a study is not research misconduct, and if the IRB determined within their processes that it isn't even human subjects research, there isn't a lot they can do here.

    I suspect that this is just one of those disconnects that happens when people talk across disciplines. No misrepresentation was needed; all that was needed was for someone reviewing this, whose background is medicine and not CS, to not understand the organizational and human processes behind submitting a software 'patch'.

    The follow-up behavior...not great...but the start of this could be a series of individually rational actions that combine into something problematic because they were not holistically evaluated in context.

    [0] https://oprs.usc.edu/irb-review/types-of-irb-review/

    • Yes, your comment is the only one across the two threads which understands the nuance of the definition of human subjects research. This work is not "about" human subjects, and even the word "about" is interpreted a certain way in IRB review. If they interpret the research to be about software artifacts, and not human subjects, then the work is not under IRB purview (it can still be determined to be exempt, but that determination is from the IRB and not the PI).

      However, given that, my interpretation of the federal common rule is that this work would indeed fit the definition of human subjects research, as it comprises an intervention, and it is about generalizable human procedures, not the software artifact.

    • > Further, research that involves deception is also considered a perfectly valid form of research in certain fields

      The type of deception that is allowable in such cases is lying to participants about what it is that is being studied, such as telling people that they are taking a knowledge quiz when you are actually testing their reaction time.

      Allowable deception does not include invading the space of people who did not consent to be studied under false pretenses.

    • > They don't have in their framework protecting the linux kernel from harm any more than they have protecting a jet engine from harm (Sorry if that sounds callous).

      It sounds pretty callous if that jet engine gets mounted on a plane that carries humans. In this hypothetical the IRB absolutely should have a hand in stopping research that has a methodology that includes sabotaging a jet engine that could be installed on a passenger airplane.

      Waving it off as an inanimate object doesn't feel like it captures the complete problem, given that there are many safety-critical systems that can depend on the inanimate object.

  • That's what it seemed like to me as well. In their research paper, they did not mention the individuals they interacted with at all.

    They also lied in the paper about their methodology - claiming that once their code was accepted, they told the maintainers it should not be included. In reality, several of their bad commits made it into the stable branch.

    • I don’t think that’s what’s happening here. The research paper you’re talking about was already published, and supposedly only consisted of 3 patches, not the 200 or so being reverted here.

      So it’s possible that this situation has nothing to do with that research, and is just another unethical thing that coincidentally comes from the same university. Or it really is a new study by the same people.

      Either way, I think we should get the facts straight before the wrong people are attacked.

    • > In reality, several of their bad commits made it into the stable branch.

      Is it known whether these commits were indeed bad? It is certainly worth removing them just in case, but is there any confirmation?

  • My understanding is that it's pretty common for CS departments to get IRB exemption even when human participants are tangentially involved in studies.

    • It is also quite easy to pull the wool over an IRB's eyes. An IRB is usually staffed with a few people from the medicine, biology, and psychology departments, and maybe (for the good ethical looks) philosophy and theology. Usually they aren't really qualified to know what a computer scientist is talking about when describing their research.

      And also, given that the stakes are higher e.g. in medicine, and the bar is lower in biology, one often gets a pass: "You don't want to poke anyone with needles, no LSD and no cages? Why are you asking us then?" Or something to that effect. The IRBs are just not used to such "harmless" things not being justified by the research objective.

    • I've seen from a distance one CS department struggle with IRB to get approval for using Amazon Mechanical Turk to label pictures for computer vision datasets. I believe the resolution was creating a specialized approval process for that family of tasks.

> the response from the researcher saying these were automated by a tool looks like a potential lie.

To be clear, this is unethical research.

But I read the paper, and these patches were probably automatically generated by a tool (or perhaps guided by a tool, and filled in concretely by a human): their analyses boil down to a very simple LLVM pass that just checks for pointer dereferences and, before those dereferences, inserts calls to functions identified as performing frees/deallocations. Page 9 and onwards of the paper[1] explains it in reasonable detail.

[1]: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
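To give a rough sense of what such a tool does, here is a toy sketch of the core idea in Python. This is not the authors' actual LLVM pass (which operates on real IR); the instruction encoding and the set of "freeing" function names here are invented for illustration only.

```python
# Toy sketch of the bug-inserting analysis described in the paper:
# immediately before each pointer dereference, insert a call to a
# function known to free that pointer, creating a use-after-free.
# The instruction encoding and function names are hypothetical.

# Functions the (hypothetical) analysis treats as freeing their argument.
FREEING_FUNCS = {"kfree", "put_device"}

def insert_uaf(instructions):
    """Return a new instruction list with a freeing call inserted
    directly before every pointer dereference."""
    free_fn = sorted(FREEING_FUNCS)[0]   # pick one deterministically
    out = []
    for op, *args in instructions:
        if op == "deref":                # e.g. ("deref", "p") ~ use of *p
            ptr = args[0]
            out.append(("call", free_fn, ptr))   # free p before its use
        out.append((op, *args))
    return out

original = [
    ("assign", "p", "alloc()"),
    ("deref", "p"),                      # legitimate use of p
    ("ret",),
]

mutated = insert_uaf(original)
# mutated now contains ("call", "kfree", "p") right before ("deref", "p"),
# i.e. the dereference of p has become a use-after-free.
```

The real pass works the other way around from a bug *finder*: instead of reporting that a free precedes a use, it manufactures that ordering.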

  • Thanks for this, very helpful.

    Could they have submitted patches to fix the problems based on same tooling or was that not possible (I am not close to kernel development flow)?

    • > Could they have submitted patches to fix the problems based on same tooling or was that not possible (I am not close to kernel development flow)?

      Depends on what you mean: they knew exactly what they were patching, so they could easily have submitted inverse patches. On the other hand, the converse research problem (patching existing use-after-frees rather than inserting new ones) is currently unsolved in the general case.

I have a feeling that methods of patching the Linux kernel is a concept most members of IRB boards wouldn't understand at all. It's pretty far outside their wheelhouse.

IRB is useless. They don't use much context, including whether speedy IRB approval would save lives. You could make a reasonable argument that IRB has contributed to millions of preventable deaths at this point; with COV alone it's at least tens of thousands, if not far more.

  • This is the unfortunate attitude that leads to bad research and reduces trust in science. If you think IRB has contributed to deaths you should make a case, because right now you sound like a blowhard.

  • By COV do you mean Covid? It sounds like you're alluding to the argument that if they'd only let us test potential vaccines on humans right away then we would have had a vaccine faster. I disagree that that's a foregone conclusion, and you certainly need a strong argument or evidence to make such a claim.