Comment by thaeli
4 years ago
They obtained an "IRB-exempt letter" because their IRB found that this was not human research. It's quite likely that the IRB made this finding based on a misrepresentation of the research during that initial stage; once they had an exemption letter the IRB wouldn't be looking any closer.
Not necessarily. And the conflation of IRB-exemption and not human subjects research is not exactly correct.[0]
Each institution, and each IRB, is made up of people and a set of policies. One does not have to meaningfully misrepresent things to IRBs for them to be misunderstood. Further, "exempt from IRB review" and "not human subjects research" are not actually the same thing. I've run into this problem personally - the IRB declines to review the research plan because it does not meet their definition of human subjects research, but the journal will not accept the article without IRB review. Catch-22.
Further, research that involves deception is also considered a perfectly valid form of research in certain fields (e.g., Psychology). The IRB may not have responded simply because they see the complaint as invalid. Their mandate is protecting human beings from harm, not random individuals who email them from annoyance. They don't have in their framework protecting the linux kernel from harm any more than they have protecting a jet engine from harm (Sorry if that sounds callous). Someone not liking a study is not research misconduct and if the IRB determined within their processes that it isn't even human subjects research, there isn't a lot they can do here.
I suspect that this is just one of those disconnects that happens when people talk across disciplines. No misrepresentation was needed; all that was needed was for someone reviewing this, whose background is medicine and not CS, to not understand the organizational and human processes behind submitting a software 'patch'.
The follow-up behavior...not great...but the start of this could be a series of individually rational actions that combine into something problematic because they were not holistically evaluated in context.
[0] https://oprs.usc.edu/irb-review/types-of-irb-review/
Yes, your comment is the only one across the two threads which understands the nuance of the definition of human subjects research. This work is not "about" human subjects, and even the word "about" is interpreted a certain way in IRB review. If they interpret the research to be about software artifacts, and not human subjects, then the work is not under IRB purview (it can still be determined to be exempt, but that determination is from the IRB and not the PI).
However, given that, my interpretation of the federal common rule is that this work would indeed fit the definition of human subjects research, as it comprises an intervention, and it is about generalizable human procedures, not the software artifact.
Other note...different IRBs treat "not research" vs. "exempt" differently.
One institution I worked with conflated “exempt” and “not human subjects research” and required the same review of both.
Another institution separated them and would first establish whether something was human subjects research. If it was, they would then review whether it was exempt from IRB review based on certain categories. If they determined it was not human subjects research, they would not review whether it met the exempt criteria, because in their mind they could not make such a determination for research that did not involve human subjects.
I agree with your last paragraph, although I can totally understand how somebody who doesn’t know much about programming or open source would see otherwise.
> Further, research that involves deception is also considered a perfectly valid form of research in certain fields
The type of deception that is allowable in such cases is lying to participants about what it is that is being studied, such as telling people that they are taking a knowledge quiz when you are actually testing their reaction time.
Allowable deception does not include invading the space of people who did not consent to be studied under false pretenses.
> They don't have in their framework protecting the linux kernel from harm any more than they have protecting a jet engine from harm (Sorry if that sounds callous).
It sounds pretty callous if that jet engine gets mounted on a plane that carries humans. In this hypothetical the IRB absolutely should have a hand in stopping research that has a methodology that includes sabotaging a jet engine that could be installed on a passenger airplane.
Waving it off as an inanimate object doesn't feel like it captures the complete problem, given that there are many safety-critical systems that can depend on the inanimate object.
Your extrapolation provides clear context about how this can harm people, which is within an IRB's purview and likely their ability to understand.
I’m not saying it is okay, I’m simply saying how this could happen.
It requires understanding the connection between inanimate object and personal harm, which in this case is 1) non-obvious and 2) not even something I necessarily accept within a Common Rule definition of harm.
Annoyance or inconvenience is not a meaningful human harm within the IRB framework.
But, fundamentally, the IRB did not see this as human research. You and I and the commenters see how that is wrong. That is where their evaluation ended...they did not see human involvement, right or wrong.
And IRB review is part of the discussion of research ethics; it is neither the beginning nor the end of doing ethical research.
Here is a case, where one university's (Portland State University) IRB saw that sending satire articles to social science journals "violated ethical guidelines on human-subjects research".
https://en.wikipedia.org/wiki/Peter_Boghossian#Research_misc...
That is actually a useful example for comparison.
* The researcher is a professor in the humanities, which typically does not deal with human subjects research and its (often) vague and confusing boundaries. Often, people from outside the social sciences and medical/biology fields struggle a bit with IRBs because things don't seem rational until you understand the history and details. Just like someone from CS.
* The researcher in your example DID NOT seek review by an IRB (per my memory of the situation). That was the problem. The kernel bug authors seem to have engaged with their IRB. The difference is not doing it at all vs. a misunderstanding.
* The comments about seeking consent before submitting the fake papers ignore that it is perfectly possible to have done this WITHOUT a priori informed consent. It is perfectly possible for IRBs to review and approve studies involving deception. In those cases, informed consent is not required to collect data.
* Finally, people on IRBs tend to be academics and are highly likely to have some understanding of how a journal works. That would mean they understand the human role in journal publishing. The exact same IRB may well not have anyone with CS experience and may have looked at the kernel study and seen the human role differently than journal study.
* Lastly, the fact that the IRB in your example looked at 'animal rights' is telling. They were trying to figure out what Peter did. He published papers with data about experiments on animals...that would require IRB review. The fact that that charge was dismissed when they figured out no such experiments occurred is telling about who is acting in good faith.
My understanding in this case is not that the IRB declined to review the study plan, but that (quoting the study authors) "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)." (more information here: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....)
Do you think that the IRB was correct to make the determination they did? It does sound like a bit of a grey area
From the letter:
> The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning.
So the statement is a bit unclear to me, and I’m hesitant to come to a conclusion because I have not seen what they submitted.
As I read this they are saying:
* we explained the study to the IRB and asked whether it met their definition of human subjects research - based on our description, they said it is not human subjects research
* therefore we did not apply to the IRB to have the study assessed under the appropriate type of review.
Exempt is a type of IRB review; basically it is a low-level desk review of a study. It does not mean no one looks at it, it just means the whole IRB doesn't have to discuss it.
I can see both sides of this. IRBs focus on protecting the rights of research participants. The assumption in their cognitive models is of direct participants. This study ended up having indirect participants. I would argue that it is the researchers' job to clarify that and ensure it was reviewed. However, it is almost a certainty this study would have been approved as exempt.
I think the IRB likely did the right thing based on the information provided to them. The harm that HN is identifying does not fall within the normal IRB definitions of harm anyway...which is direct harm to people. The causal chain HN is spun up about is very real...just not how an IRB typically views research.
That's what it seemed like to me as well. Based on their research paper, they did not mention the individuals they interacted with at all.
They also lied in the paper about their methodology - claiming that once their code was accepted, they told the maintainers it should not be included. In reality, several of their bad commits made it into the stable branch.
I don’t think that’s what’s happening here. The research paper you’re talking about was already published, and supposedly only consisted of 3 patches, not the 200 or so being reverted here.
So it’s possible that this situation has nothing to do with that research, and is just another unethical thing that coincidentally comes from the same university. Or it really is a new study by the same people.
Either way, I think we should get the facts straight before the wrong people are attacked.
> In reality, several of their bad commits made it into the stable branch.
Is it known whether these commits were indeed bad? It is certainly worth removing them just in case, but is there any confirmation?
https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
I don't think we know if they contain bugs, but from what I gathered reading the mailing list, we do know that they added nothing of value.
My understanding is that it's pretty common for CS departments to get IRB exemption even when human participants are tangentially involved in studies.
It is also quite easy to pull the wool over an IRB's eyes. An IRB is usually staffed with a few people from the medicine, biology, and psychology departments, and maybe (for the good ethical looks) philosophy and theology. Usually they aren't really qualified to know what a computer scientist is talking about when describing their research.
And also, given that the stakes are higher e.g. in medicine, and the bar is lower in biology, one often gets a pass: "You don't want to poke anyone with needles, no LSD and no cages? Why are you asking us then?" Or something to that effect. The IRBs are just not used to such "harmless" things not being justified by the research objective.
See my other comment to the GP. "Pulling the wool" suggests agency and intentionality that isn't necessarily present when you have disciplinary differences like you describe. Simple miscommunication, e.g., using totally normal field terminology that does not translate well, is different.
I've seen from a distance one CS department struggle with IRB to get approval for using Amazon Mechanical Turk to label pictures for computer vision datasets. I believe the resolution was creating a specialized approval process for that family of tasks.
That sounds like a disconnect from reality.
I think it is because many labs in CS departments do very little research involving human subjects (e.g. a machine learning lab or a theory lab), so within those labs there isn't really an expectation that everything goes through IRB. Many CS graduate students likely never have to interact with IRB at all, so they probably don't even know when it is necessary to involve IRB. The rules for what requires IRB involvement are also somewhat open to interpretation. For example, surveys are often exempt depending on what the survey is asking about.