Comment by nullc
4 years ago
CS department security research is near universally held to be outside the scope of IRBs. This isn't entirely bad: the IRB process that projects are subjected to is so broken that it would be a sin to bring that mess on anything else.
But it means that 'security' research regularly does ethically questionable stuff.
IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable), university practices will not substantially change.
Security research has its own standards of ethics, and these researchers violated those standards.
1. You don't conduct a penetration test without permission to do so, or without rules of engagement laying out what kinds of actions and targets are permitted. The researchers did not seek permission or request RoE; they tried to ask forgiveness instead.
2. You disclose vulnerabilities immediately to the software's developers, and wait a certain period before revealing them to the public. While the researchers did immediately notify the kernel dev team in three cases, there's apparently another vulnerable commit that the researchers neither mentioned in their paper nor reported to the kernel dev team, and it was still in the kernel as of the paper's publication date.
Apparently the IRB that reviewed this project decided that no permission was needed because the experiment was on software, not people, even though the whole thing hinged on human code review practices. It's evident that the IRB doesn't understand how infosec research should be conducted, how software is developed, or how code review works. But it's also evident that the researchers themselves either didn't know or didn't care about best practices in infosec.