Comment by incrudible
4 years ago
I completely disagree with this framing.
A real malicious actor is going to be planted in some reputable institution, creating errors that look like honest mistakes.
How do you test whether the process catches such vulnerabilities? You do it just the way these researchers did.
Yes, it creates extra homework for some people with certain responsibilities, but that doesn't mean it's unethical. Don't shoot the messenger.
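For concreteness, here's a minimal, hypothetical sketch (invented names, not code from the actual study) of how a one-line "leak fix" can plant a double free that reads like an honest mistake:

```c
/* Hypothetical illustration only; the struct and function names are invented. */
#include <stdio.h>
#include <stdlib.h>

struct conn {
    char *buf;
};

static int register_conn(struct conn *c)
{
    (void)c;
    return -1; /* stub: simulate a registration failure */
}

static int conn_start(struct conn *c)
{
    c->buf = malloc(64);
    if (!c->buf)
        return -1;
    if (register_conn(c) < 0) {
        free(c->buf); /* looks like an honest fix for a memory leak... */
        /* ...but c->buf is left dangling: "forgetting" c->buf = NULL here
         * means a later conn_stop() frees the same pointer again. */
        return -1;
    }
    return 0;
}

static void conn_stop(struct conn *c)
{
    free(c->buf); /* double free if conn_start() already freed on its error path */
    c->buf = NULL;
}

int main(void)
{
    struct conn c = { 0 };
    if (conn_start(&c) < 0)
        fprintf(stderr, "start failed; calling conn_stop() now would double-free\n");
    /* conn_stop(&c);  <- the latent bug a reviewer has to spot */
    return 0;
}
```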
> A real malicious actor
They introduced a real vulnerability into a codebase used by billions, lowering worldwide cybersecurity, so they could jerk themselves off over a research paper.
They are a real malicious actor and I hope they get hit by the CFAA.
There is a specific subsection of the CFAA that applies to this situation (deployment of unauthorized code that makes its way into non-consenting systems).
This was a bold and unwise exercise, especially if you're an academic who participated while in the country on a revocable visa.
Others would call submitting the patches outright stupid, and it would be fine if there were further consequences to deter others.
No. There are established processes for this sort of penetration testing. Randomly sending buggy commits, or commits with security vulns, to "test the process" is extremely unethical. The Linux kernel team are not lab rats.
It's not simply unethical, it's a national security risk. Is there proof that the Chinese government was not sponsoring this "research", for example?
Linux kernel vulnerabilities affect the entire world. The world does not revolve around the U.S., and I find it extremely unlikely that a university professor in the U.S. doing research for a paper did this on behalf of the Chinese government.
It's far more likely that professor is so out of touch that they honestly think their behavior is acceptable.
If that's the case, why would they publish a paper and announce their "research" to the world?
> There are processes to do such sorts of penetration testing.
What's the process then? I doubt there is such a process for the Linux kernel, otherwise the response would've been "you did not follow the process" instead of "we don't like what you did there".
Well, if there's no process, then it's not ethical (and sometimes not legal) to purposefully introduce bad commits or do things like that. You need consent.
Firstly, it accomplishes nothing. We all already know that PRs and code submissions are a potential vector for buggy code or security vulnerabilities. This is like saying water is wet.
Secondly, it wastes the time of the people working on the Linux kernel and ruins the trust in code coming from the University of Minnesota.
All of this happened because they cared more about their own research than about the ethics of doing this sort of thing, and because they kept engaging in this behavior after receiving a warning.
This would absolutely be true if this were an authorised penetration test; however, it was unauthorised and therefore unethical.
How exactly do you "authorize" these tests? Giving advance notice would defeat the purpose, obviously.
"We're writing research on the security systems involved around the Linux kernel, would it be acceptable to submit a set of patches to be reviewed for security concerns just as if it was a regular patch to the Linux kernel?"
This is what you do as a grownup and the other side is expected to honor your request and perform the same thing they do for other commits... the problem is that people think of pen testing as an adversarial relationship where one person needs to win over the other one.
Perhaps the research simply shouldn't have been done. What are the benefits of this research? Do they outweigh the costs?
These are real malicious actors.
You don't know that, but that's also irrelevant. There's always plausible deniability with such bugs. The point is that you need to catch the errors no matter where they come from, because you can't trust anyone.
Carrying out an attack for personal gain is malicious. It doesn't matter whether the payload mines crypto, creates a backdoor for the NSA, or yields a vulnerability you can cite in a paper.
Pentesting unwitting participants is malicious, and in many cases illegal.
But that's the point: you're a security researcher seeking the honor of a PhD, not a petty criminal, so you're supposed to have a strong ethical background.
A security researcher doesn't delete a whole hard drive's worth of data just to prove they have the rights to delete things; they are trusted for exactly this reason.
It is ironic that you bring up plausible deniability here. No one as concerned about security as you profess to be should treat plausible deniability as grounds for terminating a threat analysis. In the real world, where we cannot be sure of catching every error, identifying actual threats, along with their capabilities and methods, is itself a security-enhancing analysis.
It is unethical. You cannot experiment on people without their consent. Their own university has explicit rules against this.