Comment by MaxBarraclough
4 years ago
Perhaps I'm missing something obvious, but what's the point of all this subterfuge in the first place? Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected? What does it matter whether the contributor knew ahead of time that they were submitting insecure code?
It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused. There's no point doing this; you can just search Wikipedia's edit history for corrections and start your analysis from there.
> What does it matter whether the contributor knew ahead of time that they were submitting insecure code?
It's a specific threat model they were exploring: a malicious actor introducing a vulnerability on purpose.
> Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected?
Perhaps they could. I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples where a vulnerability was proven to have been introduced on purpose.
> what's the point of all this subterfuge in the first place?
Control over the experimental setup, which is important for the validity of the research. Notice how most research involves gathering fresh subjects and controls - scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for. They want fresh subjects to better account for possible confounders, and hopefully to make the experiment reproducible.
(Similarly, when chasing software bugs, you could analyze old crash dumps all day to try to identify a bug - and you may start with that - but you always want to eventually reproduce the bug yourself. Ultimately, "I can, and did, do that" is always better than "looking at past data, I guess it could happen".)
> It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused.
Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (a little additional vandalism doesn't matter on the margin; the base rate is already absurd), and it could yield some social good. Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by a random individual.
Also note that both Wikipedia and the Linux kernel are essentially infrastructure now. Running research like this against them makes sense, whereas running the same research against a random small site or OSS project wouldn't.
> It's a specific threat model they were exploring: a malicious actor introducing a vulnerability on purpose.
But does that matter? We can imagine that the error-prone developer who submitted the buggy patch just had a different mindset. Nothing about the patch changes. In fact, a malicious actor is explicitly trying to act like an error-prone developer and would (if skilled) be indistinguishable from one. So we'd expect the maintainer response to be the same.
> I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples where a vulnerability was proven to have been introduced on purpose.
In line with UncleMeat's comment, I'm not convinced it's of any consequence that the security flaw was introduced deliberately, rather than by accident.
> scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for
That doesn't sound like a fair description of what's happening here.
There are two things at play. Firstly, an analysis of the survival function [0] associated with security vulnerabilities in the kernel. Secondly, the ability of malicious developers to deliberately introduce new vulnerabilities. (The technical specifics detailed in the paper are not relevant to our discussion.)
I'm not convinced that this unethical study demonstrates anything of interest on either point. We already know that security vulnerabilities make their way into the kernel. We already know that malicious actors can write code with intentional vulnerabilities, and that it's possible to conceal these vulnerabilities quite effectively.
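To make that concrete, here's a rough, self-contained sketch of the kind of retrospective analysis I have in mind - a Kaplan-Meier-style estimate of how long vulnerabilities survive before being detected. The CVE introduction/fix dates below are invented purely for illustration; real kernel data would need careful curation of when each bug actually entered the tree:

    # Hypothetical sketch: estimating how long kernel vulnerabilities survive
    # before being detected, using only historical data (no live experiment).
    # The records below are made up for illustration.
    from datetime import date

    # (date introduced, date fixed or None) -- None means still undetected.
    records = [
        (date(2018, 3, 1), date(2019, 7, 15)),
        (date(2017, 11, 20), date(2021, 2, 3)),
        (date(2019, 6, 5), None),  # still open at analysis time (censored)
    ]
    analysis_date = date(2021, 4, 1)

    # Build (duration_in_days, observed) pairs; censored entries are cut off
    # at the analysis date.
    samples = []
    for introduced, fixed in records:
        end = fixed if fixed is not None else analysis_date
        samples.append(((end - introduced).days, fixed is not None))

    # Kaplan-Meier estimate of S(t): the probability a vulnerability is
    # still undetected after t days.
    samples.sort()
    at_risk = len(samples)
    survival = 1.0
    for duration, observed in samples:
        if observed:
            survival *= (at_risk - 1) / at_risk
            print(f"after {duration:5d} days: S(t) ~= {survival:.2f}")
        at_risk -= 1

None of this requires submitting a single bad patch; it only requires the sort of historical vulnerability data that already exists.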
> Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (a little additional vandalism doesn't matter on the margin; the base rate is already absurd), and it could yield some social good.
That's like saying it's OK to deface library books, provided it's a large library and other people are also defacing them.
Also, it would not yield a social good. As I already said, it's possible to study Wikipedia's ability to repair vandalism without committing vandalism. This isn't hypothetical; it's something various researchers have done. [0][1]
> Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by a random individual.
It isn't. Universities have ethics boards. They are held to a higher ethical standard, not a lower one.
> Running research like this against them makes sense
No one is contesting that Wikipedia is worthy of study.
[0] https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...
[1] https://en.wikipedia.org/wiki/Wikipedia:Counter-Vandalism_Un...
It potentially has a long-term negative impact on the experimental subjects involved and has no research benefit. The researchers should be removed from the university, and the university itself should be sued and lose enough money that it acts more responsibly in the future. It's a very slippery slope from casual IRB waivers to Tuskegee experiments.
Ah, but you're missing the fact that discovered vulnerabilities are now trophies in the security industry. This is potentially gold on your CV.
Of note here: Wikipedia has a specific policy prohibiting this sort of experimentation. https://en.wikipedia.org/w/index.php?title=Wikipedia:NOTLAB