
Comment by Avamander

4 years ago

Propose a way to test this without invalidating the results.

1) Contact a single maintainer and explore feasibility of the study

2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it

3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches

4) Interfere before any further damage is done

Besides, are you arguing that ends justify the means if the intent behind the research is valid?

  • Perhaps I'm missing something obvious, but what's the point of all this subterfuge in the first place? Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected? What does it matter whether the contributor knew ahead of time that they were submitting insecure code?

    It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused. There's no point doing this; you can just search Wikipedia's edits for corrections and start your analysis from there.

    • > What does it matter whether the contributor knew ahead of time that they were submitting insecure code?

      It's a specific threat model they were exploring: a malicious actor introducing vulnerabilities on purpose.

      > Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected?

      Perhaps they could. I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples where a vulnerability was proven to have been introduced on purpose.

      > what's the point of all this subterfuge in the first place?

      Control over the experimental setup, which is important for the validity of the research. Notice how most research involves gathering up fresh subjects and controls - scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for. They want fresh subjects to better account for possible confounders, and hopefully make the experiment reproducible.

      (Similarly, when chasing software bugs, you could analyze old crash dumps all day to try and identify a bug - and you may start with that - but you always want to eventually reproduce the bug yourself. Ultimately, "I can do that, and did" is always better than "looking at past data, I guess it could happen".)

      > It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused.

      Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (a little additional vandalism doesn't matter on the margin; the base rate is already absurd), and could yield some social good. Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by a random individual.

      Also note that both Wikipedia and the Linux kernel are essentially infrastructure now. Running research like this against them makes sense, whereas running the same research against a random small site / OSS project wouldn't.


    • It potentially has a long-term negative impact on the experimental subjects involved and has no research benefit. The researchers should be removed from the university, and the university itself should be sued and lose enough money that it acts more responsibly in the future. It's a very slippery slope from casual IRB waivers to Tuskegee experiments.

    • Ah, but you're missing the fact that discovered vulnerabilities are now trophies in the security industry. This is potentially gold in your CV.

  • > 3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches

    Isn't this part still experimenting on people without their consent? Why does one group of maintainers get to decide that you can experiment on another group?

    • It is, but that is how security testing is generally done (in the commercial world). As for its application to research and ethics, I'm not much of an authority.

    • In general you try to obtain consent from their boss, so that if the people you pentested complain, you can point to their boss and say "Hey, they agreed to it" and that will be the end of the story. In this case it's not clear who the "boss" is, but something like the Linux Foundation would be a good start.

  • It depends.

    Does creating a vaccine justify the death of some lab animals? Probably.

    Does creating supermen justify mutilating people physically and psychologically without their consent? Hell no.

    You can’t just ignore the context.

  • > 1) Contact a single maintainer and explore feasibility of the study

    That risks the contacted maintainer later being accused of collaborating with the saboteurs, or consulting others about it. The former is very awful; the latter possibly invalidates the results.

    > 2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it

    Assuming the leadership agrees and won't break confidentiality, which they might if the results could make them look bad. The results would either be untrustworthy or potentially increase complacency.

    > 4) Interfere before any further damage is done

    That was done, was it not?

    > Besides, are you arguing that ends justify the means if the intent behind the research is valid?

    Linux users are lucky they got off this easy.

    • > That was done, was it not?

      The allegation being made on the mailing list is that some incorrect patches of theirs made it into git and even the stable trees. As there is not presently an enumeration of them, or which ones are alleged to be incorrect, I cannot state whether this is true.

      But that's the claim.

      edit: And looking at [1], they have a bunch of relatively tiny patches to a lot of subsystems, so depending on how narrowly gregkh means "rip it all out", this may be a big diff.

      edit 2: On rereading [2], I may have been incorrectly conflating the assertion about "patches containing deliberate bugs" with "patches that have been committed". Though if they're ripping everything out anyway, it appears they aren't drawing a distinction either...

      [1] - https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

      [2] - https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...


In every commercial pentest I have been in, you have 1-2 usually senior employees on the blue team in the know. Their job is to stop employees from going too far on defense, as well as to stop the pentesters from going too far. The rest of the team stays in the dark to test their response and observation.

In this case, in my opinion, a small set of maintainers and Linus as "management" would have to be in the know to, e.g., stop a merge of such a patch once it was accepted by someone in the dark.

There doesn't have to be a way.

Kernel maintainers are volunteering their time and effort to make Linux better, not to be entertaining test subjects for the researchers.

Even if there is no ethical violation, they are justified in being annoyed at having their time wasted, and in taking measures to discourage and prevent such malicious behaviour in the future.

  • > There doesn't have to be a way.

    Given the importance of the Linux kernel, there has to be a way to make contributions safer. Some people even compare it to the "water supply", and others bring up "national security".

    > they are justified to be annoyed at having their time wasted, and taking measures to discourage and prevent such malicious behaviour in the future.

    "Oh no, think of the effort we have to spend at defending a critical piece of software!"

If you can’t run an experiment without violating ethical standards, you simply don’t do it; a valid research intent can’t be used as an excuse to violate those standards.

  • Misplaced trust was broken, that's it. Linux users are incredibly lucky this was a research group and not an APT.