Comment by waihtis

4 years ago

Go hack a random organization that has no vulnerability disclosure program in place and see how much goodwill you get. There is a well-established best practice for responsible disclosure, and this is far from it.

Also, by and large, reputation is a good first step in a security process.

While any USB stick might have malware on it if it's ever been out of your sight, that one you found in the parking lot is a much bigger problem.

Propose a way to test this without invalidating the results.

  • 1) Contact a single maintainer and explore the feasibility of the study.

    2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it.

    3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches.

    4) Interfere before any further damage is done.

    Besides, are you arguing that ends justify the means if the intent behind the research is valid?

    • Perhaps I'm missing something obvious, but what's the point of all this subterfuge in the first place? Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected? What does it matter whether the contributor knew ahead of time that they were submitting insecure code?

      It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused. There's no point doing this; you can just search Wikipedia's edits for corrections and start your analysis from there.


    • > 3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches

      Isn't this part still experimenting on people without their consent? Why does one group of maintainers get to decide that you can experiment on another group?


    • It depends.

      Does creating a vaccine justify the death of some lab animals? Probably.

      Does creating supermen justify mutilating people physically and psychologically without their consent? Hell no.

      You can’t just ignore the context.

    • > 1) Contact a single maintainer and explore feasibility of the study

      That risks the contacted maintainer later being accused of collaborating with the saboteurs, or consulting other maintainers. The former would be very awful for them; the latter could invalidate the results.

      > 2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it

      This assumes the leadership agrees and won't break confidentiality, which they might if the results could make them look bad. The results would be untrustworthy, and could even increase complacency.

      > 4) Interfere before any further damage is done

      That was done, was it not?

      > Besides, are you arguing that ends justify the means if the intent behind the research is valid?

      Linux users are lucky they got off this easy.


  • In every commercial pentest I have been part of, you have one or two, usually senior, employees on the blue team in the know. Their job is to stop employees from going too far on defense, as well as to stop the pentesters from going too far. The rest of the team stays in the dark to test their response and observation.

    In this case, in my opinion, a small set of maintainers and Linus as "management" would have to be in the know to, e.g., stop a merge of such a patch once it was accepted by someone in the dark.

  • There doesn't have to be a way.

    Kernel maintainers are volunteering their time and effort to make Linux better, not to be entertaining test subjects for the researchers.

    Even if there is no ethical violation, they are justified in being annoyed at having their time wasted, and in taking measures to discourage and prevent such malicious behaviour in the future.

    • > There doesn't have to be a way.

      Given the importance of the Linux kernel, there has to be a way to make contributions safer. Some people even compare it to the "water supply" and others bring in "national security".

      > they are justified to be annoyed at having their time wasted, and taking measures to discourage and prevent such malicious behaviour in the future.

      "Oh no, think of the effort we have to spend at defending a critical piece of software!"


  • If you can’t run an experiment without violating ethical standards, you simply don’t run it; you can’t use the experiment as an excuse to violate those standards.

    • Misplaced trust was broken, that's all. Linux users are incredibly lucky this was a research group and not an APT.