Comment by bezout
4 years ago
“Hey, we are going to submit some patches that contain vulnerabilities. All right?”
If they do so, the maintainers become more vigilant and the experiment fails. But the key to the experiment is that maintainers are not as vigilant as they should be. It's not an attack on the maintainers, though, but on the process.
In penetration testing you are doing the same thing, but you get the go-ahead from someone responsible for the project or organization, since they are interested in the results as well.
A red team without approval is just a group of criminals. Surely they could have found active projects with centralized leadership to ask for permission.
I don’t know much about penetration testing, so excuse me for the dumb question: are you required to disclose the exact methods that you’re going to use?
Yes. You have agreements about what is fair game and what is off limits. That can mean that nothing may be physically altered, which times of day or office locations are OK, whether it should only be a test against web services, or anything in between.
It depends on the organization. Most that I've worked with have said everything is fine except for social engineering, but some want to know every tool you'll be running, and every type of vulnerability you'll try to exploit.
What you do during pentesting is against the law if you have not discussed it with your client: you are trying to gain access to a computer system that you should have no access to. The only reason this is OK is that you have the client's prior permission to try these methods. Thus, it is important to discuss the methods you will use before executing a pentest.
In every pentesting engagement I've had, there were always rules of engagement specifying what kinds of things you are and are not allowed to do. They even depend on what kind of test you are doing (for example, if you're testing bank software, it matters a lot whether you test against the production environment or the testing environment).
Usually the discussion is around the end goals rather than the means, but both are fair game for discussion.
If the attack surface is large enough and the duration of the experiment long enough, vigilance will return to baseline soon enough, I think. It's a reasonable compromise. After all, if the maintainers are not already considering that they might be under attack, I'd argue that something is wrong with the system; a zero-day in the kernel would be invaluable indeed.
And well, if the maintainers become more vigilant in the long run, it's a win/win in my book.
The maintainers are the process, since they are the ones doing the reviewing, so it's absolutely attacking the maintainers.
"We're going to, as part of a study, submit various patches to the kernel and observe the mailing list and the behavior of people in response to these patches, in case a patch is to be reverted as part of the study, we immediately inform the maintainer."
Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.
>Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.
The Tuskegee Study wouldn't have happened if its participants had taken part voluntarily, and its effects still haunt the scientific community today. The attitude of "science by any means, including by harming other people" is reprehensible and has lasting consequences for the entire scientific community.
However, unlike the Tuskegee Study, it would have been totally possible to do this ethically: contact the leadership of the Linux project, have them announce to maintainers that anonymous researchers may experiment with the contribution process, allow maintainers to opt out if they do not consent, and ensure that harmful commits from these researchers never reach stable.
The researchers chose instead to lie to the Linux project and introduce vulnerabilities into stable trees, and this is why their research is particularly deplorable: their ethical transgressions, and possibly the lies made to their IRB, were not done out of any necessity for empirical integrity, but seemingly out of convenience or recklessness.
And now the next group of researchers will have a harder time as they may be banned and every maintainer now more closely monitors academics investigating open source security :)
But that wouldn't let maintainers know exactly what is happening; it only informs them that someone will be submitting some patches, some of which might not be merged. It doesn't push people toward vigilance about any specific detail of a patch and doesn't alert them that something specific is coming. If you account for that in your experiment's priors, that is entirely fine.