Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
The whole idea of the mailing list based submission process is that it allows others on the list to review your patch sets and point out obvious problems with your changes and discuss them, before the maintainer picks the patches up from the list (if they don't see any problem either).
As I pointed out elsewhere, there are already test farms and static analysis tools in place. On some MLs you might occasionally see auto-generated mails saying that your patch set does not compile under configuration such-and-such, or that the static analysis bot found an issue. This is already a thing.
What happened here is basically a con run on the patch review process. IRL, con men can scam their marks because most people assume, when they leave the house, that the majority of others outside aren't out to get them. Except when they run into the one person for whom the assumption doesn't hold, and end up parted from their money.
For the paper, bypassing the review step worked in some instances of the many patches they submitted because a) humans aren't perfect, and b) reviewers have the mindset that, most of the time, people submitting bug fixes do so in good faith.
Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default, jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?
Yes, I know that this review process isn't perfect, that there are problems and I'm not trying to dismiss any concerns.
But what technical measure would you propose that can effectively stop con men?
> Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
Yes, especially for critical projects?
> Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?
I don’t jump to the conclusion that the random contributor is evil. I do however think about the potential impact of the submitted patch, security or not, and I do assume a random contributor can sneak in subtle bugs, usually not intentionally, but simply due to a lack of understanding.
> > Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
> > Yes, especially for critical projects?
People don't act the way I described intentionally, or because they are dumb.
Even if you go in with the greatest paranoia and the best of intentions, most of the time, most of the other people don't act maliciously and your paranoia eventually returns to a reasonable level (i.e. assuming that most people might not be malicious, but also not infallible).
It's a kind of fatigue. It's simply human. No matter how often you say "DUH of course they should".
In my entire life, I have only met a single guy who managed to keep that "everybody else is potentially evil" attitude up over time. IIRC he was eventually prescribed something with Lithium salts in it.
Force the university to take responsibility for screening their researchers. I.e., a blanket-ban, scorched-earth approach that punishes the entire university's reputation is a good start.
People want to claim these are lone rogue researchers and that good people at the university shouldn't be punished, but this is the only way you can rein in these kinds of rogue individuals: by putting the collective reputation of the whole university on the line, so it polices its own people. Every action of an individual researcher must be assumed to put the reputation of the university as a whole on the line. This is the cost of letting individuals operate within the sphere of the university.
Harsh, "overreaction" punishment is the only solution.
The only real fix for this is to improve tooling and/or programming language design to make these kinds of exploits more difficult to slip past maintainers. Lots of folks are working in that space (see recent discussion around Rust), but it's only becoming a priority now that we're seeing the impact of decades of zero consideration for security. It'll take a while to steer this ship in the right direction, and in the meantime the world continues to turn.
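As a sketch of what language design buys here: in Rust, ownership makes use-after-free (one of the classes of subtle memory bug at issue in kernel patch review) a compile-time error rather than something a reviewer has to spot by eye. The `Buffer`/`release` names below are invented for illustration, not taken from any real project.

```rust
// Sketch: Rust's ownership rules reject the use-after-free pattern
// at compile time. `Buffer` stands in for a kernel-style resource.

struct Buffer {
    data: Vec<u8>,
}

fn release(buf: Buffer) {
    // Taking `buf` by value moves ownership here; the buffer is
    // freed when this function returns, like kfree() in the kernel.
    drop(buf);
}

fn main() {
    let buf = Buffer { data: vec![0u8; 16] };
    let len = buf.data.len(); // fine: `buf` is still owned here
    release(buf);
    // println!("{}", buf.data.len()); // compile error: use of moved value
    println!("released a {}-byte buffer", len);
}
```

A reviewer never has to catch this bug, because the patch containing it would not build in the first place.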
> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
I’m not a maintainer but naively I would have thought that the answer to this is “Yes”.
I didn’t mean any disrespect. I didn’t write “I can’t believe they haven’t implemented a perfect technical process that fully prevents these attacks”.
I just asked if there are any ideas being discussed.
Two things can be true at the same time: 1. What the “researchers” did was unethical. 2. They uncovered security flaws.
> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?
Do the game theory. If you do assume that, you'll always be wrong. But if you don't assume it, you won't always be right.
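The trade-off can be made concrete with a toy expected-cost model. Every number below (malicious-patch rate, review hours, catch rates, cleanup cost) is invented for illustration; the point is only that when the base rate of malice is low, suspecting every patch costs far more reviewer time than it saves.

```rust
// Toy model: expected cost per patch under two review policies.
// All constants are illustrative assumptions, not measurements.

fn expected_cost(p_malicious: f64, deep_review: bool) -> f64 {
    let review_cost = if deep_review { 10.0 } else { 1.0 }; // hours per patch
    let catch_rate = if deep_review { 0.99 } else { 0.7 }; // assumed
    let breach_cost = 1000.0; // hours to clean up a landed malicious patch
    review_cost + p_malicious * (1.0 - catch_rate) * breach_cost
}

fn main() {
    // Assume, say, 1 malicious patch in 10,000.
    let p = 1.0 / 10_000.0;
    println!("deep review:  {:.3} h/patch", expected_cost(p, true));
    println!("light review: {:.3} h/patch", expected_cost(p, false));
}
```

Under these made-up numbers, paranoid review of every patch dominates the cost while preventing almost nothing extra, which is exactly the fatigue dynamic described upthread.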
The University and researchers involved are now default-banned from submitting.
So yes.