Comment by rzwitserloot
4 years ago
The professor gets exactly what they want here, no?
"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".
That'll be a fun paper to write, no doubt.
Additional context:
* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was made in good faith [1].
Greg KH immediately calls bullshit on this, then proceeds to ban the entire university from making commits [2].
The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, the lie is simply a continuation of the experiment: it is exactly what someone intentionally trying to add malicious code to the kernel would do).
* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]
[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2
[2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
[3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...
Thanks for the support.
I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
Just wanted to say thanks for your work!
As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".
Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.
So thanks. Keep it up.
How could resilience be verified after asking for consent?
A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?
From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...
> A lot of these have already reached the stable trees.
Apologies in advance if my questions are off the mark, but what does this mean in practice?
1. If UMN hadn't brought any attention to these, would they have been caught, or would they eventually have wound up in distros? Is "stable" the "production" branch?
2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?
3. Will there be a post-mortem for this attack/attempted attack?
I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.
Specifically, I think the three malicious patches described in the paper are:
- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability, because pci_disable_device() doesn't appear to free the struct pci_dev (see the sketch after this list).
- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.
- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
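To make the dispute over case 1 concrete, here is a rough sketch of the error-path pattern in question (hypothetical code, not the actual cavium/nitrox driver). The claim at issue: pci_disable_device() disables the device but does not free the struct pci_dev, so touching pdev afterwards is not a use-after-free:

    #include <linux/pci.h>

    /* Hypothetical sketch, not the actual nitrox code. */
    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int err;

        err = pci_enable_device(pdev);
        if (err)
            return err;

        err = pci_request_mem_regions(pdev, "example");
        if (err) {
            pci_disable_device(pdev);
            /* pci_disable_device() does not drop the last reference to
             * pdev, so pdev has not been freed at this point. For the
             * dev_err() below to be a UAF, something would have to free
             * pdev first. */
            dev_err(&pdev->dev, "Failed to request mem regions!\n");
            return err;
        }
        return 0;
    }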
This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.
edit: wording tweak for clarity
I wonder about this too.
To me, this seems to indicate that a nation-state-supported hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploitable bug. And then everyone thinks that obviously it was just an ordinary bug.
Maybe they can pose as 10 different people, in case some of them get banned.
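For a sense of how small such an evil bug can be, recall the 2003 attempt to backdoor the kernel through its compromised CVS mirror, which hinged on a single '=' where '==' belonged. Here's a standalone sketch in that style (illustrative; not the actual 2003 code, and the names are made up):

    #include <stdio.h>

    #define MY_WCLONE 0x80000000u
    #define MY_WALL   0x40000000u

    static struct { unsigned uid; } current_task = { .uid = 1000 };

    /* Looks like it rejects an invalid flag combination; actually it
     * assigns uid = 0 (root). The assignment evaluates to 0, so the
     * "error" branch never fires and nothing looks amiss in testing. */
    static int wait_check(unsigned options)
    {
        int retval = 0;
        if ((options == (MY_WCLONE | MY_WALL)) && (current_task.uid = 0))
            retval = -22; /* -EINVAL */
        return retval;
    }

    int main(void)
    {
        wait_check(MY_WCLONE | MY_WALL);
        printf("uid after call: %u\n", current_task.uid); /* prints 0 */
        return 0;
    }

In a multi-thousand-line diff, one character like that is brutally hard to spot.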
I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.
How to solve this "issue" without putting too much process around it? That's the challenge.
What would be the security implications of these things:
* a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.
* a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.
* another teenager pointing a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.
* an organic chemist is demonstrating that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.
* a secret service subverting software of a big industrial automation company in order to destroy uranium enrichment plants in another country.
* somebody hacking a car's control software in order to kill its driver
What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware how vulnerable all modern infrastructure is?
And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?
The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency, which is good and known to enhance security. But their project is neither a lab for experimenting on humans nor a computer war game. I guess if companies want to have even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.
I agree with the sentiment. For a project of this magnitude, maybe it comes down to developing some kind of static analysis, along with refactoring the code to make that analysis possible.
That would target the attack surface described in the paper (section IV), since the acceptance process (section III) is a manpower issue.
As an alumnus of the University of Minnesota's program, I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alumnus, and I am deeply sorry for the harm this caused.
I am wondering if UMN will now get a bad name in open source, with any contribution from their email addresses requiring extra care.
And if this escalates to mainstream media, it might also damage future employment prospects for UMN CS students.
Edit: Looks like they made a statement. https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...
Based on my time in a university department you might want to cc whoever chairs the IRB or at least oversees its decisions for the CS department. Seems like multiple incentives and controls failed here, good on you for applying the leverage available to you.
>It reflects poorly on all graduates of the program
How does it?
I hope they take this bad publicity and stop (rather than escalating the stupidity by using non-university emails).
What a joke - not sure how they can rationalize this as valuable behavior.
It was a real world penetration test that showed some serious security holes in the code analysis/review process. Penetration tests are always only as valuable as your response to them. If they chose to do nothing about their code review/analysis process, with these vulnerabilities that made it in (intentional or not), then yes, the exercise probably wasn't valuable.
Personally, I think all contributors should be considered "bad actors" in open source software: the NSA, some university mail address, etc. I consider myself a bad actor whenever I write code with security in mind. This is why I use fuzzing and code analysis tools.
Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.
I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and that organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.
I'd disagree. Organizations are collections of actors, some of which may have malicious intents. As long as the organization itself does not condone this type of behavior, has mechanisms in place to prevent such behavior, and has actual consequences for malicious actors, then the blame should be placed on the individual, not the organization.
In the case of research, universities are required to have an ethics board that reviews research proposals before actual research is conducted. Conducting research without an approval or misrepresenting the research project to the ethics board are pretty serious offenses.
Typically, research that involves people requires a consent form signed by participants, alongside a reminder that they can withdraw that consent at any time without penalty. It's pretty interesting that in this case there seemed to be no real consent required, and it would be interesting to know whether there was an oversight by the ethics board or a misrepresentation of the research by the researchers.
It will be interesting to see whether the university applies a penalty to the professor (removal of tenure, termination, suspension, etc.) or not. The latter would imply that they're okay with unethical or misrepresented research being associated with their university, which would be pretty surprising.
In any case, it's a good thing that the Linux kernel maintainers decided that experimenting on them is unacceptable and disrespectful of their contributions. Subjecting participants to experiments without their consent is a severe breach of ethical duty, and I hope that the university will apply the correct sanctions to the researchers and instigators.
Looks like the authors have Chinese names [1]. Should they ban anyone with Chinese names, too, for good measure? Or maybe collective punishment is not such a good idea?
[1] https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
I have to ask: were they not properly reviewed when they were first merged?
Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.
Thanks for your important work, Greg!
I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that have been modified (with a fuzzer that is memory/binary aware).
Would a project like this be unfeasible due to the sheer amount of commits/day?
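As a concrete (if simplified) illustration of what fuzzing just the modified parts could look like, here is a minimal libFuzzer harness around a hypothetical function touched by a patch. parse_record() is a made-up stand-in, not a real kernel symbol; in practice the kernel itself is usually fuzzed with syzkaller, which generates syscall sequences rather than byte buffers:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical function under test, standing in for patched code. */
    extern int parse_record(const uint8_t *data, size_t len);

    /* libFuzzer entry point; build with:
     *   clang -g -fsanitize=fuzzer,address harness.c parse_record.c
     * AddressSanitizer then flags a use-after-free or double-free the
     * moment an input triggers one, which is exactly the bug class
     * at issue here. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_record(data, size);
        return 0;
    }

The catch is reachability: a fuzzer only finds what its inputs can trigger, and the UAF conditions in the paper were deliberately designed to require unusual preconditions, so this would be a mitigation rather than a guarantee.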
Thank you for all your excellent work!
> should be aware that future submissions from anyone with a umn.edu address should be by default-rejected
Are you not concerned these malicious "researchers" will simply start using throwaway gmail addresses?
That’s not likely to work after a high profile incident like this, in the short term or the long term. Publication is, by design, a de-anonymizing process.
Are throwaway gmail addresses nearly as 'trusted'?
Putting the ethical question of the researcher aside, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
Since this researcher is apparently not an established figure in the kernel community, my expectation is that these patches went through the most rigorous review process. If you think the risk is high that malicious patches from this person got in, it means that an unknown attacker deliberately concocting a complex kernel loophole would have an even higher chance of getting patches in.
While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
> This "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.
If you had a party in your house, and some guest you didn't know, whom you let in assuming good faith, turned out to have deliberately pooped on the rug in your spare guest room while nobody was looking... next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never leave this guest alone? Or just simply deny entrance, so that you can focus on having fun with people you trust and newcomers who have shown no malicious intent?
I know what I'd do. Life is too short for BS.
> The fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone is claiming that it is perfect. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and put a higher bar on reviewing it for security.
Not all kernel reviewers are being paid by their employer to review patches. Kernel reviews are "free" to the contributor because everyone operates on the assumption that every contributor wants to make Linux better by contributing high-quality patches. In this case, multiple people from the University have decided that reviewers' time isn't valuable (so it's acceptable to waste it) and that the quality of the Kernel isn't important (so it's acceptable to make it worse on purpose). A ban is a completely appropriate response to this, and reverting until you can review all the commits is an appropriate safety measure.
Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.
I guess what I am trying to get at is that this researcher's actions do have some merit. This event raises awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step toward hardening the kernel review process to prevent this kind of harm from happening again.
What I strongly disapprove of is that the researcher apparently took no steps to prevent real-world consequences of malicious patches getting into the kernel. I think the researcher should:
- Notify the kernel community promptly once malicious patches get past all review processes.
- Time these actions well, such that malicious patches don't get into a stable branch before they can be reverted.
----------------
Edit: reading the paper provided above, it seems that they did do both actions above. From the paper:
> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.
So, unless the kernel maintenance team has another side of the story, the question of ethics can only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.
> the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.
The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.
By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.
Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted.
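Spelling that out with a toy model (my numbers, purely illustrative): let p be the prior probability that a submitter is malicious, and d the probability that review detects a deliberately obfuscated bug. The residual risk of approving a bad patch is roughly p(1 - d), so holding the acceptable risk epsilon fixed gives

    \[ p\,(1 - d) \le \epsilon \quad\Longrightarrow\quad d \ge 1 - \frac{\epsilon}{p} \]

As provenance pushes the prior p up, the required detection rate d rises with it: if adverse provenance multiplies p by 10, a review that previously needed to catch 90% of obfuscated bugs now needs to catch 99%.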
> it means that an unknown attacker deliberately concocting a complex kernel loophole would have an even higher chance of getting patches in.
That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.
In a perfect world, I would agree that the work of a researcher who's not an established figure in the kernel community would be met with a relatively high level of scrutiny in review.
But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.
And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a reasonable simplification to assume that all associated contributions may be tainted.
Why? Linux is not the state. There is no entitlement to rights or presumption of innocence.
Linux is set up to benefit the Linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones, and some negative ones, banning seems the right call.
It's not about fairness; it's about whether the harms outweigh the benefits.
> Since this researcher is apparently not an established figure in the kernel community, my expectation is the patches have gone through the most rigorous review process
I think the best way to make this expectation reality is putting in the work. The second best way is paying. Doing neither and holding the expectation is a way to exist certainly, but has no impact on the outcome.
> seems to suggest a lack of confidence in the kernel review process
The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.
I mean, it's the Linux kernel. Think about what it's powering and how much risk is involved with these patches. Review processes obviously aren't perfect, but usually patches aren't constructed to sneak sketchy code through. You'd usually approach a review in good faith.
Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.
>I've never performed any meaningful debugging or postmortem ever in my life and might not even know how to program at all.
Just wanted you to know that I think you're an amazing programmer
This might not be on purpose. If you look at their article, they're studying how to introduce bugs that are hard to detect, not ones that are easy to detect.
> Thanks for the support.
THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!
My deepest thanks for all your work, as well as for keeping the standards high and the integrity of the project intact!
I would be interested to know how many committers actually work for private or state intelligence agencies.
you know what they say, curiosity killed the cat
Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.
Just reverting those patches (which may well be correct) makes no sense; you and/or other maintainers need to properly review them after your previous abject failure to do so, properly determine whether they are correct or not, and if they aren't, work out how they got merged anyway and how you will stop this from happening again.
Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.
On the contrary, it would be the easy, lazy way out for a maintainer to say “well this incident was a shame now let’s forget about it.” The extra work the kernel devs are putting in here should be commended.
In general, it is the wrong attitude to say, oh we had a security problem. What a fiasco! Everyone involved should be fired! With a culture like that, all you guarantee is that people cover up the security issues that inevitably occur.
Perhaps this incident actually does indicate that kernel code review procedures should be changed in some way. I don’t know, I’m not a kernel expert. But the right way to do that is with a calm postmortem after appropriate immediate actions are taken. Rolling back changes made by malicious actors is a very reasonable immediate action to take. After emotions have cooled, then it’s the right time to figure out if any processes should be changed in the future. And kernel devs putting in extra work to handle security incidents should be appreciated, not criticized for their imperfection.
Greg explicitly stated "Because of this, all submissions from this group must be reverted from the kernel tree and will need to be re-reviewed again to determine if they actually are a valid fix....I will be working with some other kernel developers to determine if any of these reverts were actually valid changes, were actually valid, and if so, will resubmit them properly later. For now, it's better to be safe."
If the IRB is any good, the professor doesn't get that. Universities are publish or perish, and the IRB should force the withdrawal of all papers they submitted. This might be enough to fire the professor with cause - including removing any tenure protection they might have - which means they get a bad reference.
I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time. (there is the possibility that these are good faith patches and someone in the linux community just hates this person - seems unlikely but until a proper independent investigation is done I'll leave that open.)
See page 9 of the already published paper:
https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.
I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:
The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
> We send the emails to the Linux community and seek their feedback.
That's not really what they did.
They sent the patches, and the patches were either merged or rejected.
And they never let anybody know that they had introduced security vulnerabilities into the kernel on purpose until they got caught, people started reverting all the patches from their university, and the whole university was banned.
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.
How is this not human research? They experimented on the reactions of people in a non-controlled environment.
This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job is to protect the university from legal risk, not to identify ethically dubious studies).
It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.
Communities aren’t people? What in the actual fuck is going on with this university’s IRB?!
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.
Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?
This is, at the very least, worth an investigation from an ethics committee.
First of all, this is completely irresponsible: what if the patches had made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.
Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.
The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.
> there are many other ethical and legal requirements to academic research besides Title 45.
Right. It's not just human subjects research. IRBs vet all kinds of research: polling, surveys, animal subjects research, genetics/embryo research (potentially even if not human/mammal), anything which could be remotely interpreted as ethically marginal.
I agree. I personally don't care if it meets the official definition of human subject research. It was unethical, regardless of whether it met the definition or not. I think the ban is appropriate, and I wouldn't lose any sleep if the ban were also enacted by other open-source projects and communities.
It's a real shame because the university probably has good, experienced people who could contribute to various OSS projects. But how can you trust any of them when the next guy might also be running an IRB exempt security study.
>It's one missed email or one bad timezone mismatch away from releasing the kraken.
I don't think code commits to the Linux kernel make it to live systems that fast?
I do agree with the sentiment, though. It's grossly irresponsible to do that without asking at least someone in the kernel developer's group. People don't dig being used as lab rats, and now the whole uni is blocked. Well, tough shit.
> I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time
That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.
Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.
Personally, I think their data points should include "...and we had to explain ourselves to the FBI."
What about IEEE and the peer reviewers who didn't object to their publications?
I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!
I'm amazed this passed IRB. Consider the analogy:
We presented students with an educational protocol designed to make a blinded subset of them fail tests. We then measured whether they failed the test to see if they independently learned the true meaning of the material.
Under any sane IRB you would need the consent of the students. This is a failure on so many levels.
(edit to fix typo)
I'm really not sure what the motive to lie is. You got caught with your hand in the cookie jar, time to explain what happened before they continue to treat you like a common criminal. Doing a pentest and refusing to state it was a pentest is mind boggling.
Has anyone from the "research" team commented and confirmed this was even them or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.
It's also not a pen test. Pen testing is explicitly authorized, where you play the role as an attacker, with consent from your victim, in order to report security issues to your victim. This is just straight-up malicious behavior, where the "researchers" play the role as an attacker, without consent from their victim, for personal gain (in this case, publishing a paper).
Because of the nature of the research, an argument can be made that it was like a bug bounty (not defending them, just putting forward my argument), but they should have come clean when the patch was merged and told the community about the research, or at least submitted the right patch.
Intentionally having bugs in kernel only you know about is very bad.
Hearing how you phrased it reminds me of a study that showed how parachutes do not in fact save lives (the study was more to show the consequences of extrapolating data, so the result should not be taken seriously):
https://www.bmj.com/content/363/bmj.k5094
The original referenced paper is also very good: http://elucidation.free.fr/parachuteBMJ.pdf (can't find a better formatted link, sorry)
Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right
I liked this bit, from the footnotes: "Contributors: RWY had the original idea but was reluctant to say it out loud for years. In a moment of weakness, he shared it with MWY and BKN, both of whom immediately recognized this as the best idea RWY will ever have."
This is now my second favourite paper, after the Atlantic salmon in fMRI.
I still prefer the legal article examining the Fourth Amendment as it pertains to Jay-Z's 99 Problems.
http://pdf.textfiles.com/academics/lj56-2_mason_article.pdf
My favorite is "Possible Girls":
https://philpapers.org/archive/sinpg
I'm a big fan of Doug Zongker's excellent paper on chicken:
https://isotropic.org/papers/chicken.pdf
My gateway pub to this type of research was the Stork paper: https://pubmed.ncbi.nlm.nih.gov/14738551/
link?
Well part of the experiment is to see how deliberate malicious commits are handled. Banning is the result. They got what they wanted. Play stupid game. Win stupid pri[z]e.
Isn't trying to break security of the "entire internet" some kind of crime (despite whatever the excuse is)?
People got swatted for less.
Interestingly enough, this is more a case of being a dick. That is not illegal. If an AG does not levy a charge, no crime has been committed.
This does not at all mean the behavior in question should be condoned. This fails the sniff test worse than thioacetone.
Well said.
Nit: The expression is "Play stupid games, win stupid prizes."
As heard frequently on ASP, along with "Room Temperature Challenge."
https://twitter.com/UMNComputerSci/status/138496371833373082...
"The University of Minnesota Department of Computer Science & Engineering takes this situation extremely seriously. We have immediately suspended this line of research."
But this raises an obvious question: Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities? If we have learned anything from the SolarWinds hack, it is that if there is a way to introduce a vulnerability then someone will do it, sooner or later. And they won't publish a paper about it, so that shouldn't be the only way to detect it!
So, it turns out that sometimes programmers introduce bugs into software. Sometimes intentionally, but much more commonly accidentally.
If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.
That question has been obvious for quite some time. It is always possible to introduce subtle vulnerabilities. Research has tried for decades to come up with a solution, to no real avail.
Assassinating the researchers doesn't help.
> Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities?
Yes, it does.
Now, how do you do that other than having fallible people review things?
The problem with such experiment is that it can be a front. If you are a big entity, gov, or whatever, and you need to insert a vulnerability in the kernel, you can start a "research project". Then you try to inject it with this pretense, and if it fails, you can always say "my bad, it was for science".
I had a uni teacher who thought she was a genius because her research team peppered Wikipedia with fake information while timing how long it took to be removed.
"Earth is the center of the universe" took 1000 years to remove from books; I'm not sure what her point was :D
Joke's on you - this was really sociology research on anger response levels of open source communities when confronted with things that look like bad faith.
WaitASecond...are you saying that this was an experiment to find out how the maintainers would react to being experimented on? ;)
Setting aside the ethical aspects which others have covered pretty thoroughly, they may have violated 18 U.S.C. §1030(a)(5) or (b). This law is infamously broad and intent is easier to "prove" than most people think, but #notalawyer #notlegaladvice. Please don't misinterpret this as a suggestion that they should or should not be prosecuted.
So, the patch was about a possible double-free, presumably detected by a bad static analyzer. Couldn't this patch have been made in good faith? That's not at all impossible (see the sketch below).
However, the prior activity of submitting bad-faith code is indeed pretty shameful.
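For context on the bug class, here is a minimal hypothetical example (not taken from the actual patch) contrasting a genuine double-free with the kind of pattern a weak static analyzer can misreport:

    #include <stdlib.h>

    /* Genuine double-free: buf is released twice. */
    void real_double_free(void)
    {
        char *buf = malloc(64);
        free(buf);
        free(buf); /* bug */
    }

    /* Not a double-free: each path frees buf exactly once, but a naive
     * analyzer that loses track of the early return may flag the second
     * free() anyway, and a patch "fixing" such a report can be bogus. */
    void analyzer_false_positive(int err)
    {
        char *buf = malloc(64);
        if (err) {
            free(buf);
            return;
        }
        free(buf);
    }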
I'm not a Linux kernel maintainer, but it seems like the maintainers all agree it's extremely unlikely a static analyzer could be so wrong in so many different ways.
Interestingly, the Sokal Squared guy got banned from future research for "unauthorized human experimentation".
It's a different university, but I wonder if these people will see the same result.
I think this hasn't gone far enough. The university has shown that it is willing to allow its members to act in bad faith for their own interests, under the auspices of acting ethically for scientific reasons. The university itself cannot be trusted _ever again_.
Black list the whole lot from everything, everywhere. Black hole that place and nuke it from orbit.
Perhaps the Linux kernel team should actively support a Red Team to do this with a notification when it would be merged into the stable branch.
What would be the point? Of course people can miss things in code review. Yet the Linux developer base and user base has decided that generally an open submission policy has benefits that outweigh the risks.
Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?
Let's say that no one has ever seen someone speeding or drinking in the park. But then someone announces that they just did it, got away with it, and that the system isn't effective at catching folks who violate the policies. It might make sense to figure out how you could change the way the system works to stop people from violating the policy. One way to do that is to replicate the violation and see what measures could be introduced to decrease the likelihood. I would say it is very much akin to the companies that test whether your employees can be phished, or the pen testers who see if you can be hacked. Other important things that people want to protect have these teams to make them a harder target, and I think in the case of something as important as the Linux kernel it might pay dividends.
Not that I approve of the methods, but why would an IRB be involved in a computer security study? IRBs are for human subjects research. If we have to run everything that looks like any kind of research through IRBs, the Western gambit on technical advantage is going to run into some very hard times.
The subjects were the kernel team. They should have had consent to be part of this study. It's like red team testing, someone somewhere has to know about it and consent to it.
How IEEE accepted this paper is a mystery; judging from Twitter feeds, it seems at least one complaint was filed with IEEE, and the paper was still accepted.
It wasn’t a real experiment, it was a legitimate attempt to insert bugs into the code base and this professor was going to go on speaking tours to self promote and talk about how easy it was to crack Linux. If it looks like grift it’s probably grift. This was never about science.
> The professor gets exactly what they want here, no?
I don't think they're a professor are they? Says they're a PhD student?
Yet another reason to absolutely despise the culture within academia. The US Federal government is subsidizing a collection of pathologically toxic institutions, and this is one of many results, along with HR departments increasingly mimicking the campus tribalism.
That's quite a leap of logic you have going on there. How is the US Federal government at fault for this?
Who do you think subsidizes and guarantees student loan debt that allowed academic institutions to raise their prices at 4 times the rate of inflation?
To be clear, the quoted text in your post is presumably your own words, not a quote?
> The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
What's preventing those bad actors from not using a UMN email address?
Nothing. However, if they can't claim ownership of the drama they have caused, it's not useful as publishable research, so it does stop these idiots from causing further drama while working at this institution. For now.
They don't need to claim ownership of the drama to write the paper, in fact, my first thought was that they would specifically try to avoid taking ownership and instead write a paper "discovering" the vulnerability(ies).
> What's preventing those bad actors from not using a UMN email address?
Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.
Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge knowing that you've already been called out on it would be monumentally bad.
I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.
If they had submitted them from personal or anonymous email addresses, the patches might have come under more scrutiny.
They gained some trust by coming from university email addresses.
Exactly. Users contributing from the University addresses were borrowing against the reputation of the institution. That reputation is now destroyed and each individual contributor must build their own reputation to earn trust.
Nothing. I think the idea is 60% deterrence via collective punishment - "if we punish the whole university, people will be less likely to do this in future" - and 40% "we must do something, and this is something, therefore we must do it".
see https://lore.kernel.org/linux-nfs/YIAmy0zgrQW%2F44Hz@kroah.c...
If they just want to be jerks, yes. But they can't then use that type of "hiding" to get away with claiming it was done for a university research project, as that's even more unethical than what they are doing now.
Were all of the commits from UMN emails GPG signed with countersigned/trusted keys?
How would you catch those?
Literally nothing. Instead of actual actions to improve the process, it's only feel-good actions without any actual benefit to the kernel's security.
The point is to make it very obviously not worth it to conduct this kind of unethical research. I don't think UMN is going to be eager to have this kind of attention again. People could always submit bogus patches from random email addresses - this removes the ability to do it under the auspices of a university.
I think you're getting heavily downvoted with your comments on this submission because you seem to be missing a critical sociological dimension of assumed trust. If you submit a patch from a real name email, you get an extra dimension of human trust and likewise an extra dimension of human repercussions if your actions are deemed to be malicious.
You're criticizing the process, but the truth is that without a real name email and an actual human being's "social credit" to be burned, there's no proof these researchers would have achieved the same findings. The more interesting question to me is if they had used anonymous emails, would they have achieved the same results? If so, there might be some substance to your contrarian views that the process itself is flawed. But as it stands, I'm not sure that's the case.
Why? Well, look at what happened. The maintainers found out and blanket banned bad actors. Going to be a little hard to reproduce that research now, isn't it? Arbitraging societal trust for research doesn't just bring ethical challenges but /practical/ ones involving US law and standards for academic research.
You keep posting all over this discussion about how the Linux maintainers are making a poor choice and shooting the messenger.
What would you like them to do instead or in addition to this?
Well, it seems unlikely that any other universities will fund or support copycat studies. And I don't mean that in the top-down institutional sense; I mean it in the self-selecting sense. Students will not see messing with the Linux kernel as a viable research opportunity and will not do it. That doesn't seem like "feel-good without any actual benefit to the kernel's security". Sounds like it could function as an effective deterrent.
Isn't this reaction a bit like the emperor banishing anyone who tells him that his new clothes are fake? Are the maintainers upset that someone showed how easy it is to subvert kernel security?
It’s more like the emperor banning a group of people who put the citizens in danger just so they could show that it could be done. The researchers did something unethical and acted in a self-serving manner. It’s no surprise that someone would get kicked out of a community after seriously breaking the trust of that community.
More like the emperor banishing anyone who tries to sell him fake clothes to prove that the emperor will buy fake clothes.
The middle ground would be if the Emperor jailed the tailors of the New Clothes after he had shown off the clothes at the Parade, in front of the whole city.
Yeah, maybe it's fragile security. Fortunately, the problem has been found, and the 'attackers' aren't a real enemy.