Comment by ENOTTY
4 years ago
Later down thread from Greg K-H:
> Because of this, I will now have to ban all future contributions from your University.
Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.
EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.
In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Lu[3] or from Lu himself, dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Lu[4].
Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of those people appear to be associated with Lu in any significant way.
[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22
[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...
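For anyone who wants to reproduce that search locally, a minimal sketch (assuming a local clone of torvalds/linux at ./linux and git on PATH; the clone path and output format are just illustrative):

    import subprocess
    from collections import Counter

    # List commits in a local linux clone whose author email is at umn.edu,
    # then tally them by author. The "linux" path is an assumption.
    log = subprocess.run(
        ["git", "-C", "linux", "log", "--all", r"--author=@umn\.edu",
         "--format=%h|%ad|%an <%ae>|%s", "--date=short"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    by_author = Counter(line.split("|")[2] for line in log)
    for author, count in by_author.most_common():
        print(f"{count:4d}  {author}")

The tallies may not line up exactly with the GitHub searches above, which match file contents and commit messages rather than author fields alone.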
> I think all patches have come from people currently being advised by Kangjie Lu[3] or from Lu himself, dating back to Dec 2018
New plan: Show up at Lu's house with a lock-picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo: "hey, just testing, bro! Legitimate security research!"
If they wanted to do security research, they could have done so by asking the reviewers to help: send them a patch and ask "Is this something you would accept?", instead of intentionally sending malicious commits and causing static on the commit tree and mailing lists.
Even better
Notify someone up the chain that you want to submit malicious patches, and ask them if they want to collaborate.
If your patches make it through, treat it as though the maintainers just got red-teamed: everyone who reviewed the patch and let it slip gets to have a nervous laugh, the commit gets rejected, and everyone learns something.
Wouldn't that draw more attention to the research patches, compared to a "normal" lkml patch? If you (as a maintainer) expected the patch to be malicious, wouldn't you be extra careful in reviewing it?
Did they keep track of the bad additions and submit a list to revert once they managed to get them added?
From the looks of it, they didn't, even when the patches were heading out to stable releases.
That's just using the project as a test bed, with no interest in avoiding the issues they cause.
This is funny, but not at all a good analogy. There's obviously nowhere near as much public interest or value in testing the security of this professor's private home, so that wouldn't justify invading his privacy. On the other hand, if he kept dangerous things at home (say, BSL-4 material), then his house would need 24/7 security and you'd probably be able to justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.
> This is funny, but not at all a good analogy
Yeah, for one thing, to be a good analogy, rather than lockpicking without entering when he’s not home and leaving a note, you’d need to be an actual service worker for a trusted home service business and use that trust to enter when he is home, conduct sabotage, and not say anything until the sabotage is detected and traced back to you and cited in his cancelling the contract with the firm for which you work, and then cite the “research” rationale.
Of course, if you did that you would be both unemployed and facing criminal charges in short order.
Everyone has been saying "This affects software that runs on billions of machines and could cause untold amounts of damage and even loss of human life! What were the researchers thinking?!" and I guess a follow-up thought, which is that "Maintainers for software that runs on billions of machines, where bugs could cause untold amounts of damage and even loss of human life didn't have a robust enough system to prevent this?" never occurs to anyone. I don't understand why.
It wasn't intended to be serious. But on the other hand, he has now quite openly and publicly declared himself to be part of a group of people who mess around with security-related things as a "test".
He shouldn't be surprised if that has some unexpected consequences for his own personal security, like some unknown third parties porting away his phone number(s) as a social-engineering test, pen testing his office, or similar.
There's also not nearly as much harm as there is in wasting maintainer time and risking getting faulty patches merged.
Put a flaming bag of shit on the doorstep, ring the doorbell, and write a paper about the methods Lu uses to extinguish it?
I wouldn't be surprised if the good, conscientious members of the UMN community showed up at his office (or home) door to explain, in vivid detail, the consequences of doing unethical research.
The actual equivalent would be to steal his computer, wait a couple days to see his reaction, get a paper published, then offer to return the computer.
> Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.
That's the university's problem to fix.
If this experience doesn't change the behavior not only of U of M's IRB but of every other IRB as well, then nothing at all has been learned from it.
Unless both the professors and the IRB leadership are getting an uncomfortable lecture in the chancellor's office, nothing at all will change.
What's the recourse for them though? Just beg to have the decision reversed?
The main thing you want here is a demonstration that they realize they fucked up, realize the magnitude of the fuckup, and have done something reasonable to lower the risk of it happening again, hopefully to very low.
Given that the professor appears to be a frequent flyer with this, the kernel folks banning him and the university prohibiting him from using Uni resources for anything kernel related seems reasonable and gets the point across.
Expel the students and fire the professor. That will demonstrate their commitment to high ethical standards.
The comment about the IRB -- institutional review board -- is clear, I think.
Probably that, combined with "we informed the professor of {serious consequences} should this happen again".
Well, yes? Seems like recourse in their case would be to make a convincing plea, or a plan to rectify the problem, that satisfies the decision makers in the Linux project.
This is not responsible research. This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.
There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.
No, it's totally okay to feel sorry for good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith. It's sad that the actions of irresponsible researchers and associated review boards affect people who had nothing to do with professor Lu's research.
It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.
> good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith
All you have to do is look at the reverted patches to see that these are either mythical or at least few and far between.
> This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.
This analogy is invalid, because:
1. The experiment is not on live, deployed, versions of the kernel.
2. There are mechanisms in place for preventing actual merging of the faulty patches.
3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
All of the above is not true for the in-flight airline.
However, I'm not claiming the experiment was ethically sound. Certainly, the U Minnesota IRB needs to issue a report and an explanation of its involvement in this matter.
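On point 3, the mechanical part of backing out a merged commit really is a one-liner; a minimal sketch (the hash abc1234 is a placeholder, and in the kernel workflow a revert is sent as a patch for review rather than pushed directly):

    import subprocess

    # Revert a hypothetical bad commit in a local kernel tree. "abc1234"
    # is a placeholder hash; real reverts go through normal patch review.
    subprocess.run(["git", "-C", "linux", "revert", "--no-edit", "abc1234"],
                   check=True)
    # Show the revert commit that was just created.
    subprocess.run(["git", "-C", "linux", "log", "-1", "--oneline"], check=True)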
> 1. The experiment is not on live, deployed, versions of the kernel.
The patches were merged, and the email thread discusses that the patches made it to the stable tree. Some (many?) Linux distributions track and run from stable.
> 2. There are mechanisms in place for preventing actual merging of the faulty patches.
Those mechanisms failed.
> 3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
Arguably. But I think this is a weak argument.
You seem to think this experiment was performed on the Linux kernel itself. It was not. This research was performed on human beings.
It's irrelevant whether any bugs were ultimately introduced into the kernel. The fact is the researchers deliberately abused the trust of other human beings in order to experiment on them. A ban on further contributions is a very light punishment for such behavior.
How would you feel about researchers delivering known-faulty-under-some-conditions AoA sensors to Boeing, just to see if Boeing's QA process would catch those errors before final assembly?
It's important to note that they used temporary emails for the patches in this research. It's detailed in the paper.
The main problem is that they have (so far) refused to explain in detail where and how the patches were reviewed. I have not gotten any links to any lkml post, even after Kangjie Lu personally emailed me to address any concerns.
Seems like a bit of a strong response. Universities are large places with lots of professors and people with different ideas, opinions, and views, and they don't work in concert -- quite the opposite. They're not some corporation with a unified goal or incentives.
I like that. That's what makes universities interesting to me.
I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, or will in the future.
The goal is not penalizing or lumping everyone together. The goal is to have the issue fixed in the most effective manner. It's not the Linux team's responsibility to allow contributions from some specific university, it's the university's. This measure enforces that responsibility. If they want access, they should rectify.
I would then say that the goal and the choice aren't aligned because "penalizing or lumping everyone together" is exactly the choice made.
One way to get everyone in a university on the same page is to punish them all for the bad actions of a few. It appears like this won't work here because nobody else is contributing and so they won't notice.
It's not the number of people directly affected that will matter, it's the reputational problems of "umn.edu's CS department got the entire UMN system banned from submitting to the Linux kernel and probably some other open source projects."
And anyone without much power to effect change is SOL.
I know the kernel doesn't need anyone's contributions anyhow, but as policies go this seems like a bad one.
This was approved by the university's ethics board, so if trust in the university rests in part on its students' actions having to pass an ethics bar, it makes sense to withdraw that trust until the ethics committee has shown that it has improved.
The ethics board is most likely not at fault here. They were simply lied to, if we take Lu's paper seriously. I would just expel the 3 malicious actors here: the 2 students and the Prof who approved it. I don't see any fault in Wang yet.
The damage is not that big: only 4 committers to Linux from UMN in the last decade -- 2 of them, the students, with malicious backdoors; the Prof, not with bad code but with bad ethics; and the 4th, the assistant professor, who did good patches and has already left.
I'd concur: the university is the wrong unit-of-ban.
For example: what happens when the students graduate- does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?
Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?
On the other hand: What obligation do the Linux kernel maintainers have to allow UMN staff and students to contribute to their project?
> Does the ban stay with UMN, even after everyone involved left?
It stays with the university until the university provides a good reason to believe they should not be particularly untrusted.
If they use a different email but someone knows they work at the university?
It's a chain that gets really unpleasant.
It's the university that allowed the research to take place. It's the university's responsibility to fix its own organisation's issues. The kernel maintainers have enough on their plate without having to figure out who at the university is trustworthy and who isn't, considering their IRB is clearly flying blind.
That is completely irrelevant. They are acting under the university, and their "research" is backed by the university and approved by the university's department.
If the university has a problem with that, then it should first look into managing this issue at its end, or force people to use personal email addresses for such purposes.
I don't feel sorry at all. If you want to contribute from there, show that the rogue professor and their students have been prevented from making further malicious contributions (which probably means, at a minimum, barring them from contributing at all for quite a long period -- fair, given the repeated infractions), and I'm sure that you will be able to contribute back again under the university umbrella.
If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.
How could a single student or professor possibly achieve that? Under the banner of "academic freedom" it is very hard to get someone fired because you don't like their research.
It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.
It definitely would suck to be someone at UMN doing legitimate work, but I don't think it's reasonable to ask maintainers to also do a background check on who the contributor is and who they're advised by.
I find it hard to believe this research passed IRB.
It didn't. Rather, it didn't get IRB review until after the research had been conducted.
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
How thorough is IRB review? My gut feeling is that these are not necessarily the most conscientious or informed bodies. Add into the mix a proposal that conceals the true nature of what's happening.
(All of this ASSUMING that the intent was as described in the thread.)
It varies a lot. A professor I worked for was previously at a large company in an R&D setting. He dealt with 15-20 different IRB's through various research partnerships, and noted Iowa State (our university) as having the most stringent requirements he had encountered. In other universities, it was pretty simple to submit and get approval without notable changes to the research plan. If they were unsure on something, they would ask a lot of questions.
I worked on a number of studies through undergrad and grad school, mostly involving having people test software. The work to get a study approved was easily 20 hours for a simple "we want to see how well people perform tasks in the custom software we developed; they'll come to the university and use our computer, to avoid concerns about security bugs in the software". You needed a script of everything you would say, every question you would ask, and how the data would be collected, analyzed, and stored securely. Data retention and destruction policies had to be noted. The key linking a person's name and their participant ID had to be stored separately. You had to describe how you would recruit participants, down to the exact poster or email you intended to send out. The reading level of the instructions and the aptitude of the audience were considered (so academic mumbo jumbo didn't confuse participants).
If you check the box that you'll be deceiving participants, there was another entire section to fill out detailing how they'd be deceived, why it was needed for the study, etc. Because of past unethical experiments in the academic world, there is a lot of scrutiny and you typically have to reveal the deception in a debriefing after the completion of the study.
Once a study was accepted (in practice, a multiple month process), you could make modifications with an order of magnitude less effort. Adding questions that don't involve personal information of the participant is a quick form and an approval some number of days later.
If you remotely thought you'd need IRB approval, you started a conversation with the office and filled out some preliminary paperwork. If it didn't require approval, you'd get documentation stating such. This protects the participants, university, and professor from issues.
--
They took it really seriously. I'm familiar with one study where participants would operate a robot outside. An IRB committee member asked what would happen if a bee stung the participant? If I remember right, the resolution was an epipen and someone trained in how to use it had to be present during the session.
They are probably more familiar with medical research and the types of things that go wrong there. Bad ethics in medical situations, including psychology, is well understood. However, it is hard to figure out how a mechanical engineer could violate ethics.
Seems extreme. One unethical researcher blocks work for others just because they happen to work for the same employer? They might not even know the author of the paper...
The university reviewed the "study" and said it was acceptable. From the email chain, it looks like the maintainers have already complained to the university multiple times, and have apparently been ignored. Banning anyone at the university from contributing seems like the only way to handle it, since they can't trust the institution to ensure its students aren't doing unethical experiments.
Plus, it sets a precedent: if your university condones this kind of "research", you will have to face the consequences too...
Well, the decision can always be reversed, but on the outset I would say banning the entire university and publicly naming them is a good start. I don't think this kind of "research" is ethical, and the issue needs to be raised. Banning them is a good opener to engage the instiution in a dialogue.
It seems fair enough to me. They were curious to see what happens; this is what happens. Giving them a free pass because they're a university would be artificially skewing the results of the research.
Low trust and negative trust should be fairly obvious costs to messing with a trust model - you could easily argue this is working as intended.
They reported unethical behavior to the university and the university failed to prevent it from happening again.
It is an extreme response to an extreme problem. If the other researchers don't like the situation? They are free to raise the problem to the university and have the university clean up the mess they obviously have.
Well, shit happens. Imagine doctors working in organ transplants, and one of them damaging people's trust by selling access to organs to rich patients. Of course that damages the field for everyone. And to deal with such issues, doctors have an ethics code, and in many countries associations which will sanction the bad eggs. Perhaps scientists need something like that, too?
The University approved this research. How can one trust anything from that university now?
It approved the research, which I don't find objectionable.
The objectionable part is that the group allegedly continued after having been told to stop by the kernel developers.
That's not really how it works. Nobody's out there 'approving' research (well, not seemingly small projects like this), especially at the university level. Professors (all the way down to PhD students!) are usually left to do what they like, unless there are specific ethical concerns that should be put before a review panel. I suppose you could argue that this work should have been brought before the ethics committee, but it probably wasn't, and in CS there isn't a stringent process like there is in e.g. psychology or biology.
Forking the kernel should be sufficient for research.
Not if the research involves the reviewing aspects of open source projects.
Apparently they aren't doing human experiments, it's only processes and such. So they can easily emulate the processes in-house too!
This research is specifically about getting patches accepted into open source projects, so that wouldn't work at all.
For other research happening in the university. This particular research is trivial anyway, see https://news.ycombinator.com/item?id=26888417
Not a big loss: these professors likely hate open source. [edit: they do not. See child comments.]
They are conducting research to demonstrate that it is easy to introduce bugs in open source...
(whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)
[removed this ranting that does not apply since they are contributing a lot to the kernel in good ways too]
> Not a big loss: these professors likely hate open source.
> They are conducting research to demonstrate that it is easy to introduce bugs in open source...
That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.
(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)
> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during review process.
Ascribing a salutary motive to sabotage is just as dangerous as assuming a pernicious motive. Suggesting that people "would" likely follow one course of action or another is also dangerous: it is the oldest form of sophistry, the eikos argument of Corax and Tisias. After all, if publishing research rules out pernicious motives, academia suddenly becomes the best possible cover for espionage and state-sanctioned sabotage designed to undermine security.
The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.
The difference between:
"Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"
and
"We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"
is the difference between white-hat and black-hat.
Auditability is at the core of its advantage over closed development.
Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.
To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.
> It's likely a university with professors that hate open source.
This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.
Seems like a reasonable default assumption to me, until the people repeatedly attempting to sabotage the open source community condescend to -- you know -- stop doing it and then explain wtf they are thinking.
[Edit: they seem to truly love OSS. See child comments. Sorry for my erroneous judgement. It reminded me too much of anti-open-source FUD; I'm probably having PTSD from that time...]
I fixed my sentence.
I still think that these professors, whether genuinely or through unwillingness, do not understand the mechanism by which free software achieves its greater quality compared to proprietary software (which is a fact).
They just remind me of the good old days of FUD against open source by Microsoft and its minions...
At least in the university where I did my studies, each professor had their own way of thinking and you could not group them into any one basket.
Fair point.
I'll just leave my comment as it is. The university administration still bears responsibility for the fact that it waived IRB review.
> the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc to find bugs/security holes. Most open source projects don't even have tests.
Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
> (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)
Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?
Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!
You need to compare what happens with vulnerabilities in OSS vs in proprietary.
A maintainer package is just one more piece of open source software (thus also in need of reviews and audits)... which is why some people prefer upstream-source-based distros, such as Gentoo, Arch when you use git-based AUR packages, or LFS for the hardcore fans.