“They introduce kernel bugs on purpose”

4 years ago (lore.kernel.org)

The professor gets exactly what they want here, no?

"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".

That'll be a fun paper to write, no doubt.

Additional context:

* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].

Greg KH immediately calls bullshit on this and proceeds to ban the entire university from making commits [2].

The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.

As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie is simply a continuation of their experiment of 'what would someone intentionally trying to add malicious code to the kernel do').

* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]

[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2

[2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

[3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...

  • Thanks for the support.

    I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

    • Just wanted to say thanks for your work!

      As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often, since people took the time to contribute, you want to be "on their side" and help them "make it".

      Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.

      So thanks. Keep it up.

      7 replies →

    • A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?

      From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...

      > A lot of these have already reached the stable trees.

      Apologies in advance if my questions are off the mark, but what does this mean in practice?

      1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?

      2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?

      3. Will there be a post-mortem for this attack/attempted attack?

      71 replies →

    • As an alumnus of the University of Minnesota's program I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alumnus, and I am deeply sorry for the harm this caused.

      56 replies →

    • I hope they take this bad publicity and stop (rather than escalating stupidity by using non university emails).

      What a joke - not sure how they can rationalize this as valuable behavior.

      18 replies →

    • I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.

      11 replies →

    • I have to ask: were they not properly reviewed when they were first merged?

      Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.

    • Thanks for your important work, Greg!

      I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing specific parts that have been modified (and a fuzzer that is memory/binary aware).

      Would a project like this be unfeasible due to the sheer amount of commits/day?
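
      As a rough illustration of the idea (purely a sketch; the function and harness below are invented, and this is a userspace approximation rather than anything kernel CI actually runs), one could generate a coverage-guided, sanitizer-instrumented harness for just the functions a patch touches:

        /* Hypothetical libFuzzer harness targeting only the code a patch modified.
         * parse_record() is a made-up stand-in for the patched function.
         * Build (userspace): clang -g -fsanitize=fuzzer,address harness.c
         */
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Stand-in for the patched function under test. */
        static int parse_record(const uint8_t *buf, size_t len)
        {
            char *copy;

            if (len < 4)
                return -1;
            copy = malloc(len);
            if (!copy)
                return -1;
            memcpy(copy, buf, len);
            /* ... parse the copy ... */
            free(copy);
            return 0;
        }

        /* libFuzzer entry point: with -fsanitize=address, use-after-free,
         * double-free and out-of-bounds accesses in the exercised code are
         * reported automatically. */
        int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
        {
            parse_record(data, size);
            return 0;
        }

      Whether generating and running something like this for every patch scales to the kernel's commit volume is exactly the open question.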

    • > should be aware that future submissions from anyone with a umn.edu address should be by default-rejected

      Are you not concerned these malicious "researches" will simply start using throwaway gmail addresses?

      2 replies →

    • Putting the ethical question of the researcher aside, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

      Since this researcher is apparently not an established figure in the kernel community, my expectation is that these patches would have gone through the most rigorous review process. If you think the risk that malicious patches from this person have gotten in is high, it means that an unknown attacker deliberately crafting a complex kernel loophole would have an even higher chance of getting patches in.

      While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.

      56 replies →

    • This might not be on purpose. If you look at their article, they're studying how to introduce bugs that are hard to detect, not ones that are easy to detect.

    • > Thanks for the support.

      THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!

    • My deepest thanks for all your work, as well as for keeping the standards high and the integrity of the project intact!

    • Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.

      Just reverting those patches (which may well be correct) makes no sense; you and/or other maintainers need to properly review them after your previous abject failure at doing so, properly determine whether they are correct or not, and if they aren't, work out how they got merged anyway and how you will stop this happening again.

      Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.

      2 replies →

  • If the IRB is any good, the professor doesn't get that. Universities are publish or perish, and the IRB should force the withdrawal of all papers they submitted. This might be enough to fire the professor with cause - including removing any tenure protection they might have - which means they get a bad reference.

    I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time. (there is the possibility that these are good faith patches and someone in the linux community just hates this person - seems unlikely but until a proper independent investigation is done I'll leave that open.)

    • See page 9 of the already published paper:

      https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...

      > We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.

      94 replies →

    • This is, at the very least, worth an investigation from an ethics committee.

      First of all, this is completely irresponsible; what if the patches had made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.

      Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.

      The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.

      7 replies →

    • > I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time

      That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.

      Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.

      1 reply →

    • What about IEEE and the peer reviewers who didn't object to their publications?

      I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!

    • I'm amazed this passed IRB. Consider the analogy:

      We presented students with an education protocol designed to make a blind subset of them fail tests, then measured whether they failed the test to see if they independently learned the true meaning of the information.

      Under any sane IRB you would need consent of the students. This is failure on so many levels.

      (edit to fix typo)

  • I'm really not sure what the motive to lie is. You got caught with your hand in the cookie jar, time to explain what happened before they continue to treat you like a common criminal. Doing a pentest and refusing to state it was a pentest is mind boggling.

    Has anyone from the "research" team commented and confirmed this was even them or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure if this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.

    • It's also not a pen test. Pen testing is explicitly authorized, where you play the role as an attacker, with consent from your victim, in order to report security issues to your victim. This is just straight-up malicious behavior, where the "researchers" play the role as an attacker, without consent from their victim, for personal gain (in this case, publishing a paper).

      6 replies →

  • The way you phrased it reminds me of a study that showed how parachutes do not in fact save lives (the study was more to show the consequences of extrapolating data, so the result should not be taken seriously):

    https://www.bmj.com/content/363/bmj.k5094

    • The original referenced paper is also very good: http://elucidation.free.fr/parachuteBMJ.pdf (can't find a better formatted link, sorry)

      Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

      With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right

    • I liked this bit, from the footnotes: "Contributors: RWY had the original idea but was reluctant to say it out loud for years. In a moment of weakness, he shared it with MWY and BKN, both of whom immediately recognized this as the best idea RWY will ever have."

  • Well, part of the experiment is to see how deliberately malicious commits are handled. Banning is the result. They got what they wanted. Play stupid games, win stupid prizes.

  • But this raises an obvious question: Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities? If we have learned anything from the SolarWinds hack, it is that if there is a way to introduce a vulnerability then someone will do it, sooner or later. And they won't publish a paper about it, so that shouldn't be the only way to detect it!

    • So, it turns out that sometimes programmers introduce bugs into software. Sometimes intentionally, but much more commonly accidentally.

      If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.

    • That question has been obvious for quite some time. It is always possible to introduce subtle vulnerabilities. Research has tried for decades to come up with a solution, to no real avail.

      12 replies →

    • > Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities?

      Yes, it does.

      Now, how do you do that other than having fallible people review things?

  • The problem with such an experiment is that it can be a front. If you are a big entity, gov, or whatever, and you need to insert a vulnerability in the kernel, you can start a "research project". Then you try to inject it under this pretense, and if it fails, you can always say "my bad, it was for science".

  • I had a uni teacher who thought she was a genius because her research team peppered Wikipedia with fake information while timing how long it took to be removed.

    "The Earth is the center of the universe" took 1000 years to remove from books, so I'm not sure what her point was :D

  • Joke's on you - this was really sociology research on anger response levels of open source communities when confronted with things that look like bad faith.

    • WaitASecond...are you saying that this was an experiment to find out how the maintainers would react to being experimented on? ;)

  • Setting aside the ethical aspects which others have covered pretty thoroughly, they may have violated 18 U.S.C. §1030(a)(5) or (b). This law is infamously broad and intent is easier to "prove" than most people think, but #notalawyer #notlegaladvice. Please don't misinterpret this as a suggestion that they should or should not be prosecuted.

  • So, the patch was about a possible double-free, presumably detected by a bad static analyzer. Couldn't this patch have been done in good faith? That's not at all impossible.

    However, the prior activity of submitting bad-faith code is indeed pretty shameful.
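
    To make the bug class concrete, here is a minimal, hypothetical C sketch (invented for illustration, not taken from any actual UMN patch) of how a small, plausible-looking "fix a leak" change can introduce a real double-free:

      #include <stdlib.h>
      #include <string.h>

      struct msg {
          char *payload;
      };

      /* Caller contract: on any failure, the caller calls msg_free(),
       * which releases payload exactly once. */
      void msg_free(struct msg *m)
      {
          free(m->payload);
          free(m);
      }

      int msg_set_payload(struct msg *m, const char *src, size_t max)
      {
          m->payload = malloc(max);
          if (!m->payload)
              return -1;
          if (strlen(src) >= max) {
              /* A hypothetical "fix memory leak on error path" patch adds the
               * next line. It looks harmless in a three-line diff, but the
               * caller's msg_free() will free payload again: a double-free. */
              free(m->payload);
              return -1;
          }
          strcpy(m->payload, src);
          return 0;
      }

      int main(void)
      {
          struct msg *m = calloc(1, sizeof(*m));

          if (!m)
              return 1;
          if (msg_set_payload(m, "this payload is longer than max", 8) != 0)
              msg_free(m);   /* frees payload a second time; ASan reports it */
          return 0;
      }

    A reviewer who sees only the diff sees an extra free() on an error path, which is part of why such patches can pass review whether or not they were written in good faith.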

    • I'm not a linux kernel maintainer but it seems like the maintainers all agree it's extremely unlikely a static analyzer could be so wrong in so many different ways.

  • Interestingly, the Sokal Squared guy got banned from future research for "unauthorized human experimentation".

    It's a different university, but I wonder if these people will see the same result.

  • I think this hasn't gone far enough. The university has shown that it is willing to allow its members to act in bad faith for their own interests, under the auspices of acting ethically for scientific reasons. The university itself cannot be trusted _ever again_.

    Black list the whole lot from everything, everywhere. Black hole that place and nuke it from orbit.

  • Perhaps the Linux kernel team should actively support a Red Team to do this with a notification when it would be merged into the stable branch.

    • What would be the point? Of course people can miss things in code review. Yet the Linux developer base and user base has decided that generally an open submission policy has benefits that outweigh the risks.

      Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?

      1 reply →

  • Not that I approve of the methods, but why would an IRB be involved in a computer security study? IRBs are for human subjects research. If we have to run everything that looks like any kind of research through IRBs, the Western gambit on technical advantage is going to run into some very hard times.

    • The subjects were the kernel team. They should have had consent to be part of this study. It's like red team testing: someone somewhere has to know about it and consent to it.

      1 reply →

  • It wasn’t a real experiment, it was a legitimate attempt to insert bugs into the code base and this professor was going to go on speaking tours to self promote and talk about how easy it was to crack Linux. If it looks like grift it’s probably grift. This was never about science.

  • > The professor gets exactly what they want here, no?

    I don't think they're a professor are they? Says they're a PhD student?

  • Yet another reason to absolutely despise the culture within academia. The US Federal government is subsidizing a collection of pathologically toxic institutions, and this is one of many results, along with HR departments increasingly mimicking the campus tribalism.

  • > The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.

    What's preventing those bad actors from not using a UMN email address?

    • Nothing. However, if they can't claim ownership of the drama they have caused, it's not useful for publishable research, so it does nix these idiots from causing further drama while working at this institution. For now.

      1 reply →

    • > What's preventing those bad actors from not using a UMN email address?

      Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.

      Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge knowing that you've already been called out on it would be monumentally bad.

      I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.

    • If they had submitted them from personal or anonymous email addresses, the patches might have come under more scrutiny.

      They gain some trust by coming from university email addresses.

      2 replies →

    • Nothing. I think the idea is 60% deterrence via collective punishment - "if we punish the whole university, people will be less likely to do this in future" - and 40% "we must do something, and this is something, therefore we must do it".

  • Isn't this reaction a bit like the emperor banishing anyone who tells him that his new clothes are fake? Are the maintainers upset that someone showed how easy it is to subvert kernel security?

    • It’s more like the emperor banning a group of people who put the citizens in danger just so they could show that it could be done. The researchers did something unethical and acted in a self-serving manner. It’s no surprise that someone would get kicked out of a community after seriously breaking the trust of that community.

    • Yeah, maybe it's fragile security. Fortunately, the problem has been found, and the 'attackers' aren't the real enemy.

Later down thread from Greg K-H:

> Because of this, I will now have to ban all future contributions from your University.

Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.

EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.

In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Liu[4].

Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Liu in any significant way.

[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22

[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...

[3]: https://www-users.cs.umn.edu/~kjlu/

[4]: http://cobweb.cs.uga.edu/~wenwen/

  • > I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018

    New plan: Show up at Liu's house with a lock picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo, "hey, just testing, bro! Legitimate security research!"

    • If they wanted to do security research, they could have done so in the form of asking the reviewers to help; send them a patch and ask 'Is this something you would accept?', instead of intentionally sending malicious commits and causing static on the commit tree and mailing lists.

      10 replies →

    • This is funny, but not at all a good analogy. There's obviously not remotely as much public interest or value in testing the security of this professor's private home to justify invading his privacy for the public interest. On the other hand, if he kept dangerous things at home (say, BSL-4 material), then his house would need 24/7 security and you'd probably be able to justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.

      11 replies →

    • Put a flaming bag of shit on the doorstep, ring the doorbell, and write a paper about the methods Liu uses to extinguish it?

    • I wouldn't be surprised if the good, conscientious members of the UMN community showed up at his office (or home) door to explain, in vivid detail, the consequences of doing unethical research.

    • The actual equivalent would be to steal his computer, wait a couple days to see his reaction, get a paper published, then offer to return the computer.

  • > Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.

    That's the university's problem to fix.

    • If this experience doesn't change the behavior of U of M's IRB, and inform the behavior of every other IRB as well, then nothing at all is learned from it.

      Unless both the professors and the IRB's leadership are getting an uncomfortable lecture in the chancellor's office, nothing at all changes.

  • This is not responsible research. This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.

    There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.

    • No, it's totally okay to feel sorry for good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith. It's sad that the actions of irresponsible researchers and associated review boards affect people who had nothing to do with professor Lu's research.

      It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.

      5 replies →

    • > This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.

      This analogy is invalid, because:

      1. The experiment is not on live, deployed, versions of the kernel.

      2. There are mechanisms in place for preventing actual merging of the faulty patches.

      3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.

      All of the above is not true for the in-flight airline.

      However - I'm not claiming the experiment was not ethically faulty. Certainly, the U Minnesota IRB needs to issue a report and an explanation on its involvement in this matter.

      14 replies →

  • It's important to note that they used temporary emails for the patches in this research. It's detailed in the paper.

    The main problem is that they have (so far) refused to explain in detail how and where the patches were reviewed. I have not gotten any links to any lkml post even after Kangjie Lu personally emailed me to address any concerns.

  • Seems like a bit of a strong response. Universities are large places with lots of professors and people with different ideas, opinions, views, and they don't work in concert, quite the opposite. They're not some corporation with some unified goal or incentives.

    I like that. That's what makes universities interesting to me.

    I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, will in the future, or not.

    • The goal is not penalizing or lumping everyone together. The goal is to have the issue fixed in the most effective manner. It's not the Linux team's responsibility to allow contributions from some specific university, it's the university's. This measure enforces that responsibility. If they want access, they should rectify.

      9 replies →

    • One way to get everyone in a university on the same page is to punish them all for the bad actions of a few. It appears like this won't work here because nobody else is contributing and so they won't notice.

      2 replies →

    • This was approved by the university's ethics board, so if trust in the university rests in part on its members' actions passing an ethics bar, it makes sense to remove that trust until the ethics committee has shown that it has improved.

      2 replies →

    • I'd concur: the university is the wrong unit-of-ban.

      For example: what happens when the students graduate- does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?

      Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?

      3 replies →

    • It's the university that allowed the research to take place. It's the university's responsibility to fix their own organisation's issues. The kernel has enough on their plate than to have to figure out who at the university is trustworthy and who isn't considering their IRB is clearly flying blind.

    • That is completely irrelevant. They are acting under the university, and their "research" is backed by the university and approved by the university's department.

      If the university has a problem, then it should first look into managing this issue at its end, or force people to use personal email addresses for such purposes.

  • I don't feel sorry at all. If you want to contribute from there, show that the rogue professor and their students have been prevented from doing further malicious contributions (that is probably at least: from doing any contribution at all during a quite long period -- and that is fair against repeated infractions), and I'm sure that you will be able to contribute back again under the University umbrella.

    If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.

    • How could a single student or professor possibly achieve that? Under the banner of "academic freedom" it is very hard to get someone fired because you don't like their research.

      It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.

  • It definitely would suck to be someone at UMN doing legitimate work, but I don't think it's reasonable to ask maintainers to also do a background check on who the contributor is and who they're advised by.

  • I find it hard to believe this research passed IRB.

  • Seems extreme. One unethical researcher blocks work for others just because they happen to work for the same employer? They might not even know the author of the paper...

    • The university reviewed the "study" and said it was acceptable. From the email chain, it looks like they've already complained to the university multiple times, and have apparently been ignored. Banning anyone at the university from contributing seems like the only way to handle it, since they can't trust the institution to ensure its students aren't doing unethical experiments.

      1 reply →

    • Well, the decision can always be reversed, but at the outset I would say banning the entire university and publicly naming them is a good start. I don't think this kind of "research" is ethical, and the issue needs to be raised. Banning them is a good opener to engage the institution in a dialogue.

      1 reply →

    • They reported unethical behavior to the university and the university failed to prevent it from happening again.

    • It is an extreme response to an extreme problem. If the other researchers don't like the situation, they are free to raise the problem to the university and have the university clean up the mess it obviously has.

    • Well, shit happens. Imagine doctors working in organ transplants, and one of them damages people's trust by selling access to organs to rich patients. Of course that damages the field for everyone. And to deal with such issues, doctors have an ethics code, and in many countries there are associations which will sanction bad eggs. Perhaps scientists need something like that, too?

  • Not a big loss: these professors likely hate open source. [edit: they do not. See child comments.]

    They are conducting research to demonstrate that it is easy to introduce bugs in open source...

    (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

    [removed this ranting that does not apply since they are contributing a lot to the kernel in good ways too]

    • > Not a big loss: these professors likely hate open source.

      > They are conducting research to demonstrate that it is easy to introduce bugs in open source...

      That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.

      (Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)

      > whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

      How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during review process.

      15 replies →

    • > It's likely a university with professors that hate open source.

      This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.

      8 replies →

    • > the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

      That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc to find bugs/security holes. Most open source projects don't even have tests.

      Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.

    • > (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

      Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?

      Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!

      2 replies →

Some clarifications since they are unclear in the original report.

- Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.

- According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.

Greg has every reason to be unhappy since they were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *is very likely* to have nothing to do with the three intentionally incorrect patch attempts leading to the paper. Many people on HN do not seem to know this.

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

  • Aditya's advisor [1] is one of the co-authors of the paper. He at least knew about this work and was very likely involved with it.

    [1] https://adityapakki.github.io/assets/files/aditya_cv.pdf

      There is no doubt that Kangjie is involved in Aditya's research work, which led to bogus patches being sent to Linux devs. However, based on my understanding of how CS research groups usually function, I do not think Kangjie knew the exact patches that Aditya sent out. In this specific case, I feel Aditya is more likely the one to blame: he should have examined these automatically generated patches more carefully before sending them in for review.

      6 replies →

  • Aditya's story about the new patches is that he was writing a static analysis tool and was testing it by... submitting PRs to the Linux kernel? He's either exploiting the Linux maintainers to test his new tool, or that story's bullshit. Even taking his story at face value is justification to at least ban him personally IMO.

    • Sounds like these commits aren't related to that paper, they're related to the next paper he's working on, and the next one is making the same category error about human subjects in his study.

  • This is Aditya Pakki's website:

    https://adityapakki.github.io/

    In this "About" page:

    https://adityapakki.github.io/about/

    he claims "Hi there! My name is Aditya and I’m a second year Ph.D student in Computer Science & Engineering at the University of Minnesota. My research interests are in the areas of computer security, operating systems, and machine learning. I’m fortunate to be advised by Prof. Kangjie Lu."

    so he in no uncertain terms is claiming that he is being advised in his research by Kangjie Lu. So it's incorrect to say his patches have nothing to do with the paper.

    • I would encourage you not to post people's contact information publicly, specially in a thread as volatile as this one. Writing "He claims in his personal website" would bring the point across fine.

      This being the internet, I'm sure the guy is getting plenty of hate mail as it is. No need to make it worse.

      2 replies →

    • > So it's incorrect to say his patches have nothing to do with the paper.

      Professors usually work on multiple projects, which involve different grad students, at the same time. Aditya Pakki could be working on a different project with Kangjie Lu, and not be involved with the problematic paper.

      3 replies →

  • > S&P 2021 paper did not introduce any bugs into Linux kernel.

    I used to work as an auditor. We were expected to conduct our audits to neither expect nor not expect instances of impropriety to exist. However, once we had grounds to suspect malfeasance, we were "on alert", and conduct tests accordingly.

    This is a good principle that could be applied here. We could bat backwards and forwards about whether the other submissions were bogus, but the presumption must now be one of guilt rather than innocence.

    Personally, I would have been furious and said, in no uncertain terms, that the university keep a low profile and STFU lest I be sufficiently provoked to taking actions that lead to someone's balls being handed to me on a plate.

    • What sort of lawsuit might they bring against a university whose researchers deliberately inserted malicious code into software that literally runs a good portion of the world?

      I'm no lawyer, but it seems like there'd be something actionable.

      On a side note, this brings into question any research written by any of the participating authors, ever. No more presumption of good faith.

      2 replies →

  • > According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.

    Except that at least one of those three, did [0]. The author is incorrect that none of their attempts became git commits. Whatever process that they used to "check different versions of Linux and further confirmed that none of the incorrect patches was adopted" was insufficient.

    [0] https://lore.kernel.org/patchwork/patch/1062098/

    • > The author is incorrect that none of their attempts became git commits

      That doesn't appear to be one of the three patches from the "hypocrite commits" paper, which were reportedly submitted from pseudononymous gmail addresses. There are hundreds of other patches from UMN, many from Pakki[0], and some of those did contain bugs or were invalid[1], but there's currently no hard evidence that Pakki was deliberately making bad-faith commits--just the association of his advisor being one of the authors of the "hypocrite" paper.

      [0] https://github.com/torvalds/linux/commits?author=pakki001@um...

      [1] Including his most recent that was successfully applied: https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...

  • But Kanjie Lu, Pakki’s advisor, was one of the authors. The claim that “ You, and your group, have publicly admitted to sending known-buggy patches” may not be totally accurate (or it might be—Pakki could be on other papers I’m not aware of), but it’s not totally inaccurate either. Most academic work is variations on a theme, so it’s reasonable to be suspect of things from Lu’s group.

    • As Greg KH notes when it is suggested that he write a formal complaint, he has no time to deal with such BS. He has no time to play detective: you are involved in a group that does BS and this smells like BS again, so you're banned.

      Unfair? Maybe: complain to your advisor.

  • It shouldn’t be up to the victim to sort that out. The only thing that could perhaps have changed here is for the university wide ban to have been announced earlier. Perhaps the kernel devs assumed that no one would be so shameless as to continue to send students back to someone they had already abused.

    • The person in power here is Greg KH. It seems like he can accept/reject/ban anyone for any reason with little recourse for the counter-party. I'm willing to withhold judgement on these allegations until the truth comes out. Seems like many here want retribution before any investigation.

      2 replies →

  • There's only one way the kernel dev team can afford to look at this: A bad actor tried to submit malicious code to the kernel using accounts on the U of M campus. They can't afford to assume that the researchers weren't malicious, because they didn't follow the standards of security research and did not lay out rules of engagement for the pentest. Because that trust was violated, and because nobody in the research team made the effort to contact the appropriate members of the dev team (in this case, they really shoulda taken it to Torvalds), the kernel dev team can't risk taking another patch from U of M because it might have hidden vulns in it. For all we know, Aditya Pakki is a pseudonym. For all we know, the researchers broke into Aditya's email account as part of their experiment--they've already shown that they have a habit of ignoring best practices in infosec and 'forgetting' to ask permission before conducting a pentest.

    • I agree, the kernel team shouldn't make decisions based on the intents to submit such patches.

      It's like going to a government building with a bomb threat and then claiming it was only an experiment to find security loopholes.

  • From his message, the ones that triggered his anger were patches he believed to be obviously useless and therefore either incompetently submitted or submitted as some other form of experimentation. After the intentionally incorrect patches, he could no longer allow the presumption of good faith.

  • It doesn't matter. I think this is totally appropriate. A group of students are submitting purposely buggy patches? It isn't the kernel team's job to sift through and distinguish them; they come down and nuke the entire university. This sends a message to any other university thinking of a similar stunt: you try this bull hockey, and you and your entire university are going to get caught in the blast radius.

    In short "f** around, find out"

    • On the plus side, I guess they get a hell of a result for that research paper they were working on.

      "We sought to probe vulnerabilities of the open-source public-development process, and our results include a methodology for getting an entire university's email domain banned from contributing."

      3 replies →

    • I seriously doubt this policy would have been adopted if other unrelated groups at the same university were submitting constructive patches.

  • I read through that clarification doc. I don't like their experiment, but I have to admit their patch submission process is responsible (after receiving a "looks good" for the bad patch, they point out the flaw in the patch, give the correct fix, and make sure the bad patch doesn't get into the tree).

This isn't friendly pen-testing in a community, this is an attack on critical infrastructure using a university as cover. The foundation should sue the responsible profs personally and seek criminal prosecution. I remember a bunch of U.S. contractors said they did the same thing to one of the openbsd vpn library projects about 15 years ago as well.

What this professor is proving out is that open source and (likely, other) high trust networks cannot survive really mendacious participants, but perhaps by mistake, he's showing how important it is to make very harsh and public examples of said actors and their mendacity.

I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.

  • I am not knowledgeable enough to know if this intent is provable, but if someone can frame the issue appropriately, it feels like it could be good to report this to the FBI tip line so it is at least on their radar.

  • > The foundation should sue the responsible profs personally and seek criminal prosecution.

    This is overkill and uncalled for.

    • Organizing an effort, with a written mandate, to knowingly introduce kernel vulnerabilities, through deception, that will spread downstream into other Linux distributions, likely including firmware images, which may not be patched or reverted for months or years - does not warrant a criminal investigation?

      The foundation should use recourse to the law to signal they are handling it, if only to prevent these profs from being mobbed.

      9 replies →

Here's a clarification from the Researchers over at UMN[1].

They claim that none of the bogus patches were merged to the stable code line:

> Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

I haven't been able to find out which 3 patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.

[1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

[2] : https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

  • The response makes the researchers seem clueless, arrogant, or both - are they really surprised that kernel maintainers would get pissed off at someone deliberately wasting their time?

    From the post:

      * Does this project waste certain efforts of maintainers?
      Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them
    

    "Yes, this wastes maintainers time, but we decided we didn't care."

    • Fascinating that the research was judged not to involve human subjects....

      As someone not part of academia, how could this research be judged to not involve people? It _seems_ obvious to me that the entire premise is based around tricking/deceiving the kernel maintainers.

      5 replies →

    • This is an indignant rebuttal, not an apology.

      No one says "wasted their precious time" in a sincere apology. The word 'precious' here is exclusively used for sarcasm in the context of an apology, as it does not represent a specific technical term such as might appear in a gemology apology.

      11 replies →

    • Or more charitably: "Yes, this spent some maintainers time, but only a small amount and it resulted in bugfixes, which is par for the course of contributing to linux"

    • Your honor, I tried to find any solution for testing this new poison without poisoning a bunch of people, but I carefully considered it and I couldn't find any, so I went ahead and secretly poisoned them. Clearly, I am innocent! Though I sincerely apologize for any inconvenience caused.

    • > We had carefully considered this issue, but could not figure out a better solution in this study.

      Couldn't figure out that "not doing it" was an option apparently.

  • In the end, the damage has been done and the Linux developers are now going back and removing all patches from any user with a @umn.edu email.

    Not sure how the researchers didn't see how this would backfire, but it's a hopeless misuse of their time. I feel really bad for the developers who now have to spend their time fixing shit that shouldn't even be there, just because someone wanted to write a paper and their peers didn't see any problems either. How broken is academia really?

    • This, in and of itself, is a finding. The researchers will justify their research with "we were banned, which is a possible outcome of this kind of research..." I find this disingenuous. When a community of open source contributors is partially built on trust, then violators can and will be banned.

      The researchers should have approached the maintainers to get buy-in, and set up a methodology where a maintainer would not interfere until a code merge was imminent, just playing referee in the meantime.

      1 reply →

    • I feel the same way. People don't understand how difficult it is to be a maintainer. This is very selfish behaviour. Appreciate Greg's strong stance against it.

  • > I haven't been able to find out which 3 patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.

    That's because those are two separate incidents. The study which resulted in 3 patches was completed some time last year, but this new round of patches is something else.

    It's not clear whether the patches are coming from the same professor/group, but it seems like the author of these bogus patches is a PhD student working with the professor who conducted that study last year. So there is at least one connection.

    EDIT: also, those 3 patches were supposedly submitted using a fake email address according to the "clarification" document released after the paper was published. So they probably didn't use a @umn.edu email at all.

  • The main issue here is that it wastes the time of the reviewers and they did not address it in their reply.

  • It's disrespectful to people who are contributing their personal time while working for free on open source projects.

    With more than 60% of all academic publications not being reproducible [1], one would think academia has better things to do than wasting other people's time.

    [1] https://en.wikipedia.org/wiki/Replication_crisis

  • I wonder why they didn't just ask in advance. Something like 'we would like to test your review process over the next 6 months and will inform you before a critical patch hits the users', might have been a win-win scenario.

  • It seems to me like the other patches were submitted in good faith, but that the maintainer no longer trusts them because of the other bad commits.

The University of Minnesota's Department of Computer Science and Engineering released a statement [0] and "suspended this line of research".

[0] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...

  • Not sure how this university is run but this doesn't sound plausible to me.

    >... learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel

    And this sounds like mainly a lot of damage control is going to happen.

    >We will report our findings back to the community as soon as practical.

    • Why does it sound implausible? In any uni I've interacted with, profs did pretty much their own thing and without a reason very little attention is paid to how they do it (or even what they do).

In the follow up chain it was stated that some of their patches made it to stable: https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...

Can someone who's more invested in kernel devel find them and analyze their impact? That sounds pretty interesting to me.

Edit: This is the patch reverting all commits from that mail domain: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...

Let me play devil's advocate here. Such pen-testing is absolutely essential to the safety of our tech ecosystem. Countries like Russia, China and USA are without a doubt, doing exactly the same thing that this UMN professor is doing. Except that instead of writing a paper about it, they are going to abuse the vulnerabilities for their own nefarious purposes.

Conducting such pen-tests, and then publishing the results openly, helps raise awareness about the need to assume-bad-faith in all OSS contributions. If some random grad student was able to successfully inject 4 vulnerabilities before finally getting caught, I shudder to think how many vulnerabilities were successfully injected, and hidden, by various nation-states. In order to better protect ourselves from cyberwarfare, we need to be far more vigilant in maintaining OSS.

Ideally, such research projects should gain prior approval from the project maintainers. But even though they didn't, this paper is still a net-positive contribution to society, by highlighting the need to take security more seriously when accepting OSS patches.

  • The world works better without everyone being untrusting of everyone else, and this is especially true of large collaborative projects. The same goes in science - it has been shown over and over again that if researchers submit deliberately fraudulent work, it is unlikely to be picked up by peer review. Instead, it is simply deemed as fraud, and researchers that do that face heavy consequences, including jail time.

    Without trust, these projects will fail. Research has shown that even in the presence of untrustworthy actors, trusting is usually still beneficial [1][2]. Instead, trusting until you have reason to believe you shouldn't has been found to be an optimal strategy [2], so G K-H is responding exactly appropriately here. The Linux community trusted them until they didn't, and now they are unlikely to trust them going forward.

    [1] https://www.nature.com/articles/s41598-019-55384-4#Sec13 [2] https://medium.com/greater-than-experience-design/game-theor...

    • If an open-source project adopts a trusting attitude, nation-states can and will take advantage of this in order to inject dangerous vulnerabilities. Telling university professors not to pen-test OSS does not stop nation-states from doing the same thing secretly. It just sweeps the problem under the rug.

      Would I prefer to live in a world where everyone behaved in a trustworthy manner in OSS? Absolutely. But that is not the world we live in. A professor highlighting this fact, and forcing people to realize the dangers in trusting people, does more good than harm.

      --------------

      On a non-serious and humorous note, this episode reminds me of the Sokal Hoax. Most techies/scientists I've met were very appreciative of this hoax, even though it wasn't conducted with pre-approval from the subjects. It is interesting to see the shoe on the other foot

      https://en.wikipedia.org/wiki/Sokal_affair

  • Pen testing is essential, yes, but there are correct and incorrect ways to do it. This was the latter. In fact attempts like this harm the entire industry because it reflects poorly on researchers/white hat hackers who are doing the right thing. For example, making sure your testing is non-destructive is the bare minimum, as is promptly informing the affected party when you find an exploit. These folks did neither.

    • Unrelated to the Linux kernel, there is a good example of how Mario Heiderich (probably the most knowledgeable person for XSS on the globe) purposefully introduced an XSS vuln into AngularJS through a patch after (!!!) checking it with the relevant authorities and even then it was a close-ish call: https://m.youtube.com/watch?v=wzrojHHyQwc

  • > this paper is still a net-positive contribution to society

    There's claims that one vulnerability got committed and was not reverted by the research group. In fact the research group didn't even notice that it got committed. So I'd argue that this was a net negative to society because it introduced a live security vulnerability into linux.

  • It's always useful to search for, and upvote, a reasonable alternative opinion. Thank you for posting it.

    There are a lot of people reading these discussions who aren't taking 'sides' but trying to think about the subject. Looking at different angles helps with thinking.

  • We already know that good faith can be abused; it's practically implied in the phrase itself. There is nothing of value to be learned from this "research".

    • This research implies that the linux team should not be operating on good faith.

      Software as critical as Linux should not be this easily compromised by a bunch of grad students.

      It's one of the core technologies of our computing.

      Having a discussion around the ethics of this is great, but it does not detract from the importance of the bigger issue.

      1 reply →

  • No, this did not teach anyone anything new except that members of that UMN group are untrustworthy. Nothing else new was learned here at all.

  • Any party caught willingly sabotaging such a prominent open source project would definitely face greater consequences than just a ban.

  • An excellent point; however, without prior approval and safety mechanisms, their acts were absolutely malicious. Treating them as anything but malicious, even if "for the greater good of OSS", sets a horrible precedent. "The road to hell is paved with good intentions" is the quote that comes to mind. Minnesota got exactly what they deserve.

From https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...,

> A lot of these have already reached the stable trees.

If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.

  • I tangentially followed the debacle unfold for a while and this particular thread now has lead to heated debates on some IRC channels I'm on.

    While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on, could IMO be described as utterly reckless at best.

    Two messages down in the same thread, it more or less culminates with the university e-mail suffix being banned from several kernel mailing lists and associated patches being removed[1], which might be an appropriate response to discourage others from similar stunts "for science".

    [1] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

    • I'm confused. The cited paper contains this prominent section:

      Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.

      Are you saying that despite this, these malicious commits made it to production?

      Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers—which isn't nothing by any stretch, but is a far cry from introducing bugs in production.

      Are the authors lying?

      18 replies →

    • > While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on, could IMO be described as utterly reckless at best.

      I agree. I would say this is kind of a "human process" analog of your typical computer security research, and that this behavior is akin to black hats exploiting a vulnerability. Totally not OK as research, and totally reckless!

      9 replies →

    • I assume that having these go into production could make the authors "hackers" according to law, no?

      Haven't whitehat hackers doing unsolicited pen-testing been prosecuted in the past?

    • If they're public IRC channels, do you mind mentioning them here? I'm trying to find the remnant. :)

  • There’s no research going on here. Everyone knows buggy patches can get into a project. Submitting intentionally bad patches adds nothing beyond grandstanding. They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

    From FOSDEM 2014, NSA operation ORCHESTRA annual status report. It’s pretty entertaining and illustrates that this is nothing new.

    https://archive.fosdem.org/2014/schedule/event/nsa_operation... https://www.youtube.com/watch?v=3jQoAYRKqhg

    • > They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

      Very good point.

  • It may be unethical from an academic perspective, but I like that they did this. It shows there is a problem with the review process if it is not catching 100% of this garbage. Actual malicious actors are certainly already doing worse and maybe succeeding.

    In a roundabout way, this researcher has achieved their goal, and I hope they publish their results. Certainly more meaningful than most of the drivel in the academic paper mill.

    • It mostly shows up a very serious problem with the incentives present in scientific research, and a poisonous culture which obviously seems to reward malicious behavior. Science enjoys a lot of freedom and trust from citizens, but this trust must not be misused. If some children at play threw fireworks under your car, or mixed sugar into the gas tank, just to see how you react, that would have negative community effects, too. Adult scientists should be totally aware of that.

      In effect, this will lead to even valuable contributions from universities being viewed with more suspicion, which will be very damaging in the long run.

    • >It shows there is a problem with the review process if it is not catching 100% of this garbage

      What review process catches 100% of garbage? It's a mechanism to catch 99% of garbage -- otherwise the Linux kernel would have no bugs.

      2 replies →

    • The paper indicates that the goal is to prove that OSS in particular is vulnerable to this attack, but it seems that any software development ecosystem shares the same weaknesses. The choice of an OSS target seems to be one of convenience as the results can be publicly reviewed and this approach probably avoids serious consequences like arrests or lawsuits. In that light, their conclusions are misleading, even if the attack is technically feasible. They might get more credibility if they back off the OSS angle.

      3 replies →

    • > It shows there is a problem with the review process if it is not catching 100% of this garbage.

      Does that add anything new to what we know since the creation of the "obfuscated C contest" in 1984?

    • > It shows there is a problem with the review process if it is not catching 100% of this garbage.

      It shows nothing of the sort. No review process is 100% foolproof, and opensource means that everything can be audited if it is important to you.

      The other option is to close-source everything, and I can guarantee that those review processes let stuff through too, even if it's only "to meet deadlines" - and you will be unlikely to be able to audit it.

    • Unable to follow the kernel thread (stuck in an age between twitter and newsgroups, sorry), but...

      did these "researchers" in any way demonstrate that they were going to come clean about what they had done before their "research" made to anywhere close to release/GA?

    • By your logic, you would allow recording people without their consent, experimenting on PTSD by inducing PTSD without people's consent, or medical experimentation without the subject's consent.

      Try sneaking into the White House and, when you get caught, tell them "I was just testing your security procedures".

  • I think that the patches that hit stable were actually OK, based on the apparent intent to 'test' the maintainers and notify them of the bug and submit the valid patch after, but the thought process from the maintainers is:

    "if they are attempting to test us by first submitting malicious patches as an experiment, we can't accept what we have accepted as not being malicious and so it's safer to remove them than to keep them".

    my 2c.

    • The earlier patches could in theory be OK, but they also might combine with other or later patches which introduce bugs more stealthily. Bugs can be very subtle.

      Obviously, trust should not be the only thing that maintainers rely on, but it is a social endeavour and trust always matters in such endeavors. Doing business with people you can't trust makes no sense. Without trust I agree fully that it is not worth the maintainer's time to accept anything from such people, or from that university.

      And the fact that one can do damage with malicious code is nothing new at all. It is well known that bad code can ultimately kill people. It is also more than obvious that I could ring my neighbor's doorbell, ask him or her for a cup of sugar, and then hit them over the head with a hammer. Or people can go to a school and shoot children. Does anyone in their right mind have to do such damage in order to prove something? No. Does it prove anything? No. Does the fact that some people do things like that "prove" that society is wrong and that trust and collaboration are wrong? What idiocy, of course not!

  • It is worrying to consider that, in all likelihood, some people with actually malicious motives, rather than clinical academic curiosity, have introduced serious security bugs into popular FOSS projects such as the Linux kernel.

    Before this study came out, I'm pretty sure there were already known examples of this happening, and it would have been reasonable to assume that some such vulnerabilities existed. But now we have even more reason to worry, given that they succeeded in doing this multiple times as a two-person team without real institutional backing. Imagine what a state-level actor could do.

    • The same can be said about any software, really. It’s all too easy for a single malicious dev to introduce security bugs in pretty much any project they are involved in.

  • I wonder whether they broke any laws intentionally putting bugs in software that is critical to national security.

Greg does not joke around: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

    [PATCH 000/190] Revertion of all of the umn.edu commits

  • Seriously.... I am undecided one way or another on reverting everything, but I am happy that someone is looking closely at this.

    One thing I hope they have considered is a possible intent to get a particular important patch from umn.edu reverted to reintroduce a kernel bug. Discrediting all commits from the organization could inadvertently lead to the reintroduction of legacy exploits.

  • How does the kernel still run after reverting like this?

    • I was wondering the same thing. From the Patch itself:

      > This patchset has the "easy" reverts, there are 68 remaining ones that need to be manually reviewed. Some of them are not able to be reverted as they already have been reverted, or fixed up with follow-on patches as they were determined to be invalid. Proof that these submissions were almost universally wrong.

    • In all likelihood, it'll run just fine.

      Skimming through the subject lines of the 190 commits being reverted here, every single one of them is along the lines of "add refcount/NULL/etc check and conditionally do (or do not) de-allocate memory before error-path return". I.e., worst case, this will reintroduce some rare memory leak or memory-lifecycle bug (of the shape sketched below).

      Also, all of the patches in question are in drivers. So depending on the hardware used, any given system's user is likely to only have to worry about 2-3, maybe 5 of the patches, not all 190.
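
      For a concrete sense of that worst case, here is a minimal, self-contained C sketch (hypothetical code, not one of the actual reverted patches) of how an innocent-looking error-path "cleanup" of exactly this shape can turn into a memory-lifecycle bug:

          /* Hypothetical illustration: the caller already frees ctx->buf on
           * failure, so the added error-path free() silently turns a rare
           * failure case into a double free. */
          #include <stdlib.h>
          #include <string.h>

          struct device_ctx {
              char *buf;
          };

          static int setup_ctx(struct device_ctx *ctx, const char *name)
          {
              ctx->buf = malloc(64);
              if (!ctx->buf)
                  return -1;
              if (strlen(name) >= 64) {
                  free(ctx->buf);  /* the "fix": free on the error path...       */
                  return -1;       /* ...but ctx->buf still points at freed mem  */
              }
              strcpy(ctx->buf, name);
              return 0;
          }

          int main(void)
          {
              struct device_ctx ctx = { 0 };

              if (setup_ctx(&ctx, "short-name") != 0) {
                  free(ctx.buf);   /* existing caller cleanup: this becomes a
                                      double free on the too-long-name path     */
                  return 1;
              }
              free(ctx.buf);       /* normal teardown on the success path        */
              return 0;
          }

      Reviewed as an isolated hunk, the extra free() reads like a leak fix, which is presumably why patches of this kind are hard to reject on sight.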

    • The answer is somewhere between "it's 'only' 190 patches" and "Greg posting this patch series doesn't mean it's applied to stable yet"

  • >Some of them are not able to be reverted as they already have been reverted, or fixed up with follow-on patches as they were determined to be invalid. Proof that these submissions were almost universally wrong.

How does something like this get through an IRB? I always felt IRBs were over the top, and then they approve something like this?

UMN looks pretty shoddy - the response from the researcher saying these were automated by a tool looks like a potential lie.

  • They obtained an "IRB-exempt letter" because their IRB found that this was not human research. It's quite likely that the IRB made this finding based on a misrepresentation of the research during that initial stage; once they had an exemption letter the IRB wouldn't be looking any closer.

    • Not necessarily. And the conflation of IRB-exemption and not human subjects research is not exactly correct.[0]

      Each institution, and each IRB is made up of people and a set of policies. One does not have to meaningfully misrepresent things to IRBs for them to be misunderstood. Further, exempt from IRB review and 'not human subjects research' are not actually the same thing. I've run into this problem personally - IRB declines to review the research plan because it does not meet their definition of human subjects research, however the journal will not accept the article without IRB review. Catch-22.

      Further, research that involves deception is also considered a perfectly valid form of research in certain fields (e.g., Psychology). The IRB may not have responded simply because they see the complaint as invalid. Their mandate is protecting human beings from harm, not random individuals who email them from annoyance. They don't have in their framework protecting the linux kernel from harm any more than they have protecting a jet engine from harm (Sorry if that sounds callous). Someone not liking a study is not research misconduct and if the IRB determined within their processes that it isn't even human subjects research, there isn't a lot they can do here.

      I suspect that this is just one of those disconnects that happen when people talk across disciplines. No misrepresentation was needed; all that was needed was for someone reviewing this, whose background is medicine and not CS, to not understand the organizational and human processes behind submitting a software 'patch'.

      The follow-up behavior... not great... but the start of this could be a series of individually rational actions that combine into something problematic because they were not holistically evaluated in context.

      [0] https://oprs.usc.edu/irb-review/types-of-irb-review/

      10 replies →

    • That's what it seemed like to me as well. Based on their research paper, they did not mention the individuals they interacted with at all.

      They also lied in the paper about their methodology - claiming that once their code was accepted, they told the maintainers it should not be included. In reality, several of their bad commits made it into the stable branch.

      6 replies →

    • My understanding is that it's pretty common for CS departments to get IRB exemption even when human participants are tangentially involved in studies.

      10 replies →

  • > the response from the researcher saying these were automated by a tool looks like a potential lie.

    To be clear, this is unethical research.

    But I read the paper, and these patches were probably automatically generated by a tool (or perhaps guided by a tool, and filled in concretely by a human): their analyses boil down to a very simple LLVM pass that just checks for pointer dereferences and inserts calls to functions that are identified as performing frees/deallocations before those dereferences. Page 9 and onwards of the paper[1] explains it in reasonable detail.

    [1]: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
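
    To make that concrete, here is a minimal C sketch of the use-after-free shape such a pass hunts for (hypothetical names, not taken from the paper's actual patches): a free/put helper reachable on one path, followed by a dereference of the same pointer.

        /* Hypothetical sketch: a "cleanup" call reachable on an error path,
         * with control falling through to a later dereference of the same
         * pointer - the classic UAF shape described in the paper. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct session {
            int refcount;
            char *data;
        };

        static void put_session(struct session *s)
        {
            if (--s->refcount == 0) {
                free(s->data);
                free(s);
            }
        }

        static void handle_request(struct session *s, int flags)
        {
            if (flags < 0) {
                put_session(s);       /* inserted "cleanup" on the error path   */
                /* a missing return here is the whole bug...                     */
            }
            printf("%s\n", s->data);  /* ...because this use can now follow the
                                         final put of the session               */
        }

        int main(void)
        {
            struct session *s = malloc(sizeof(*s));
            if (!s)
                return 1;
            s->refcount = 1;
            s->data = malloc(6);
            if (!s->data) {
                free(s);
                return 1;
            }
            strcpy(s->data, "hello");
            handle_request(s, 0);     /* benign path; flags < 0 would be a UAF  */
            put_session(s);
            return 0;
        }

    Presumably the analysis can stay simple because this conditional-put-then-use shape is easy to match mechanically, while the resulting "minor patch" still looks plausible to a human reviewer.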

    • Thanks for this, very helpful.

      Could they have submitted patches to fix the problems based on the same tooling, or was that not possible (I am not close to the kernel development flow)?

      1 reply →

  • I have a feeling that methods of patching the Linux kernel is a concept most members of IRB boards wouldn't understand at all. It's pretty far outside their wheelhouse.

  • IRB is useless. They don't use much context, including whether the speediness of IRB approval would save lives. You could make a reasonable argument that IRB has contributed to millions of preventable deaths at this point; with COV alone it's at least tens of thousands, if not far more.

    • This is the unfortunate attitude that leads to bad research and reduces trust in science. If you think IRB has contributed to deaths you should make a case, because right now you sound like a blowhard.

    • By COV do you mean Covid? It sounds like you're alluding to the argument that if they'd only let us test potential vaccines on humans right away then we would have had a vaccine faster. I disagree that that's a foregone conclusion, and you certainly need a strong argument or evidence to make such a claim.

It would be fascinating to see the ethics committee exemption. I sense there was none.

Or is this kind of experiment deemed fair game? Red vs blue team kind of thing? Penetration testing.

But if it were me in this situation, I'd ban them for the ethics violation as well. Acting like an evildoer means you might get caught... and punished. I found the email about cease and desist particularly bad behavior. If that student was lying, then the university will have to take real action. Reputation damage and all that. Surely an academic reprimand, at least.

I'm sure there's plenty of drama and context we don't know about.

  • I didn't read this bit: "The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter"

    Um. Ok.

    • Some people are questioning whether banning the entire university is an appropriate response. It sounds to me like there are systemic institutional issues that they need to address, and perhaps banning them until they can sort those out wouldn't be an entirely unreasonable thing to do.

      3 replies →

    • How does an IRB usually work? Is it the same group of people reviewing all proposals for the entire university? Or are there subject-matter experts (and hopefully lawyers) tapped to review proposals in their specific domain? Applying “ethics” to a proposal is meaningless without understanding not just how they plan to implement it but how it could be implemented.

      1 reply →

  • I'm gonna guess the committee didn't realize the "patch process" was a manual review of each patch. The way it's worded in the paper, you'd think they were testing some sort of automated integration-testing system or something.

  • The ethics committee issued a post-hoc exemption after the paper was published.

    • Wow. That is a flagrant violation of research ethics by everyone involved. UMN needs to halt anything even close to human subjects research until they get their IRB back under control, who knows what else is going on on campus that has not received prior approval. Utter disaster.

      3 replies →

  • Institutional review boards are notorious for making sure that all of the i's are dotted and the t's are crossed on the myriad of forms they require, but without actually understanding the nature of the research they are approving.

I don't think there have been any recent comments from anyone at U.Mn. So, back when the original research happened (last year), the following clarification was offered by Qiushi Wu and Kangjie Lu, which at least paints their research in a somewhat better light: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

That said, the current incident seems to go beyond the limits of that one. I just thought it would be fair to include their "side".

  • From their explanation:

    (3). We send the incorrect minor patches to the Linux community through email to seek their feedback.

    (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

    ------------------------

    But this shows a distinct lack of understanding of the problem:

    > This is not ok, it is wasting our time, and we will have to report this,

    > AGAIN, to your university...

    ------------------------

    You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

    1. The voluntary consent of the human subject is absolutely essential.

    • Holy cow!! I'm a researcher and don't understand how they thought it would be okay not to go through an IRB, and how an IRB would not catch this. The PDF linked by the parent post is quite illustrative. The first few paras seem to be downplaying the severity of what they did (they did not introduce actual bugs into the kernel), but that is not the bloody problem. They experimented on people (maintainers) without consent and wasted their time (maybe with other effects too, e.g. making them wary of future commits from universities)! I'm appalled.

      11 replies →

    • > You do not experiment on people without their consent.

      Exactly this. Research involving human participants is supposed to have been approved by the University's Institutional Review Board; the kernel developers can complain to it: https://research.umn.edu/units/irb/about-us/contact-us

      It would be interesting to see what these researches told the IRB they were doing (if they bothered).

      Edited to add: From the link in GP: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)"

      Okay so this IRB needs to be educated about this. Probably someone in the kernel team should draft an open letter to them and get everyone to sign it (rather than everyone spamming the IRB contact form)

      7 replies →

    • In any university I've ever been to, this would be a gross violation of ethics with very unpleasant consequences. Informed consent is crucial when conducting experiments.

      If this behaviour is tolerated by the University of Minnesota (and it appears to be so), then I suppose that's another institution added to my list of unreliable research sources.

      I do wonder what the legal consequences are. Would knowingly and willfully introducing bad code constitute a form of vandalism?

      7 replies →

    • > You do not experiment on people without their consent.

      Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

      From a common sense standpoint, it seems to me this is more about medical experiments. Yesterday I put some of my kids' toys away without telling them to see if they’d notice and still play with them. I don’t think I need IRB approval.

      8 replies →

    • It does seem rather unethical, but I must admit that I find the topic very interesting. They should definitely have asked for consent before starting with the "attack", but if they did manage to land security vulnerabilities despite the review process it's a very worrying result. And as far as I understand they did manage to do just that?

      I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.

      19 replies →

    • They apparently didn't consider this "human research"

      As I understand it, any "experiment" involving other people that weren't explicitly informed of the experiment before hand needs to be a lot more carefully considered than what they did here.

      1 reply →

    • In this post they say the patches come from a static analyser and they accuse the other person of slander for their criticisms

      > I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

      > These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

      ( https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )

      How does that fit in with your explanation?

      5 replies →

    • > You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

      > 1. The voluntary consent of the human subject is absolutely essential.

      The Nuremberg code is explicitly about medical research, so it doesn't apply here. More generally, I think that the magnitude of the intervention is also relevant, and that an absolutist demand for informed consent in all - including the most trivial - cases is quite silly.

      Now, in this specific case I would agree that wasting people's time is an intervention that's big enough to warrant some scrutiny, but the black-and-white way some people phrase this really irks me.

      PS: I think people in these kinds of debate tend to talk past one another, so let me try to illustrate where I'm coming from with an experiment I came across recently:

      To study how the amount of tips waiters get changes in various circumstances, some psychologists conducted an experiment where the waiter would randomly either give the guests some chocolate with the bill, or not (control condition).[0] This is, of course, perfectly innocuous, but an absolutist claim about research ethics ("You do not experiment on people without their consent.") would make research like this impossible without any benefit.

      [0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1816...

    • But this is all a lie. If you read the linked thread you will see that they refused to admit to their experiment and even sent a new, differently broken patch.

    • There is sometimes an exception for things like interviews when n is only a couple of people. This was clearly unethical, and it’s certain that at least some of those involved knew that. It’s common knowledge at universities.

    • I'm confused - how is this an experiment on humans? Which humans? As far as I can tell, this has nothing to do with humans, and everything to do with the open-source review process - and if one thinks that it counts as a human experiment because humans are involved, wouldn't that logic apply equally to pentesting?

      For that matter, what's the difference between this and pentesting?

      2 replies →

    • > indicating “looks good”

      I wonder how many zero days have been included already, for example by nation state actors...

    • You could argue that they are doing the maintainers a favor. Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.

      If I were on the receiving end, I’d think about checking a patch multiple times before accepting it.

      2 replies →

    • >You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

      >1. The voluntary consent of the human subject is absolutely essential.

      Does this also apply to scraping people's data?

    • > You do not experiment on people without their consent.

      By this logic, e.g. resume callback studies aiming to study bias in the workforce would be impossible.

    • In the last year, when it came to experimental Covid-19 projections, modeling, and population-wide recommendations from major academic centers, the IRBs were silent and academics did essentially whatever they wanted, regardless of "consent" from the populations that were the subjects of their speculative hypotheses.

    • > You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

      > 1. The voluntary consent of the human subject is absolutely essential.

      Which is rather useless, as for many experiments to work, participants have to either be lied to, or kept in the dark as to the nature of the experiment, so whatever “consent” they give is not informed consent. They simply consent to “participate in an experiment” without being informed as to the qualities thereof so that they truly know what they are signing up for.

      Of course, it's quite common in the U.S.A. to perform practice medical checkups on patients who are going under narcosis for an unrelated operation, and they never consented to that, but the hospitals and physicians that partake in that are not sanctioned as it's “tradition”.

      Know well that so-called “human rights” have always been, and shall always be, a show of air that lacks substance.

      2 replies →

  • Their first suggestion for the process is pure gold: "OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”"

    Like somebody picking your locks and then suggesting, 'to stop this, one approach would be to post a sign "do not pick"'.

  • The fact that they took the feedback last time and decided "lets do more of that" is already a big red flag.

    • >>>On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits Qiushi Wu, and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

      from https://www-users.cs.umn.edu/~kjlu/

      If the original research results in a paper and an IEEE conference presentation, why not? There are no professional consequences for this conduct, apparently.

      9 replies →

  • This does paint their side better, but it also makes me wonder if they're being wrongly accused over this current round of patches? That clarification says that they only submitted 3 patches, and that they used a random email address when doing so (so presumably not @umn.edu).

    These ~200 patches from UMN being reverted might have nothing to do with these researchers at all.

    Hopefully someone from the university clarifies what's happening soon before the angry mob tries to eat the wrong people.

    • The study you’re quoting was a previous study by the same research group, from last year.

This feels like the kind of thing that "white hat" hackers have been doing forever. UMN may have introduced useful knowledge into the world in the same way some random hacker is potentially "helping" a company by pointing out that they've left a security hole exposed in their system.

With that said, kernel developers and companies with servers on the internet are busy doing work that's important to them. This sort of thing is always an unwelcome distraction.

And if my neighbor walks in my door at 3 a.m. to let me know I left it unlocked, I'm going to treat them the same way UMN is getting treated in this situation. Or worse.

  • Your analogy doesn't work. A true "white hat" hacker would hack a system to expose a security vulnerability, then immediately inform the owners of the system, all without using their unintended system access for anything malicious. In this case, the "researchers" submitted bogus patches, got them accepted and merged, then said nothing, and pushed back against accusations that they've been malicious, all for personal gain.

    EDIT: Also, even if you do no harm and immediately inform your victim, this sort of stuff might rather be categorized as grey-hat. Maybe a "true" white-hat would only hack a system with explicit consent from the owner. These terms are fuzzy. But my point is, attacking a system for personal gain without notifying your victim afterwards and leaving behind malicious code is certainly not white-hat by any definition.

    • You make a fair point. I'm just saying that, while it might ultimately be interesting and useful to someone or even lots of someones, it remains a crappy thing to do, and the consequences UMN is facing as a result are predictable and make perfect sense to me - a guy who has had to rebuild a few servers and databases over the years because of intrusions, a couple of which came with messages about how we should consult with the intruder who had less-than-helpfully found some security issue for us.

  • Hacking on software is one thing. Running experiments on people is something completely different.

    In order to do this ethically, all that's needed is respect towards our fellow human beings. This means informing them about the nature of the research, the benefits of the collected data, the risks involved for test subjects as well as asking for their consent and permission to be researched on. Once researchers demonstrate this respect, they're likely to find that a surprising number of people will allow them to perform their research.

    We all hate it when big tech tracks our every move and draws all kinds of profitable conclusions based on that data at our expense. We hate it so much we deploy active countermeasures against it. It's fundamentally the same issue.

  • A closer modification of your metaphor would have a reputable institution in your life enter your apartment on the strength of its credibility. It is not surprising when that institution then has its credibility downranked.

The problem here is really that they’re wasting time of the maintainers without their approval. Any ethics board would require prior consent to this. It wouldn’t even be hard to do.

  • > The problem here is really that they’re wasting time of the maintainers without their approval.

    Not only that, but they are also doing experiments on a community of people, which is against that community's interest and could also be harmful by creating mistrust. Trust is a big issue; without it, it is almost impossible for people to work together meaningfully.

    • Yeah, this actually seems more like sociological research, except that since it’s in the comp-sci department the investigators don’t seem to be trained in acceptable (and legal) standards for conducting such research on human subjects. You definitely need prior consent when doing this sort of thing. Ideally this would be escalated to a research ethics committee at UMN, because these researchers need to be trained in acceptable practices when dealing with human subjects. So to me it makes sense that the subjects “opted out” and escalated to the university.

      3 replies →

    • Besides that, if their "research" patch gets into a release, it could potentially put thousands or millions of users at risk.

  • 1) They identified vulnerabilities in a process. 2) They contributed the correct code after showing the maintainers the security vulnerability they missed. 3) Getting the consent of the people behind the process would invalidate the results.

    • Go hack a random organization without a vulnerability disclosure program in place and see how much goodwill you have. There is a very established best practice in how to do responsible disclosure and this is far from it.

      24 replies →

    • > 3) Getting the consent of the people behind the process would invalidate the results.

      This has not been a valid excuse since the 1950s. Scientists are not allowed to ignore basic ethics because they want to discover something. Deliberately introducing bugs into any open source project is plainly unethical; doing so in the Linux kernel is borderline malicious.

      12 replies →

    • You're right, and it is depressing how negative the reaction has been here. This work is the technical equivalent of "Sokalling", and it is a good and necessary thing.

      The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities. Forget the researchers for a moment - if it is this easy, you can be certain that malicious actors are also doing it. The only difference is that they are not then disclosing that they have done so!

      The Linux maintainers should be grateful that researchers are doing this, and researchers should be doing it to every significant open source project.

      3 replies →

I hope USENIX et al ban this student / professor / school / university associated with this work from submitting anything to any of their conferences for 10 years.

This was his clarification https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

...in which they have the nerve to say that this is not considered "human research". It most definitely is, given that their attack vector is the same one many people would be keen on using for submitting legitimate requests for getting involved.

If anything, this "research" highlights the notion that coding is but a small proportion of programming and delivery of a product, feature, or bugfix from start-to-finish is a much bigger job than many people like to admit to themselves or others.

Reading this email exchange, I worry about the state of our education system, including computer science departments. Instead of making coherent arguments, this PhD student speaks about "preconceived biases". I loved Greg's response. The spirit of Linus lives within the Kernel! These UMN people should be nowhere near the kernel. I guess they got the answer to their research on what would happen if you keep submitting stealth malicious patches to the kernel: you will get found out and banned. Made my day.

  • The tone of Pakki's reply made me cringe:

    > Attitude that is not only unwelcome but also intimidating to newbies and non experts

    Between that and the "Clarifications" document suggesting they handle it by updating their Code of Conduct, they're clearly trying really hard to frame all of this as some kind of toxic culture in kernel development. That's a hideous defense. It's like a bad MMA fight where one fighter refuses to stand up because he insists on keeping it a ground fight. Maybe it works sometimes, but it's shameful.

The research yielded unsurprising results: stealthy patches without a proper smokescreen to provide a veil of legitimacy will cause the purveyor of the patches to become blacklisted... DUH!

I still don't get the point of this "research".

You're just testing the review ability of particular Linux kernel maintainers at a particular point in time. How does that generalize to the extent needed for it to be valid research on open source software development in general?

You would need to run this "experiment" hundreds or thousands of times across most major open source projects.

  • >the point of this "research".

    I think it's mostly "finger pointing": you need one exception to break a rule. If the rule is "open source is more secure than closed source because community/auditing/etc.", now with a paper demonstrating that this rule is not always true you can write a nice Medium article for your closed-source product, quoting said paper, claiming that your closed-source product is more secure than the open competitor.

    • I don't think this is correct. The authors have contributed a large number of legitimate bugfixes to the kernel. I think they really did believe that process changes can make the kernel safer and that by doing this research they can encourage that change and make the community better.

      They were grossly wrong, of course. The work is extremely unethical. But I don't believe that their other actions are consistent with a "we hate OSS and want to prove it is bad" ethos.

      1 reply →

  • The Linux kernel is one of the largest open-source projects in existence, so my guess is that they were aiming to show that "because the Linux kernel review process doesn't protect against these attacks, most open-source project will also be vulnerable" - "the best can't stop it, so neither will the rest".

    • But we have always known that someone with sufficient cleverness may be able to slip vulnerabilities past reviewers of whatever project.

      Exactly how clever? That varies from reviewer to reviewer.

      There will be large projects, with many people that review the code, which will not catch sufficiently clever vulnerabilities. There will be small projects with a single maintainer that will catch just about anything.

      There is a spectrum. Without conducting a wide-scale (and unethical) survey with a carefully calibrated scale of cleverness for vulnerabilities, I don't see how this is useful research.

      2 replies →

Research without ethics is research without value.

Unbelievable that this could have passed ethics review, so I'd bet it was never reviewed. Big black eye for the University of Minnesota. Imagine if you are another doctoral student in CS/EE and this tool has ruined your ability to participate in Linux.

  • > Research without ethics is research without value.

    Didn't we learn a lot from the Nazi/Japanese experiments from WW2?

    • From my understanding - no, actually. We learnt a bit, on the very extreme scale of things, but most of the "experiments" were not conducted in any kind of way that would yield usable data.

      1 reply →

    • We did. Often we wish they could have got more decimal points in a measurement, or had known how to check for some factor. Despite all the gains and potential breakthroughs lost, nobody is willing to repeat them or anything like them. I know enough people who were medically given 2 weeks to live and were still around 10 years later that I can't think of any situation where I'd make an exception.

      Though what counts as "a lot" is also open to question. Much of what we learned isn't that useful for real-world problems. However, some of it has been important.

      2 replies →

    • Learn how to torture? Maybe. Learn real knowledge? No. Most of that info is not just sick but also impractical.

      The goal of the military is to protect or conquer. The goal of science is to find the truth, and the goal of engineering is to offer solutions. True leaders in any of these fields know there are more efficient means/systems for reaching those goals, even in the WW2 era.

    • Experiments producing lots of data doesn't necessarily mean they were useful. If the experiment was run improperly the data is untrustworthy, and if the experiment was designed to achieve things that aren't useful they may not have controlled for the right variables.

      And ultimately, we know what their priorities were and what kind of worldview they were operating under, so the odds are bad that any given experiment they ran would have been rigorous enough to produce results that could be reproduced in other studies and applied elsewhere. I'm not personally aware of any major breakthroughs that would have been impossible without the "aid" of eugenicist war criminals, though it's possible there's some major example I'm missing.

      We certainly did bring over lots of German scientists to work on nukes and rockets, so your question is not entirely off-base - but I suspect almost everyone involved in those choices would argue that rocketry research isn't unethical.

    • By and large, no. The Nazi experiments were based on faulty race science and were indistinguishable from brutal torture, and what remains is either useless or impossible to reproduce for ethical reasons.

  • I'm a total neophyte when it comes to the Linux kernel development process, but couldn't they just, y'know, use a Gmail address or something? Couldn't the original researchers have done the same?

    • Yes, they could. This is actually addressed in the original email thread:

      > But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.

      2 replies →

  • Some CS labs at UMN take ethics very seriously. Their UXR lab for example.

    Other CS labs at UMN, well... apparently not so much.

  • Ethics are highly subjective on the margins. In this case they completely missed this issue. However the opposite is more often the case.

    A good example is challenge testing Covid vaccines. This was widely deemed to be unethical despite large numbers of volunteers. Perhaps a million lives could have been saved if we had vaccines a few months sooner.

    Research without ethics (as currently practiced) can have value.

    • I can't agree that widespread challenge testing would have been ethical. It's a larger topic than HN can accommodate, but some factors I consider important: (1) NPIs are effective at reducing transmission, (2) the consequences of an outcome with side effects could include global and long-lived anti-vax sentiment -- COVID19 is unlikely to be our last pandemic.

      Issue (2) arose with the EU response to rare AZ/J+J side effects, where I believe the EU is more deserving of criticism. They will undoubtedly cause more deaths in their own populations and throughout the world than would occur from clotting complications, but no one will hold them to account. But they weighed their equities as more important than global benefit.

      3 replies →

  • Life-support machinery was developed with methods like cutting off dogs' heads, plugging them into machines, and seeing how long they showed signs of life.

    • If only we could have taught dogs to review kernel patches... we would probably all be out of work.

Well, they had it coming. They abused the community's trust once in order to gain data for their research, and now it's understandable GKH has very little regard for them. Any action has consequences.

Uhhh, I just read the paper, I stopped reading when I read what I pasted below. You attempt to introduce severe security bugs into the kernel and this is your solution?

To mitigate the risks, we make several suggestions. First, OSS projects would be suggested to update the code of conduct by adding a code like "By submitting the patch, I agree to not intend to introduce bugs."

Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any other way to do it other than actually introducing bugs per their proposal.

That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have say 80% of the class introduce "good code", 10% of the class review all code and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without having to potentially break legit code in use.

I say this since the crux of what they're trying to discover is:

1. In OSS anyone can commit.

2. Though people are incentivized to reject bad code, complexities of modern projects make 100% rejection of bad code unlikely, if not impossible.

3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door).

  • They could have contacted a core maintainer and explained to them what they planned to do. That core maintainer could have then spoken to other senior core maintainers in confidence (including Greg and Linus) to decide if this type of pentest was in the best interest of Linux and the OSS community at large. That decision would need to weigh the possibility of testing and hardening Linux's security review process against possible reputational damage as well as alienating contributors who might quite rightly feel they've been publicly duped.

    If leadership was on board, they could have then proceeded with the test under the supervision of those core maintainers who ensure introduced security holes don't find their way into stable. The insiders themselves would abstain from reviewing those patches to see if review by others catches them.

    If leadership was not on board, they should have respected the wishes of the Linux team and found another high-visibility open-source project who is more amenable to the project. There are lots of big open-source projects to choose from, the kernel simply happens to be high-profile.

    • Exactly. A test could have been conducted with the knowledge of Linus and Greg K-H, but not of the other maintainers. If the proposed patch made it all the way through, it could be blocked at the last stage from making it into an actual release or release candidate. But it should be up to the people in charge of the project whether they want to be experimented on.

    • I don't disagree, but the point of the research is more to point out a flaw in how OSS supposedly is conducted, not to actually introduce bugs. If you agree with what they were researching (and I don't) any sort of pre-emptive disclosure would basically contradict the point of their research.

      I still think the best thing for them would be to simply create their own project and force their own students to commit, but they probably felt that doing that would be too contrived.

      3 replies →

  • > Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any other way to do it other than actually introducing bugs per their proposal.

    they could've done the much harder work of studying all of the incoming patches looking for bugs, and then just not reporting their findings until the kernel team accepts the patch.

    the kernel has a steady stream of incoming patches, and surely a number of bugs in them to work with.

    yeah it would've cost more, but would've also generated significant value for the kernel.

    • The point of the research isn't to study bugs, it's to study hypocrite commits. Given that a hypocrite commit requires intention, there's no other way except to submit commits yourself as the submitter would obviously know their own intention.

      2 replies →

So, for "research" you're screwing around the development of one of the most widely used components in the computer world. Worse, introducing security holes that could reach production environments...

That's a really stupid behavior ...

Very embarrassed to see my alma mater in the news today. I was hoping these were just some grad students going rogue but it even looks like the IRB allowed this 'research' to happen.

  • It's very likely the IRB was misled. Don't feel too bad. I saw in one of the comments that the IRB was told that the researchers would be "sending emails," which seems to be an intentionally obtuse phrasing for them submitting malformed kernel patches.

So I won't lie, this seems like an interesting experiment and I can understand why the professor/research students at UMN wanted to do it, but my god the collateral damage against the University is massive. Banning all contributions from a major University is no joke. I also completely understand the scorched earth response from Greg. Fascinating.

I would check their ties to nation-state actors.

In closed source, nobody would even check. Modern DevOps has essentially replaced manual code review with unit tests.

  • I don't understand why this isn't a more widely-held sentiment. There's been instance after instance of corporate espionage in Western companies involving Chinese actors in the past 2 decades.

  • Yeah, state-actor scale sabotage was one of my first thoughts. And it gives me no joy to contemplate it.

    Secondly, the researcher’s attitude sounds high and mighty - making process-improvement suggestions when their own ethical compass is in question. Their “experiment” was “what would happen if...”. Well, bans happen. If you start a fight, don’t get indignant over a bloody nose, lol.

As a user of the linux kernel, I feel legal action against the "researchers" should be pursued.

  • I agree, I think they should be looking at criminal charges. This is the equivalent of getting a job at Ford on the assembly line and then damaging vehicles to see if anyone notices. I've been in software security for 13 years and the "Is Open Source Really Secure" question is so overdone. We KNOW there is risk associated with open source.

  • I feel somewhat similar. Since I am using Linux, they ultimately were trying to break the security of my computers. If I do that with any company without their consent, I can easily end up in jail.

    • It's more than that: if there are no consequences for this kind of action, we are going to get a wave of "security researcher" wannabes trying to pull similar bullshit.

      PS: I have put "security researcher" in quotes because this kind of thing is not security research; it's a publicity stunt.

    • >they ultimately were trying to break the security of my computers.

      No they weren't. They made sure the bad code never made it in. They are only guilty of wasting people's time.

      2 replies →

    • How dare they highlight the vulnerability that exists in the process! The blasphemy!

      How about you think about what they just proved, about the actors that *actually* try to break the security of the kernel.

  • I believe as a user of the kernel the warranty exclusion in GPLv2 means you have no legal recourse:

    > 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

    https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

    ...which is generally a good thing, even if it also protects clearly malicious actions like this.

I used to sit on a research ethics board. This absolutely would not have passed such a review. Not a 'revise and resubmit' but a hard pass accompanied by 'what the eff were you thinking?'. And, yes, this should have had an REB review: testing the vulnerabilities of a system that includes people is experimenting on human subjects. Doing so without their knowledge absolutely requires a strict human-subjects review, and these "studies" would not pass the first sniff test. I don't think it's even legal in most jurisdictions.

  • This is my understanding as well, but then how was such a paper accepted by IEEE?

    • Not sure. I expect that editors at such journals tend to assume that studies with an institutional sponsor will be held to professional standards by the sponsor, or take the authors' assertions at face value. I suspect that reviewers might have assumed that the study was done with the knowledge and permission of the project's managers, even if not the line programmers (as in the case of ethical pen testing). That would make it less of an obvious ethical breach.

I did my Ph.D in cognitive neuroscience, where I conducted experiments on human subjects. Running these kinds of experiments required approval from an ethics committee, which for all their faults (and there are many), are quite good at catching this kind of shenanigans.

Is there not some sort of equivalent in this field?

  • It seems they lied to the ethics committee. But I'm not holding my breath for the University to sanction them or withdraw/EoC their papers, because Universities prefer to have these things swept under the carpet.

I guess someone had to do this unethical experiment, but otoh, what is the value here? There's a high chance someone would later find these "intentional bugs"; that's how open source works anyway. They just proved that OSS is not military-grade, but nobody thought so anyway.

  • > They just proved that OSS is not military-grade , but nobody thought so anyway

    ...and yet FOSS and especially Linux is very widely used in military devices including weapons.

    Because it's known to be less insecure than most alternatives.

  • > They just proved that OSS is not military-grade...

    As if there is some other software that is "military-grade" by the same measure? What definition are you using for that term, anyway?

  • > but nobody thought so anyway

    A lot of people claim that there's a lot of eyes on the code and thus introducing vulnerabilities is unlikely. This research clearly has bruised some egos badly.

    • They were only banned after they accused Greg of slander when he called them out on their experiment and asked them to stop. They were banned for being dishonest and rude.

    • > A lot of people claim that there's a lot of eyes on the code.

      Eric Raymond claimed so, and a lot of people repeated his claim, but I don't think this is the same thing as "a lot of people claim" -- and even if a lot of people claim something that is obviously stupid, it doesn't make the thing less obviously stupid; it just means it's less obvious to some people for some reason.

      1 reply →

    • > A lot of people claim that there's a lot of eyes on the code

      And they are correct. Unfortunately sometimes the number of eyes is not enough.

      The alternative is closed source, which has proven to be orders of magnitude worse on many occasions.

Aditya: I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Greg: You can't quit, you're fired.

Interesting. If they reported this to the NSF's human-subjects research section, then to me this is a potential research-ethics issue.

Imagine saying you would like to test how the fire department responds to fires by setting buildings on fire in NYC.

  • Well, just a small fire which you promise to extinguish yourself if they don't show up on time. Of course, nobody can blame you if you didn't manage to extinguish it...

    Also, the buildings are not random but safety-critical infrastructure. But this is fine: you can advise them later to 'put a "please do not ignite" sign on the building'.

They should've at least sought approval from the maintainers' side, and perhaps orchestrated it so that the individual patch approver didn't know about it but some part of the organization did.

In network-security terms, this is unsolicited hacking rather than the penetration test it claims to be.

  • This is no better. All it does is increase the size of the research team. You’re still doing research on non-consenting participants.

Regardless of whether consent (which was not given) was required, it's worth pointing out that the emails sent to the mailing list were also intentionally misleading, or fraudulent, so some ethical line has obviously been crossed there.

Not wanting to play the devil's advocate here, but scummy as it was, they still successfully introduced vulnerabilities into the kernel. Suppose the paper hadn't been released, or an adversary had done this instead: how long would those bugs have lingered before ever being removed? The paper makes a case that FOSS projects shouldn't merely trust authority for security (neither those submitting nor those reviewing) but should utilize tools to find potential vulnerabilities in every commit.

  • > utilize tools to find potential vulnerabilities for every commit.

    The paper doesn't actually have concrete suggestions for tools, just hand-waving about "use static analysis tools, better than the ones you already use" and "use fuzzers, better than those that already exist."

    The work was a stunt to draw attention to the problem of malicious committers. In that regard, it was perhaps successful. The authors' first recommendation is for the kernel community to increase accountability and liability for malicious committers, and GregKH is doing a fantastic job at that by holding umn.edu accountable.

  • Coverity found at least one:

    vvv CID 1503716: Null pointer dereferences (REVERSE_INULL) vvv
    Null-checking "rm" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.

    and tools are useful, but given the resources and the know-how of those who compete in the IOCCC, I think we'd have to assume they'd be able to get something through. It'd have an even higher chance of success if it could be built to target a particular hardware combination (of a desired victim), as you could make the exploit dependent on multiple parts of the code (and likely nobody would ever determine its extent, as they'd find parts of it and fix them independently).
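
    For illustration, here is a minimal, self-contained sketch of the check-after-dereference pattern that Coverity's REVERSE_INULL checker flags (invented names, not the actual kernel code): the pointer is dereferenced first, so the later NULL check is either dead code or the dereference itself is the bug.

      #include <stdio.h>

      struct req { int flags; };

      /* Hypothetical example of the check-after-dereference pattern. */
      static void handle(struct req *r)
      {
          int flags = r->flags;      /* dereference happens here first...        */

          if (!r) {                  /* ...so this NULL check can never be the   */
              printf("null req\n");  /* guard it looks like; Coverity reports    */
              return;                /* this as REVERSE_INULL                    */
          }

          printf("flags=%d\n", flags);
      }

      int main(void)
      {
          struct req r = { .flags = 1 };
          handle(&r);
          return 0;
      }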

This is bullshit research. I mean, what they have actually found out through their experiments is that you can maliciously introduce bugs into the Linux kernel. But did anyone have doubts about this being possible prior to this "research"?

Obviously, bugs get introduced into all software projects all the time. And the bugs don't know whether they've been put there intentionally or accidentally. All bugs that ever appeared in the Linux kernel obviously made it through the review process, even when no one actively tried to introduce them.

So, why should it not be possible to intentionally insert bugs if it already "works" unintentionally? What is the insight gained from this innovative "research"?

I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

Responding properly to that statement would require someone to step out of the HN community guidelines.

This is a community that thinks it’s gross negligence if something with a real name on it fails to be airgapped.

Social shame and reputation damage may be useful defense mechanisms in general, but in a hacker culture where the right to make up arbitrarily many secret identities is a moral imperative, people who burn their identities can just get new ones. Banning or shaming is not going to work against someone with actual malicious intent.

  • It seems to be reacting to and solving the wrong problem, and it won't deter actual malicious attempts.

Wow, this "researcher" is a complete disaster. Who nurtures such a toxic attitude of entitlement and disregard for others' time and resources? Not to mention the possible real-world consequences of introducing bugs into this OS. He and his group need to be brought before an IRB.

  • Victim mentality is being cultivated on campuses all over the US. This will not be the last incident like this.

I would say the research was a success. They found that when a bad actor submits malicious patches they are appropriately banned from the project.

  • It does seem like ultimately they played themselves by getting permanently banned from participating.

So be it. Greg is a very trusted member, and has overwhelming support from the community for swinging the banhammer. We have a living kernel to maintain. Minnesota is free to fork the kernel, build their own, recreate the patch process, and send suggestions from there.

I'm pretty confident the NSA has been doing this for at least two decades, it's not a crazy enough conspiracy theory.

Inserting backdoors in the form of bugs is not difficult. Just hijack the machine of a maintainer, insert a well-placed semicolon, done!
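
As a toy illustration (a hypothetical check, not real kernel code), a single stray semicolon is enough to neutralize a guard:

    #include <stdio.h>

    /* Hypothetical permission check, for illustration only. */
    static int allowed(int uid) { return uid == 0; }

    int main(void)
    {
        int uid = 1000;

        if (allowed(uid));   /* <-- stray semicolon: the if now guards nothing */
        {
            /* this block is just a plain compound statement, so it runs for
               every uid, not only for uid == 0 */
            printf("access granted to uid %d\n", uid);
        }
        return 0;
    }

Compilers can warn about the empty body (e.g. -Wempty-body), but a reviewer skimming a large diff can easily miss it.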

Do you remember the quote attributed to Linus Torvalds? "Given enough eyeballs, all bugs are shallow." Do you really believe the Linux source code is being reviewed for bugs?

By the way, how do you write tests for a kernel?

I like open source, but security implies a lot of different problems and open source is not always better for security.

FYI The IRB for University of Minnesota https://research.umn.edu/units/irb has a Human Research Protection Program https://research.umn.edu/units/hrpp where I cannot find anything on research on people without their permission. There is a Participant's Bill of Rights https://research.umn.edu/units/hrpp/research-participants/pa... that would seem to indicate uninformed research is not allowed. I would be curious how doing research on the reactions of people to test stimulus in a non-controlled environment is not human research.

One reviewer's comments on a patch of theirs from two weeks ago:

"Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith. If it's the latter[1], may I suggest the esteemed sociologists to fuck off and stop testing the reviewers with deliberately spewed excrements?"

https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...

The project is interesting, but how can they be so dumb as to post these patches under an @umn.edu address instead of using a new pseudonymous identity for each patch?!?

I mean, sneakily introducing vulnerabilities obviously only works if you don't start your messages by announcing you are one of the guys known to be trying to do so...

  • That's kind of the rub. They used a university email to exploit the trust afforded to them as academics and then violated that trust. As a result that trust was revoked. If they want to submit future patches they'll need to do it with random email addresses and will be subject to the scrutiny afforded random email addresses.

    • I doubt a university e-mail gives you significantly increased trust in the kernel community, since those are given to all students in all majors (most of whom are, of course, much less competent at kernel development than the average kernel developer).

      2 replies →

I am wondering: if Aditya hadn't responded the way he did (using corporate-lawyer language), would Greg have reached this conclusion? I am a bit surprised by the entitlement he was showing. Why would anyone use those words after sending a nonsense patch? What kind of defence did he think he had among a group of seasoned developers, other than being honest about his intentions? I wouldn't be surprised if his professor didn't even know what he was doing!

This seems like wanton endangerment. Kernels get baked into medical devices and never, ever updated.

I would be livid if I found that code from these "researchers" was running in a medical device that a family member relied upon.

I suspect the university will take some sort of action now that this has turned into incredibly bad press (although they really should have done something earlier).

WTF? They are experimenting with people without their consent? And they haven't been kicked out of the academic community????

Yikes, and what are they hoping to accomplish with this "research"?

I have a question for this community:

Insofar as this specific method of injecting flaws matches a foreign country's work done on U.S. soil - as many people in this thread have speculated - do people here think that U.S. three letter agencies (in particular NSA/CIA) should have the ability to look at whether the researchers are foreign agents/spies, even though the researchers are operating from the United States? For example, should the three letter agencies have the ability to review these researchers' private correspondence and social graphs?

Insofar as those agencies should have this ability, then, when should they use it? If they do use it, and find that someone is a foreign agent, in what way and with whom should they share their conclusions?

Now, one of the problems with research in general is that negative results don't get published. While in this case it probably resolved itself automatically, if they have any ethical standards then they'll write a paper about how it ended. Something like: "Our assumption was that it's relatively easy to deliberately sneak bugs into the Linux kernel, but it turns out we were wrong. We managed to get our whole university banned and all former patches from all contributors from our university, including those outside our research team, reverted."

Also, while their assumption is interesting, there surely must have been an ethical and safe way to conduct this, especially without allowing their bugs to slip into a release.

From an outsider, the main question is: does this expose an actual weakness in the Linux development model?

From what I understand, this answer seems to be a "yes".

Of course, it is understandable that GKH is frustrated, and if his community does not like someone pointing out this issue, that is OK too.

However, one researcher does not represent the whole university, so it seems immature to vent this at other, unrelated people just because you can.

  • The main issue is that the researchers are now untrustworthy because they conducted this experiment without permission. Essentially, the kernel dev team can no longer trust that any given patch from U of M isn't the same research team using a different email address to submit more malicious patches.

  • The university has an ethics board to review experiments, so what experiments get allowed reflects on the whole university.

    • If you have actually been to graduate school, you will know it is practically impossible to review details like this; otherwise nobody could do any real work.

      Besides, how would you test the idea without doing what they did? Can you show us a way?

I feel like a lot of people here did not interpret this correctly.

As far as is known, the garbage code was not introduced into the kernel. It was caught in the review process literally on the same day.

However, code from the same people has been merged previously, and it is not necessarily vulnerable. As a precaution, those older commits are also being reverted, since these people have been identified as bad actors.

  • Note that the commits which have been merged previously were also intentionally garbage and misleading code, just without any obvious way to exploit them. For example, https://lore.kernel.org/lkml/20210407000913.2207831-1-pakki0... has been accepted since April 7, and it's obviously a commit meant to _look_ like a bug fix while having no actual effect. (The line `rm = NULL;` and the line `if (was_on_sock && rm)` operate on different variables called `rm`.)

    That means that the researchers got bogus code into the kernel, got it accepted, and then said nothing for two weeks as the bogus commit spread through the Linux development process and ended up in the stable tree, and, potentially, in forks.
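
    To make the trick concrete, here is a minimal, self-contained sketch of that shadowing pattern (invented names, not the actual rds code): the inner declaration means the NULL assignment never affects the pointer that the later check reads.

      #include <stdio.h>
      #include <stdlib.h>

      struct msg { int refs; };

      static void put_msg(struct msg *m) { m->refs--; }

      int main(void)
      {
          struct msg *rm = malloc(sizeof(*rm));   /* outer rm, never NULL here */
          int was_on_sock = 1;
          rm->refs = 1;

          {
              struct msg *rm = NULL;   /* inner rm shadows the outer one, so the  */
              (void)rm;                /* "fix" only ever touches this local copy */
          }

          if (was_on_sock && rm)       /* still tests the outer, non-NULL rm...   */
              put_msg(rm);             /* ...so behaviour is completely unchanged */

          printf("refs=%d\n", rm->refs);
          free(rm);
          return 0;
      }

    Building with -Wshadow would flag the inner declaration, which is one reason such "fixes" rely on reviewers not running extra static checks.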

This is categorically unethical behaviour. Attempting to get malicious code into an open source project that powers a large part of the world's infrastructure — or even a small project — should be punished, in my view. The actors are known, and they have stated the behaviour was intentional.

I think the Linux Foundation should make an example of this.

"Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes"."

Sorry for being the paranoid one here, but reading this raises a lot of warning flags.

Regardless of their methods, I think they just proved the kernel security review process is non-existent, whether in the form of static analysis or human review. What's being done to address those issues?

UMN has some egg on their face, surely, but I think the IEEE should be equally embarrassed that they accepted this paper.

Seems like completely pointless "research." Clearly it wasted the maintainers' time, but also the "researchers" investigating something that is so obviously possible. Weren't there any real projects to work on?

> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Maybe not being nice is part of the immune system of open source.

  • On the other thread, I suggested this was an attack on critical infrastructure using a university as cover and that this was a criminal/counter-intellgence matter, and then asked whether any of these bug submitters also suggested the project culture was too aggressive and created an unsafe environment, to reduce scrutiny on their backdoors.

    Talk about predictive power in a hypothesis.

    • Given its ubiquity in so many industries, tampering with Linux kernel security sounds an awful lot like criminal sabotage under US law.

      Getting banned from contributing is a light penalty.

      5 replies →

    • "I suggested this was an attack on critical infrastructure using a university as cover and that this was a criminal/counter-intellgence matter"

      There is absolutely zero evidence of this. None. In my opinion it's baseless speculation.

      It's far more likely that they are upset over being called out, and are out of touch with regards as to what is ethical testing.

      6 replies →

  • "We're banning you for deliberately submitting buggy patches as an experiment."

    "Well if you're gonna be a jerk about it, I won't be sending any more patches."

    • "I can excuse r̶a̶c̶i̶s̶m̶ wasting OSS maintainers time, but I draw the line at rudeness!" - (community)

  • There is nothing about enforcing high standards that requires hostility or meanness. In this case the complaint that Greg is being intimidating is made entirely in bad faith. I don't think anyone else has a problem with Greg's reply. So this doesn't really come across as an example that demonstrates your "not being nice is necessary" view.

  • I think so. With a large project, I think a realist attitude that rises to the level of mean when there's bullshit around is somewhat necessary to prevent decay.

    If not, you get cluttered up with bad code and people there for the experience, like how Stack Overflow is lost to rule zealots there for the game, not for the purpose.

    Something big and important should be intimidating, and isn't a public-service babysitter...

    • It feels like a corollary of memetic assholery in online communities. Essentially the R0 [0] of being a dick.

      If I have a community, bombarded by a random number of transient bad actors at random times, then if R0 > some threshold, my community inevitably trends to a cesspool, as each bad actor creates more negative members.

      If I take steps to decrease R0, one of which may indeed be "blunt- and harshness to new contributors", then my community may survive in the face of equivalent pressures.

      It's a valid point, and seems to have historical support via evidence of many egalitarian / welcoming communities collapsing due to the accumulation of bad faith participants.

      The key distinction is probably "Are you being blunt / harsh in the service of the primary goal, or ancillary to the mission?"

      [0] https://en.m.wikipedia.org/wiki/Basic_reproduction_number
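
      As a toy model of that threshold argument (hypothetical numbers, purely illustrative): treat it as a branching process where each bad-faith member converts some average number of others per "generation"; above 1.0 the count grows without bound.

        #include <stdio.h>

        /* Toy branching-process model of the "R0 of being a dick" idea.
           The numbers are made up; only the shape of the curve matters. */
        int main(void)
        {
            double r0 = 1.3;    /* conversions per bad actor per generation */
            double bad = 1.0;   /* start with one transient bad actor       */

            for (int gen = 0; gen < 10; gen++) {
                printf("generation %d: ~%.1f bad-faith members\n", gen, bad);
                bad *= r0;      /* with r0 > 1.0 this grows without bound   */
            }
            return 0;
        }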

      4 replies →

    • I'm not sure why you think you have to be mean to avoid bad code. Being nice doesn't mean accepting any and all contributions. It just means not being a jerk or _overly_ harsh when rejecting.

    • You can create a strict, high functioning organization without being an asshole. Maintaining high standards and expecting excellence isn't an exercise in babysitting; it's an exercise in aligning contributors to those same standards and expectations.

      You don't need to do that by telling them they're garbage. You can do it by getting them invested in growth and improvement.

      1 reply →

  • > Maybe not being nice is part of the immune system of open source.

    Someone for whom being a bad actor is a day job will not get deterred by being told to fuck off.

    Being nasty might deter some low key negative contributors - maybe someone who overestimates their ability or someone "too smart to follow the rules". But it might also deter someone who could become a good contributor.

  • Being rude isn't going to discourage malicious actors, who are motivated by fame or wealth.

    If you ran a bank and had a bunch of rude bank tellers, you are only going to dissuade customers, not bank robbers.

    • Being nice is expensive, and sending bad code imposes costs on maintainers, so the sharp brevity of maintainers is efficient; in cases where the submitter has wasted the maintainer's time, the maintainer should impose a consequence by barking at them.

      Sustaining the belief that every submitter is an earnest, good, and altruistic person is painfully expensive and a waste of very valuable minds. Unhinged is unhinged and needs to be managed, but keeping up the farce that there is some imaginary universe in which the submitter is not wasting your time and working the process is wrong.

      I see this in architecture all the time, where people feign ignorance and appeal to this idea that you are obligated to keep up the pretense that they aren't being sneaky. Competent people hold each other accountable. If you can afford civility, absolutely use it, but when people attempt to tax your civility, impose a cost. It's the difference between being civil and being harmless.

    • A better analogy: Attempting to pee in the community pool to research if the maintainers are doing a good job of managing the hygiene standards.

  • Honestly, WTF would a "newbie and non-expert" have to do with sending KERNEL PATCHES?

    • Personally I don't think you can become an expert in Linux kernel programming without sending patches. So over the long term, if you don't let non-experts submit patches then no new experts will ever be created, the existing ones will die or move on, and there won't be any experts at all. At that point the project will die.

      1 reply →

    • Nobody is an expert on every subject. You could have PhD level knowledge of the theory behind a specific filesystem or allocator but know next to nothing about the underlying hardware.

      4 replies →

    • So they can tell companies "I am a contributor to the Linux kernel"... there are charlatans in every field. Assuming this wasn't malicious and "I'm a newbie" isn't just a cover.

      1 reply →

  • Attacking those critical of your questionable behavior and then refusing to participate further is a common response people have when caught red handed.

    This is just a form of "well I'll just take my business elsewhere!". Chances are he'll try again under a pseudonym.

  • Every time I have seen Theo from the OpenBSD project come down hard on someone, it was deserved.

  • But G. K-H's correspondence here is completely cordial and professional, and still gets all the results that were needed?

  • I disagree. I think it's important to be nice and welcoming to contributors, but the immune system should be a robust code of conduct which explicitly lists things like this that will result in a temporary or permanent ban.

  • I'm curious what sort of lawsuits might be possible here. I for one would donate $1000 to a non-profit trust formed to find plaintiffs for whatever possible cause and then sue the everloving shit out of the author + advisor + university as many times as possible.

    EDIT: University is fair game too.

  • Absolutely. The derision that people like Linus get for being “mean” to big corpos trying to submit shitty patches is totally misplaced.

  • Not being nice is always about protecting yourself. It's not always effective, though, and not always necessary.

  • Instead of not being nice, maybe Linux should adopt some sort of CI and testing infrastructure.

    • https://kernelci.org is a Linux Foundation project; there are others, but that's just the main one I know of offhand.

      The idea that "not being nice" is necessary is plainly ridiculous, but this post is pretty wild--effectively you're implying that they're just amateurs or something and that this is a novel idea nobody's considered, while billions and billions of dollars of business run atop Linux-powered systems.

      What they don't do is hand over CI resources to randos submitting patches. That's why kernel developers receive and process those patches.

    • Linux has plenty of testing machines, but testing the whole kernel is not as simple as you seem to think; there is no way to catch all possible cases, so not being nice remains important. And the greater part of the kernel is drivers; a driver needs its device in order to be tested, so CI for that is hard.

Linux maintainers should log a complaint with the University's ethics board. You can't just experiment on people without consent.

  • One of the other emails in the chain says they already did.

    > This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university...

  • I have a theory that while the university's ethics board may have people on it who are familiar with the myriad of issues surrounding, for instance, biomedical research, they have nobody on it with even the most cursory knowledge of open source software development. And nobody who has even the faintest idea of how critically important the Linux kernel is to global infrastructure.

    • They should also have people on it who are familiar with psychology research. The issues with this research are the types of things psychology research should find.

  • I agree. They are attempting to put security vulnerabilities into a security-critical piece of software that is used by billions of people. This is clearly unethical and unacceptable.

  • I always find the dichotomy we have regarding human subject experimentation interesting in the US. We essentially have two ecosystems of human subjects as to what is allowed and isn't: public and privately funded. The contrast is a bit stark.

    We have publicly funded rules (typically derived from or pressured by the availability of federal or state monies/resources) which are quite strict, have ethics and IRB boards, and cover even behavioral studies like this where no direct physical harm is induced but people's behaviors are still manipulated. This is the type of experiment you're referring to where you can't experiment on people without their consent (and by the way, I agree with this opinion).

    Meanwhile, we have privately funded research, which has a far looser set of constraints and falls under everyday regulations. You can't really physically harm someone or inject syphilis into them (the Tuskegee experiments), which makes sense, but when we start talking about human subjects in terms of data, privacy of data, or behavioral manipulation, most regulation goes out the window.

    These people could likely be reprimanded, even fired, and scarlet-lettered, making their careers going forward more difficult (maybe not so much in this specific case, because it's really not that harmful), but enough to screw them over financially and in terms of career growth.

    Meanwhile, some massive business could do this with their own funding and not bat an eye. Facebook could do this (I don't know why they would) but they could. Facebook is a prime example of largely unregulated human subject experimentation though. Social networks are a hotbed for data, interactions, and setting up experimentation. It's not just Facebook though (they're an obvious easy target), it's slews of businesses collecting data and manipulating it around consumers: marketing/advertising, product design/UX focusing on 'engagement', and all sorts of stuff. Every industry does this and that sort of human subject experimentation is accepted because $money$. Meanwhile, researchers from public funding sources are crucified for similar behaviors.

    I'm not defending this sort of human subject experimentation, it's ethically questionable, wrong, and should involve punishment. I am however continually disgusted by the double standard we have. If we as a society really think this sort of experimentation on human subjects or human subject data is so awful, why do we allow it to occur under private capital and leave it largely unregulated?

  • I'm not sure it is experimenting on people without consent, though it's certainly shitty and opportunistic of UoM to do this.

    Linux bug fixes are open to the public. The experiment isn't on people but on bugs. It would be like filing different customer support complaints to change the behavior of a company -- you're not experimenting on people but on the process by which that company interfaces with the public.

    I see no wrong here, including the Linux maintainers banning submissions from UoM, which is completely justified given the time wasted.

CS researchers at the University of Chicago did a similar experiment on me and other maintainers a couple years ago: https://github.com/lobsters/lobsters/issues/517

And similarly to U Minn, their IRB covered for them: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...

My experience felt really shitty, and I'm sorry to see I'm not alone. If anyone is organizing a broad response to redress previous abuses or prevent future abuse, I'd appreciate hearing about it, my email's on my profile.

This is supremely fucked up and I’d say is borderline criminal. It’s really lucky asshole researchers like this haven’t caused a bug that cost billions of dollars, or killed someone, because eventually shit like this will... and holy shit will “it was just research” do nothing to save them.

  • How come there's no ethical review for research that interacts with people? (I mean it's there in medicine and psychology, and probably for many economics experiments too.)

    edit: oh, it seems they got an exemption, because it's software research - https://news.ycombinator.com/item?id=26890084 :|

    • I can’t imagine it will stay that way forever. As more and more critical tools and infrastructure go digital, allowing people to just whack away at them or introduce malicious/bad code in the name of research is just going to be way too big of a liability.

      1 reply →

  • This is actually just the elitist version of "it's just a prank, bro!"

    And you're right, bugs in the linux kernel could have serious consequences.

  • Any organization that would deploy software that could kill someone, without carefully reviewing it for fitness for purpose (especially when the software states that it waives all liability and any guarantee that it is fit for purpose, as stated in sections 11 and 12 of the GPLv2 [1]), is criminally irresponsible. Though it is scummy to deliberately introduce defects into an OSS project, any defects that result in a failure to perform are both ethically and legally completely on whoever is using Linux in a capacity that can cost billions of dollars or kill someone.

    [1] https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

  • I agree, I think a more broad ban might be in order. I don't know that I'd want anyone from this "group" contributing to anything.

  • So aren’t there tests and code reviews before pushing them to the Stable code base?

    • Yes, there are. Will they find everything? No. Would I be pissed if this caused silent corruption of my filesystem, or some such crap that's hard to test, because this uni tried to push memory-misuse vulnerabilities into some obscure driver that is not normally tested much but that I use on my SBC farm? Yes.

      Maybe they had some plan for an immediate revert if a bogus patch got into stable, but some people update stable quickly, for good reason, and it's just not right to do this research this way.

  • I agree that it's bad behavior, but if you have billions of dollars resting on open-source infrastructure, you better know the liabilities involved.

  • It’s just a shame there is no mechanism in the license to withdraw permission for this so-called university to use Linux at all

    • It is by design; not having such a mechanism is one of the goals of free software: free for everyone, no exceptions.

      See JSON.org License which says it "shall be used for Good, not Evil" and is not considered free software.

      1 reply →

    • That is expressly the opposite goal of open source. If you arbitrarily say foo user cannot use your software, then it is NOT open source. That's more like source-available.

      Nobody would continue to use linux if they randomly banned people from using it, regardless of the reason.

      [side note] This is why I despise the term "open source". It obscures the important part of user freedom. The term "Free/libre software" is not perfect, but it doesn't obscure this.

There is so much disdain for unethical, ivory-tower thinking in universities, and this is not helping.

But allow me to pull a different thread: how liable are the professor, the IRB, and the university if there is any calamity caused by the known code?

What is the high level difference between their action, and spreading malware intentionally?

Out of curiosity, what would be an actually good way to poke at the pipeline like this? Just ask if they'd OK a patch w/o actually submitting it? A survey?

  • Probably ask the maintainers to consent and add some blinding so that the patches look otherwise legitimate.

  • Ask about this upfront, get consent, wait rand()*365 days and do the same thing they did. Inform people immediately after it got accepted.

  • This is a good question. You would recruit actual maintainers, [edit: or whoever is your intended subject pool] (who would provide consent, perhaps be compensated for their time). You could then give them a series of patches to approve (some being bug free and others having vulnerabilities).

    [edit: specifying the population of a study is pretty important. Getting random students from the University to approve your security patch doesn't make sense. Picking students who successfully completed a computer security course and got a high grade is better than that but again, may not generalize to the real world. One of the most impressive ways I have seen this being done by grad students was a user study by John Ousterhout and others on Paxos vs. Raft. IIRC, they wanted to claim that Raft was more understandable or led to fewer bugs. Their study design was excellent. See here for an example: https://www.youtube.com/watch?v=YbZ3zDzDnrw&ab_channel=Diego... ]

    • If an actual maintainer (i.e. an "insider") approves your bug, then you're not testing the same thing (i.e. the impact an outsider can have), are you?

      2 replies →

    • This wouldn't really be representative. If people know they are being tested, they will be much more careful and cautious than when they are doing "business as usual".

Sending those patches is just disgraceful. I guess they're using the .edu emails, so banning the university is a very effective action because someone will have to respond to it. Otherwise, the researchers would just quietly switch to other communities such as Apache or GNU. Who wants buggy patches?

This is not surprising to me given the quality of Minnesota universities. U of M should be banned from existence. I remember vividly how they'd break their budgets redesigning cafeterias, hiring low-quality 'professors' who refused to digitize paper assignments (they didn't know how), and artificially inflating dorm costs without access to affordable cooking (meal plans only). They have bankrupted plenty of students who were forced to drop out due to their policies on mental health. It's essentially against policy to be depressed or suicidal. They prey on kids in high school who don't at all know what they're signing up for.

Defund federal student loans. Make these universities stand on their own two feet or be replaced by something better.

The professor is going to give a ted talk in about a year talking about how he got banned from open source development and the five things he learned from it.

How is such a ban going to be effective? The "researchers" could easily continue their experiments using different credentials, right?

  • Arbitrary anonymous submissions don't go into the kernel in general. The point[1] behind the Signed-off-by line is to associate a physical human being with real contact information with the change.

    One of the reason this worked is likely that submissions from large US research universities get a "presumptive good faith" pass. A small company in the PRC, for an example, might see more intensive review. But given the history of open source, we trust graduate students maybe more than we should.

    [1] Originally legal/copyright driven and not a security feature, though it has value in both domains.

    • > A small company in the PRC, for an example, might see more intensive review.

      Which is a bit silly, isn't it? Grad students are poor and overworked, it seems easy to find one to trick/bribe into signing off your code, if you wanted to do something malicious.

      4 replies →

    • They do if the patch "looks good" to the right people.

      In late January I submitted a patch with no prior contributions, and it was pushed to drm-misc-next within an hour. It's now filtered its way through drm-next and will likely land in 5.13.

      2 replies →

  • The ban is aimed more at the UMN dept overseeing the research than at preventing continued "experiments." I imagine it would also make continued experiments even more unethical.

  • > How is such a ban going to be effective?

    It trashes the University of Minnesota in the press. What is going to happen is that the president of the university is now going to hear about it, as will the provost and the people in charge of doling out money. That will rapidly fix the professor problem.

    While people may think that tenured professors get to do what they want, they never win a war with a president and a provost. That professor is toast, and so are his researchers.

  • Any data collected from such "research" would be unpublishable and therefore worthless.

  • Their whole department/university just got officially banned. If they attempt to circumvent that, the authorities would probably be involved due to fraud.

  • Thus moving from merely unethical to actually fraudulent? Although from the email exchanges it seems they are already making fraudulent statements...

    At least it might prompt the University to take action against the researchers.

  • I believe this is so that the university treats the reports seriously. It's basically a "shit's broken, fix it". The researchers are probably under a lot of pressure from the rest of the university right now.

  • If you're a young hacker that wants to get into kernel development as a career, are you going to consider going to a university that has been banned from officially participating in development for arguably the most prolific kernel?

    The next batch of "researchers" won't be attending the University of Minnesota, and other universities scared of the same fate (missing out on tuition money) will preemptively ban such research themselves.

    "Effective" isn't binary, and this is a move in the right direction.

Let me play devil's advocate here though. This is absolutely necessary and shows the process in the kernel is vulnerable.

Sure, this is "just" a university research project this time. And sure, this is done in bad taste.

But there are legitimately malicious national actors (well, including the US govt and the various 3 letter agencies) that absolutely do this. And the national actors are likely even far more sophisticated than a couple of PhD students. They have the time, resources and energy to do this over a very long period of time.

I think on the whole, this is very net positive in that it reveals the vulnerability of open source kernel development. Despite, how shitty it feels.

  • Let me pile on top of that and note that if Linus had listened to his elders and used a Microkernel instead of the monolith, the kernel would be small enough that this kind of thing wouldn't be happening.

Sure. And we are well past the point where we need to develop real legal action and/or policy, with consequences, against this sort of thing.

We have an established legal framework to do this. It's called "tort law," and we need to learn how to point it at people who negligently or maliciously create and or mess with software.

What makes it difficult, of course, is that not only should it be pointed at jerk researchers, but at anyone who works on software, provably knows the harm their actions can or do cause, and does it anyway. This describes "black hat hackers," but also quite a few "establishment" sources of software production.

<conspiracy theory>This is intentionally malicious activity conducted with a perfect cover story</conspiracy theory>

Where does such "research" end... sending phishing mails to all US citizens to see how many passwords can be stolen?

Ah yes, showing those highly paid linux kernel developers how broken their system of trust and connection is! Great work.

Now if we can only find more open source developers to punish for trusting contributors!

Enjoy your ban.

Sorry if this comment seems off base, this research feels like a low blow to people trying to do good for a largely thankless job.

I would say they are violating some ideas of Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

I am honestly surprised anything like this can pass an ethics committee. The reputational risk seems huge.

For example, in economics departments there is usually a ban on lying to experiment participants. Many of them even explicitly explain to participants that this is a difference between economics and psychology experiments. The reason is that studying preferences is very important to economists, and if participants don’t believe that the experiment conditions are reliable, it will screw the research.

If the university was doing research then they should publish their findings on this most recent follow up experiment.

Suggested title:

“Linux Kernel developers found to reject nonsense patches from known bad actors”

As a side note to all of the discussion here, it would be really nice if we could find ways to take all of the incredible linux infrastructure, and repurpose it for SeL4. It is pretty scary that we've got ~30M lines of code in the kernel and the primary process we have to catch major security bugs is to rely on the experienced eyes of Greg KH or similar. They're awesome, but they're also human. It would be much better to rely on capabilities and process isolation.

Who funds this? They acknowledge funding from the NSF but you could imagine that it would benefit some other large players to sow uncertainty and doubt about Open Source Software.

Shouldn't the university researchers compensate their human guinea pigs with some nice lettuce?

I think it's a fair measure, albeit drastic.

What happens if any of those patches ends up in a kernel release?

It's like setting random houses on fire just to test the responsiveness of local firefighters.

I don't know how their IRB approved this, although we also don't know what details the researchers gave the IRB.

It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch.

If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.

This not only erodes trust in the University of Minnesota, but also erodes trust in the Linux kernel.

Imagine how downstream consumers of the kernel could be affected. The kernel is used for some extremely serious applications, in environments where updates are nonexistent. These bad patches could remain permanently in situ for mission-critical applications.

The University of Minnesota should be held liable for any damages or loss of life incurred by their reckless decision making.

This is insulting. The whole premise behind the paper is that open source developers aren't able to parse commits for malicious code. From a security standpoint, sure, I'm sure a bad actor could attempt to do this. But the fact that he tried this on the Linux kernel, an almost sacred piece of software IMO, and expected it to work takes me aback. This guy either has a huge ego or knows very little about those devs.

I'd be interested if there's a more ethical way to do this kind of research, that wouldn't involve actually shipping bugs to users. There certainly is some value in kind of "penetration testing" things to see how well bad actors could get away with this kind of stuff. We basically have to assume that more sophisticated actors are doing this without detection...

Using faked identities and faked papers to expose loopholes and issues in an institution is not new in the scientific community. The kernel community is presumably not immune to some of the common challenges facing any sizable institution, so some ethical hacking here seems reasonable.

However, doing it repeatedly with real names seems not helpful to the community and indicates a questionable motivation.

The ban seems rational, when viewed in the context of kernel development.

The benefit is twofold: (a) it's simpler to block a whole university than it is to figure out who the individuals are and (b) this sends a message that there is some responsibility at the institutional level.

The risk is that someone writing from that university address might have something that would be useful to the software.

Getting patches and pull requests accepted is not guaranteed. And it's asking a lot of kernel developers that they check not just for bad code but also for badly intended code.

I had a look at the research paper (https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...) and it saddens me to see such a thing coming out of a university. It's like a medical researcher introducing a disease to see whether it spreads quickly.

I can't help but think of the Sokal affair. But I'll leave the comparison to someone more knowledgeable about them both.

  • I'd bet that it was inspired by the Sokal affair. The difference in reaction is probably because people think the purity of Linux is important but the purity of obscure academic journals isn't. (They're probably right, because one fault in Linux will make the whole system insecure, whereas one dumb paper would go in next to the other dumb papers and leave the good papers unharmed.)

    The similarities are that reviewers can get sleepy no matter what they're reviewing. Troll doll QC staff get sleepy. Nuclear reactor operators get sleepy too.

    • > The similarities are that reviewers can

      Most people in the outgroup who know about the Sokal Affair but who know nothing about the journal they submitted to aren't aware of this, but Social Text was known to be not peer reviewed at the time. It's not that reviewers failed some test; there explicitly and publicly wasn't a review process. Everyone reading Social Text at the time would have known that and interpreted contents accordingly, so Sokal didn't demonstrate anything of value and was just being a jackass.

Is there a more readable version of this available somewhere? I really struggle to follow the unformatted mailing list format.

  • Scroll down to the "thread overview". There you can see the thread summarized in a tree layout, which makes more sense since asynchronous discussion isn't typically linear.

    The current message in the tree is highlighted with the indicator "[this message]"; you can see replies branch out below it and parent messages above it.

  • Just keep hitting the "next" link to follow the thread.

    • The next link is one hyperlink buried in the middle of the wall of text, and simply appends the new message to the existing one. It also differentiates between prev and parent?

      It's super unclear.

      2 replies →

Interesting tidbit from the prof's CV where he lists the paper, interpret from it what you will[1]:

> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits

> Qiushi Wu, and Kangjie Lu.

> To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

> Note: The experiment did not introduce any bug or bug-introducing commit into OSS. It demonstrated weaknesses in the patching process in a safe way. No user was affected, and IRB exempt was issued. The experiment actually fixed three real bugs. Please see the clarifications[2].

1: https://www-users.cs.umn.edu/~kjlu/

2: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

So FOSS is insecure if maintainers are lazy? This would hold true for any piece of software, wouldn't it? The difference here is that even though the "hypocrite commits" /were/ accepted, they were spotted soon after. Something that might not have happened quite as quickly in a closed source project.

I have to wonder what's going to happen to the advisor who oversaw this research. This knee-caps the whole department when conducting OS research and collaboration. If this isn't considered a big deal in the department, it should be. I certainly wouldn't pursue a graduate degree there in OS research now.

What I don't get: why not ask the board of the Linux Foundation whether they could attempt social-engineering attacks and get authorization? If the Linux Foundation saw value in it, they'd approve it, and who knows, maybe such tests (hiring pentesters to do social engineering) are done by the Linux Foundation anyway.

This seems like a pretty scummy way to do "research". I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low. It's not that they're doing this, I'm sure they're not the first to think of this (for research or malicious reasons), but having the gall to brag about it is a new low.

  • > having the gall to brag about it is a new low

    Even worse: They bragged about it, then sent a new wave of buggy patches to see if the "test subjects" fall for it once again, and then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

    This is thinly veiled and potentially dangerous bullying.

    • > This is thinly veiled and potentially dangerous bullying.

      Which itself could be the basis of a follow up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.

      There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved. First nonchalantly, and after getting called out and rejected they framed it as an attempt at bullying by the maintainers.

      If patches end up getting approved, everything about the situation is ripe for another paper. The initial rejection, attempting to frame it as bullying by the maintainers (which ironically, is thinly veiled bullying itself), impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).

      Hell, even if the attempt isn't successful you could probably turn it into another paper anyway. Wouldn't be as splashy, but would still be an interesting meta-analysis of techniques bad actors can use to exploit the human nature of the open source process.

      10 replies →

    • It isn't even bullying. It is just dumb?

      Fortunately, the episode also suggests that the kernel-development immune-system is fully-operational.

      8 replies →

    • There are some activities that should be "intimidating to newbies" though, shouldn't there? I can think of a lot of specific examples, but in general, anything where significant preparation is helpful in avoiding expensive (or dangerous) accidents. Or where lack of preparation (or intentional "mistakes" like in this case) would shift the burden of work unfairly onto someone else. Also, a "newbie" in the context of Linux system programming would still imply reasonable experience and skill in writing code, and in checking and testing your work.

    • I'm gonna go against the grain here and say I don't think this is a continuation of the original research. It'd be a strange change in methodology. The first paper used temporary email addresses, why switch to a single real one? The first paper alerted maintainers as soon as patches were approved, why switch to allowing them to make it through to stable? The first paper focused on a few subtle changes, why switch to random scattershot patches? Sure, this person's advisor is listed as a co-author of the first paper, but that really doesn't imply the level of coordination that people are assuming here.

      2 replies →

    • >then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

      As soon as I read that all sympathy for this clown was out the window. He knows exactly what he's doing.

    • Why not just call it what it is: fraud. They tried to deceive the maintainers into incorporating buggy code under false pretenses. They lied (yes, let's use that word) about it, then doubled down about the lie when caught.

    • This looks like a very cynical attempt to leverage PC language to manipulate people -- basically a social-engineering attack. They will surely try to present it as a pentest, but IMHO it should be treated as an attack.

  • >I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low.

    I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving to higher ethical standards in research. That doesn't mean that all academics follow this trend, but that unethical research is more likely to get your paper rejected from conferences.

    Submitting bugs to an open source project is the sort of stunt hackers would have done in 1990 and then presented at a defcon talk.

    • > I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations.

      IEEE seems to have no problem with this paper though.

      >>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits Qiushi Wu, and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

      from https://www-users.cs.umn.edu/~kjlu/

      47 replies →

  • Yup, it's basically stating the obvious: that any system based on an assumption of good faith is vulnerable to bad faith actors. The kernel devs are probably on the lookout for someone trying to introduce backdoors, but simply introducing a bug for the sake of introducing a bug (without knowing if it can be exploited), which is obviously much easier to do stealthily - why would anyone do that? Except for "academic research" of course...

    • > why would anyone do that?

      I can think of a whole lot of three letter agencies with reasons to do that, most of whom recruit directly from universities.

    • Academic research, cyberwarfare, a rival operating system architecture attempting to diminish the quality of an alternative to the system they're developing, the lulz of knowing one has damaged something... The reasons for bad-faith action are myriad, as diverse as human creativity.

    • In theory, wouldn't it be possible to introduce bugs that are seemingly innocuous when reviewed independently but that combine to form an exploit?

      Could a number of seemingly unrelated individuals introduce a number of bugs over time that form an exploit without being detected?
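
      It is possible in principle. As a minimal, hypothetical sketch (invented names, not from any real patch): one change relaxes a bounds check by one, another change stops clamping a caller-supplied length, and only the combination overflows the buffer.

        #include <stdio.h>
        #include <string.h>

        #define BUF_SZ 16

        /* "Patch A": the bound check used to be `len >= BUF_SZ`; relaxing it to
           `len > BUF_SZ` looks like a harmless style tweak in isolation. */
        static int copy_name(char *dst, const char *src, size_t len)
        {
            if (len > BUF_SZ)
                return -1;
            memcpy(dst, src, len);
            dst[len] = '\0';        /* writes one byte past dst when len == BUF_SZ */
            return 0;
        }

        /* "Patch B": a caller elsewhere stops clamping the length it passes in.
           Each change reviews fine alone; together they give a one-byte overflow. */
        int main(void)
        {
            char buf[BUF_SZ];
            const char *name = "0123456789abcdef";   /* exactly BUF_SZ bytes long */

            if (copy_name(buf, name, strlen(name)) == 0)
                printf("copied: %s\n", buf);
            return 0;
        }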

      5 replies →

  • I believe this violates research ethics hard, very hard. It reminds me of someone aiming to research children's mental development by studying the effects of inflicting mental damage. The subjects and the likely damages are not similar, but the approach and mentality are uncomfortably so.

    • Yep, the first thing I thought was: how did this get through the research ethics panel? (All research at my university has to get approval.)

      1 reply →

  • To me, this seems like a convoluted way to hide malicious actions as research (not the other way around). This smells of intentional vulnerability introduction under the guise of academic investigation. There are millions of other, less critical, open source projects this "research" could have been tested on. I believe this was an intentional, targeted attack, and it should be treated as such.

  • The "scientific" question answered by the mentioned paper is basically:

    "Can open-source maintainers make a mistake by accepting faulty commits?"

    In addition to being scummy, this research seems utterly pointless to me. Of course mistakes can happen; we are all human, even the Linux maintainers.

  • This observation may very well get downvoted to oblivion: what UMN pulled is the Linux kernel development version of the Sokal Hoax.

    Both are unethical, disruptive, and prove nothing about the integrity of the organizations they target.

    • Except that Linux actively runs on 99% of all servers on the planet. Vulnerabilities in Linux can literally kill people, open holes for hackers, spies, etc.

      Submitting a fake paper to a journal read by a few dozen academics is a threat to someone's ego. It is not in the same ballpark as a threat to IT infrastructure everywhere.

  • Agreed. Plus, I find the "oh, we didn't know what we were doing, you're not an inviting community" social engineering response completely slimy and off-putting.

  • Technically analogous to pen testing, except that it wasn't done at the behest of the target, as legal pen testing is. Hence it is indistinguishable from, and must be considered, a malicious attack.

  • Unfortunately, we cannot be sure it is low for today's academia. So many people working there, with nothing useful to do other than flooding the conferences and journals with papers. They are desperate for anything that could be published. Plus, they know that the standards are low, because they see the other publications.

  • Devil's advocate, but why? How is this different from any other white/gray-hat pentest? They tried to submit buggy patches; once approved, they immediately let the maintainers know not to merge them. Then they published a paper with their findings, which weak parts of the process they think are responsible, and which steps they recommend be taken to mitigate this.

    • Very easy: if it's not authorized, it's not a pentest or red team operation.

      Any pentester or red team considers their profession an ethical one.

      By the response of the Linux Foundation, this is clearly not authorized nor falling into any bug bounty rules/framework they would offer. Social engineering attacks are often out of bounds for bug bounty - and even for authorized engagements need to follow strict rules and procedures.

      I wonder if there are even legal steps that could be taken by the Linux Foundation.

    • You can read the (relatively short) email chains for yourself, but to try and answer your question: as I understood it, the problem wasn't only the patches submitted for the paper, it was the follow-up bad patches and the ridiculous defense. Essentially they sent patches that were purportedly the result of static analysis but did nothing, broke social convention by failing to signal that the patches were the result of a tool, and it was deemed indistinguishable from more attempts to send bad code and perform tests on the Linux maintainers.

  • There is no separate real world distinct from academia. Saying that scientists and researchers whose job it is to understand and improve the world are somehow becoming "increasingly disconnected from the real world" is a pretty cheap shot. Especially without any proof or even a suggestion of how you would quantify that.

  • How is this different from black hats contributing to general awareness of web security practices? Open source being considered secure just because it's up on GitHub is no different from plaintext HTTP GET params being considered secure just because "who the hell will read your params in the browser", which would still be the status quo if some hackers hadn't done the "lowest of the low" and shown the world this lesson.

  • LKML should consider not just banning @umn.edu addresses at the SMTP level but sinkholing the whole of the University of MN network address space. Demand a public apology and payment for compute for the next 3 years, or get yeeted.

As a user of Linux, I want to see this ban go further. Nothing from the University of MN, its teaching staff, or its current or past post-grad students.

Once they clean out the garbage in the Comp Sci department and their research committee that approved this experiment, we can talk.

I agree with most commenters here that this crosses the line of ethical research, and I agree that the IRB dropped the ball on this.

However, zooming out a little, I think it's kind of useful to look at this as an example of the incentives at play for a regulatory bureaucracy. Comments bemoaning such bureaucracies are pretty common on HN (myself included!), with specific examples ranging from the huge timescale of public works construction in American cities to the FDA's slow approval of COVID vaccines. A common request is: can't these regulators be a little less conservative?

Well, this story is an example of why said regulators might avoid that -- one mistake here, and there are multiple people in this thread promising to email the UMN IRB and give them a piece of their mind. One mistake! And when one mistake gets punished with public opprobrium, it seems very rational to become conservative and reject anything close to borderline to avoid another mistake. And then we end up with the cautious bureaucracies that we like to complain about.

Now, in a nicer world, maybe those emails complaining to the IRB would be considered valid feedback for the people working there, but unfortunately it seems plausible that it's the kind of job where the only good feedback is no feedback.

In Ireland, during the referendum to repeal the abortion ban, there were very heated arguments, bot Twitter accounts, and general toxicity. For the sake of people's sanity, a "Repeal Shield" was implemented that blocked bad-faith actors.

This news makes me want to implement my own block on the same contributors for any open source I'm involved with. At the end of the day, their ethics are their ethics. Those ethics are not Linux-specific; Linux was just the high-profile target in this instance. I would totally subscribe to or link to a group-sourced file similar to a README.md or CONTRIBUTORS.md (CODERS_NON_GRATA.md?) that pulled such things.

  • I think that is a sensible way to deal with this problem. The linux community is based on trust (as are a lot of other very successful communities), and ideally we trust until we have reason not to. But at that point we do need to record who we don't trust. It is the same in academia and sports.

    • The tech community, especially in sub-niches, is far smaller than people think it is. It's easy to feel like it's a sea of tech when it's all behind a screen, but reputation is a powerful thing in both directions.

      There is also a more nuclear option which I'm specifically not advocating for quite yet here but I will note none the less;

      We're starting to see discourse about companies co-opting open source projects for their own profit (cough, Amazon) and about how license agreements limit them more than regular contributors. That has come about, at its core, because of a demonstrated trend of bad faith combined with a larger surface area of contact with society. I could foresee a future trend where individuals who act in bad faith are also excluded from using open source projects through their licenses. Imagine if the license for some core infrastructure tech like a networking library or the Linux kernel banned "Joe Blackhat" from professional use of it. He still could use it, but in reputable companies, particularly larger ones with a legal department, that person would be more of a liability than they are worth. There could be huge professional consequences of a kind that do not really exist in the industry today.

I'd really like to now review similar patches in FreeRTOS, FreeBSD, and such. Their messages and fixes all follow a certain scheme, which should be easy to detect.

At least both of those appear to be free of such @umn.edu commits with fantasy names.

@gregkh

These patches look like bombs under bridges to me.

Do you believe that some open source projects should have legal protection against such actors? The Linux Kernel is pretty much a piece of infrastructure that keeps the internet going.

Usually I am very skeptical of "soft" subjects like the humanities, but clearly this is unethical research.

In addition to wasting people's time, you are potentially messing with software that runs the world.

  • Considering how often you post about free speech and censorship, maybe you would find some interesting perspectives within the humanities.

They are rightfully worried about old commits. Maybe it's time they switched to a more secure language that can more easily detect malicious code. To be honest, C seems critically insecure without a whole lot of work. If even a bunch of experts struggle, it seems like they need better tools. Especially since Linux is so important, and there are a lot more threats, Rust seems like a good solution.

Apart from perhaps some critical unsafe parts, which should get a lot of attention, requiring everything else to be safe/verified to some extent surely is the answer.

This was absolutely the right move. Smells really fishy given the history. I imagine this is happening in other parts of the community (attempting to add malicious code), albeit under a different context.

Is introducing bugs into computer systems on purpose like this in some way illegal in the USA? I understand that Linux is run by a ton of government agencies as well; would they take an interest in this?

I don't see the difference between these and other 'hackers', white-hat, black-hat etc. The difference I see is the institution tested, Linux, is beloved here.

Usually people are admired here for finding vulnerabilities in all sorts of systems and processes. For example, when someone submits a false paper to a peer-reviewed journal, people around here root for them; I don't see complaints about wasting the time and violating the trust of the journal.

But should one of our beloved institutions be tested - now it's an outrage?

  • The outrage does seem out of place to me. I think it's fair (even reasonable) for the kernel maintainers to ban those responsible, but I'm not sure why everyone here is getting so offended about fairly abstract harms like "wasting the time of the maintainers".

  • I don't think what has been done here is comparable to other forms of "finding vulnerabilities". Linux and everyone else would be happy if people found vulnerabilities in their code and reported them back. And it is not as if the Linux team is unaware of this "vulnerability".

    This is more comparable to DDoSing a web server to test its capability to handle a DDoS. They are aware of the issue. And they told you not to do it when you did it before. You just don't waste other people's time/money like that unless they give you permission.

CS department security research is nearly universally held to be out of scope for IRBs. This isn't entirely bad: the IRB process that projects are subjected to is so broken that it would be a sin to bring that mess onto anything else.

But it means that 'security' research regularly does ethically questionable stuff.

IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable) the university practices will not substantially change.

  • Security research has its own standards of ethics, and these researchers violated those standards.

    1. You don't conduct a penetration test without permission to do so, or without rules of engagement laying out what kinds of actions and targets are permitted. The researchers did not seek permission or request RoE; they tried to ask forgiveness instead.

    2. You disclose the vulnerabilities immediately to the software's developers, and wait a certain period before revealing the vulns to the public. While the researchers did immediately notify the kernel dev team in 3 cases, there's apparently another vulnerable commit that the researchers didn't mention in their paper and did not tell the kernel dev team about, which was still in the kernel as of the paper's publish date.

    Apparently the IRB team that reviewed this project decided that no permission was needed because the experiment was on software, not people--even though the whole thing hinged on human code review practices. It's evident that the IRB doesn't know how infosec research should be conducted, how software is developed, or how code review works, but it's also evident that the researchers themselves either didn't know or didn't care about best practices in infosec.

What an effing idiot! And then turning around and claiming bullying! At this point I'm not even surprised. Claiming victimhood is a very effective move in US academia these days.

Actually I do understand BOTH sides, BUT:

The way the university did this tests and the reactions afterwards are just bad.

What I see here, and what the Uni of Minnesota seems to have neglected, is:

1. Financial damage (time is wasted)

2. The ethics of experimenting on human beings

As a result, the University should give a clear statement on both and should donate a generous amount of money as compensation for (1).

For part (2), a simple but honest apology can do wonders!

---

Having said that, I think there are other, ethically better ways to take these measurements.

Researcher sends bogus papers to journal/conference, gets them reviewed and approved, uses that to point how ridiculous the review process of the journal is => GREAT JOB, PEER REVIEW SUCKS!

Researcher sends bogus patches to bazaar-style project, gets them reviewed and approved, uses that to point how ridiculous the review process of the project is => DON'T DO THAT! BAD RESEARCHER, BAD!

  • One potentially misleads readers of the journal, the other introduces security vulnerabilities into the world’s most popular operating system kernel.

    • "Misleading readers of a journal" might actually cause more damages to all of humanity (see https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt) than inserting a security vulnerability (that is likely not even exploitable) in a driver that no one actually enables (which is likely why no one cares about reviewing patches to it, either).

      Though to be fair, it is also the case that only the most irrelevant journals are likely to accept the most bogus papers. But in both cases I see no reason not to point it out.

      The two situations are much closer than you think. The only difference I see is in the level of bogusness.

  • OK? If somebody else does something ethically dubious, does that make all ethically dubious behaviours acceptable somehow? How does a totally separate instance of ethical misconduct impact this situation?

I'm not surprised.

I'm repeating myself, but I'm pretty certain the NSA or other intel agencies (Israel, especially, considering their netsec expertise) have already done it in one way or another.

Do you remember the semicolon that caused a big wifi vuln? Hard to really know if it was just a mistake.

I'm going full paranoiac here, but anyway.

You can also imagine the NSA submitting patches to the windows source code, without the knowledge of microsoft, and so many other similar scenarios (android, apple, etc)

I think Greg KH would have been wise to add a time limit on this ban. Make it a 10-year block, for example, rather than one with no specific end-date.

Imagine what happens 25 years from now as some ground-breaking security research is being done at Minnesota, and they all groan: "Right, shoot, back in 2021 some dumb prof got us banned forever from submitting patches".

Is there a mechanism for the University of Minnesota to appeal, someday? Even murderers have parole hearings, eventually.

"It's just a prank, bro!"

Incredible that the university researchers decided this was a good idea. Has no one in the university voiced concern that perhaps this is a bad idea?

plonk

Aaaaand into the kill file they go.

Been a while since I last saw a proper plonk.

  • Can you link to any others? Personal curiosity.

    • USENET is filled with them.

      People would reach a point where further conversation makes no sense.

      So, one would make a kill file entry, and "plonk" basically communicated that: smacking the carriage-return/Enter key with gratifying authority on the user who had earned their place in the kill file, not to be heard from again.

      The conversation is over, sort of like a block works today.

      Edit: See in the definition I linked where plonk is the sound of some poor soul hitting the bottom of a kill file? I think that is debatable, depending on perspective. The peeps who mentored me onto the net at the beginning explained it as that gratifying press of the CR/LF [ENTER] key.

      The sentiment is the same though.

      ---

      plonk /excl.,vt./

      [Usenet: possibly influenced by British slang `plonk' for cheap booze, or `plonker' for someone behaving stupidly (latter is lit. equivalent to Yiddish `schmuck')] The sound a newbie makes as he falls to the bottom of a kill file. While it originated in the newsgroup talk.bizarre, this term (usually written "plonk") is now (1994) widespread on Usenet as a form of public ridicule.

      ----

      This particular plonk is proper, not just as an insult, which is the general use case, because the person who earned the "plonking" did so in spectacularly stupid fashion, in the opinion of the "plonker."

      Total classic!

      On some older TTYs, the two asterisks denoted bold text; here HN uses them for italics.

      Plain text would show the asterisks as the linked exchange showed to us.

Here’s a (perhaps naively) optimistic take: by publishing this research and showing it to lawmakers and industry leaders, it will sound alarms on a serious vulnerability in what is critical infrastructure for much of the tech industry and public sector. This could then lead to investment in mitigations for the vulnerability, e.g. directly funding work to proactively improve security issues in the kernel.

It seems like this debacle has created a lot of extra work for the kernel maintainers. Perhaps they should ask the university to compensate them.

I think the root of the problem can be traced back to the researcher's erroneous claim that "This was not human research".

Conscripting a non-volunteer into working for your experiment, and attempting to sabotage the product of their work, surely isn't ethical research.

And yesterday there was another bit of Linux news by Greg KH trending on Reddit. Nice to see him stepping into the spotlight more :)

If you really wanted to research how to get malicious code into the highest-profile projects like Linux, the social engineering bit would be the most interesting part.

Whether some unknown contributor can submit a bad patch isn't so interesting for this type of project. Knowing the payouts for exploits, the question is: how much money would one bad reviewer want to let one past?

I have to question the true motivations behind this. Just a "mere" research paper? Or is there an ulterior motive, such as undermining Linux kernel development, taking advantage of the perceived hostility of the LKML to make a big show of it, and castigating and denouncing those elitist Linux kernel devs?

So I hear tinfoil is on sale, mayhaps I should stock up.

Am I missing how these patches were caught/flagged? Was it an automated process, or someone manually looking at the pull requests?

How is this any different to littering in order to research if it gets cleaned up properly? Or like dumping hard objects onto a highway to research if they cause harm before authorities notice it?

I mean, the Kernel is now starting to run in cars and even on Mars, and getting those bugs into stable is definitely no achievement one should be proud of.

Reminds me of the Tuskegee Syphilis Study.

Sure we infected you with Syphilis without asking for permission first, but we did it for science!

So, next paper would be like "On the Effectiveness of Using Email Domain Names for Kernel Submission Bans"

They just wasted the community's time. No wonder Linus Torvalds goes batshit crazy on these kinds of people!

This type of research just looks like: let's prove that people die when killed, by actually killing someone.

After they successfully got buggy patches in, did they submit patches to fix the bugs? And were they careful to make sure their buggy patches didn't make it into stable releases? If not, then they risked causing real damage and are at least toeing the line of being genuinely malicious.

The tone of Aditya Pakki's message makes me think they would be very well served by reading 'How to Win Friends & Influence People' by Dale Carnegie.

This is obviously the complete opposite of how you should be communicating with someone in most situations let alone when you want something from them.

I have sure been there though so if anything, take this as a book recommendation for 'How to Win Friends & Influence People'.

  • His email reminds me of the way politicians behave in my country (India): play the victim and start dunking.

  • I've seen this book mentioned a couple of times on HN now. I'm curious: did you learn about this book from the fourth season of Fargo? This is where I encountered it first.

    • Not the person you're asking, but the book is over 80 years old and one of the best selling books of all time. Not exactly the same, but it's like asking where they heard about the Bible. It's everywhere.

      1 reply →

    • It's a common recommendation for many decades now, you aren't going to find any one particular vector.

    • I think it's just a common book to recommend people who seem to be lacking in the "social communication" department. I would know, I got it gifted to me when I was young, angsty and smug.

    • As others have stated it is everywhere. The title always scared me away from it a little, but then I saw it come by in the intro of Netflix’s “The Politician” and I thought I’d give it a chance. Especially after I found out how old it is.

    • The book is very famous - it launched the "self-help" genre. I've never read it, but I've heard it is a fairly shallow guide to manipulating people to get what you want out of them.

      3 replies →

Are they legally liable in any way for including deliberate flaws in a piece of software they know is widely used, thereby creating an attack surface for _any_ attacker with the skill to do so and putting private and public infrastructure at risk?

Aditya Pakki should be banned from any open source projects. Open source depends on contributors who collectively try to do the right thing. People who purposely try to veer projects off course should face real consequences.

What a waste of talent... these kids know how to program, but instead of working on useful projects they’re wasting everyone’s time. It’s really troubling that any professor would have proposed or OK’d this.

The UMN had worked on a research paper dubbed "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits".

I guess it's not as feasible as they thought.

Let's add to the question "what is the quality of the code review process in Linux?" another one: "what is the quality of the ethical review process at universities?".

I think there should be a real world experiment to test it.

The most recent possible double-free was from a bad static analyzer, wasn't it? That could have been a good-faith commit, which is unfortunate given the deliberate bad-faith commits prior.

After reading many of the comments I agree with the decision to ban the University. Why? You are free to choose your actions. You are not free to choose the consequences of your actions.

I've been thinking, what would happen if someone intentionally hacked a university and erased all data from all their computer systems, and then lied to their faces about it?

New white paper due soon

This raises the question: "has there been state-sponsored efforts to overwhelm open source maintainers with the intent of sneaking in vulnerabilities to software applications?"

"We'd like to insert malicious code into the software that runs countless millions of computers and see if they figure it out"

I don't think this was the pitch they gave to their IRB.

The replies here have been fascinating to read. Yes, it's bad that subterfuge was engaged in against the kernel devs. But don't the many comments here expressing outrage at the actions of these researchers sound exactly like the kind of outrage commonly expressed by those in power when their misdeeds are exposed? e.g. Republican politicians outraged at a "leaker" who has leaked details of their illegal activity. It honestly looks to me like the tables have been turned here. Surely the fact that the commonly touted security advantages of OSS have been shown to be potentially fictitious is at least as worrying as the researchers' ethics breaches?

  • One very good security practice is that if you find that you have a malicious contributor, you fire that contributor. The "misdeeds" were committed by the UMN researchers, not by the Linux maintainers.

  • Vulnerabilities in OSS are fixed over time. They are fixed by people running the code and contributing back, by fuzzing efforts, by testing a release candidate.

    The difference between OSS and closed source is not the number of reviewers for the initial commit, it's the number of reviewers over years of usage.
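
    A minimal sketch of what one of those fuzzing efforts can look like in practice (parse_header and its behavior are invented; the entry point is the standard libFuzzer one): the fuzzer keeps feeding mutated inputs through a harness like this, and the sanitizers flag any memory error an input triggers, which is one way bugs surface long after the initial review.

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        /* Hypothetical parser under test. */
        static int parse_header(const uint8_t *data, size_t size)
        {
            char tag[8];

            if (size < sizeof(tag))
                return -1;
            memcpy(tag, data, sizeof(tag));
            return tag[0] == 'L';
        }

        /* libFuzzer entry point: built with
         * `clang -fsanitize=fuzzer,address harness.c`, the fuzzer calls
         * this repeatedly with mutated inputs. */
        int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
        {
            parse_header(data, size);
            return 0;
        }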

I am baffled by the immaturity and carelessness of experimenting on a kernel that millions of critical machines use, and I applaud the maintainers for dealing swiftly with this.

Looks like vandalism masquerading as “research”.

Greg’s response is totally right.

I thought there were ethical standards for research: a good study should not knowingly do harm, or should at the very least make those involved aware of their participation.

While it is easy to consider this unsportsmanlike, one might view it as a supply chain attack. I don't particularly support this approach, but consider for a moment that as a defender (in the security team sense), you need to be aware of all possible modes of attack and compromise. While the motives in this particular case are clear, ascribing any particular motive to attackers in general is likely to miss the mark.

To the supply chain type of attacks, there isn't an easy answer. Classical methods left both the SolarWinds and Codecov attacks in place for way too many days.

Could someone clarify: this made it to the stable branch, so does that mean that it made it out into the wild? Is there action required here?

A lot of people seem to consider this meaningless and a waste of time. If we disregard the problems with the patches reaching stable branches for a second (which clearly is problematic), what is the difference between this and companies conducting red team exercises? It seems to me a potentially real and dangerous attack vector has been put under the spotlight here. Increasing awareness around this can't be all bad, particularly in a time where state-sponsored cyber attacks are getting ever more severe.

Now I'm not one for cancel culture, but fuck these guys. Put their fuckin' names out there to get blackballed. Bunch of clowns.

So they A/B tested the kernel maintainers and got banned. What about the kernel security? Is the patch process getting improved?

Is getting reactions from HN also part of their experiment and should we expect our comments to be written about in their paper?

logged into my ancient hn account just to tell all of you that pentesting without permission from higher-ups is a bad idea

yes, this is pentesting

If the researchers' desired outcome is more vigilance around patches and contributions, I guess they might achieve that outcome?

Could this have also happened to other open source projects like FreeBSD, OpenBSD, etc., or other popular open source software?

  • This is a really important question, and the way to answer it is for someone to try it.

Methinks that if you hold a degree from the University of Minnesota, it would be a good idea to let your university know what you think of this.

  • > it would be a good idea to let your university know what you think of this.

    Unless there's something particularly different about University of Minnesota compared to other universities, something tells me that they won't give a crap unless you're a donor.

  • Not a great selling point for the CS department.

    "Yes, we are banned from submitting patches to Linux due to past academic research and activities of our PhD students. However, we have a world-class program here."

  • I'm trying to figure out how to do that. How can I get my degree changed? Will the university of (anyplace) look at my transcript and let me say I have a degree from them without much effort? I learned a lot, and I generally think my degree is about as good as any other university. (though who knows what has changed since then)

    I'm glad I never contributed again as an alumni...

    • If it were my univ, I'd send a personal email to the dean. https://cse.umn.edu/college/office-dean#:~:text=Dean%20Mosta....

      If enough grads do that, I would expect the university to do something about it, and that would send a message. It's about where the money comes from in the end (tuition, grants, research partnerships, etc.); IMO none of these sources would be very happy about what might amount to defacement of public property and wasting the time of people who are working for the good of mankind by providing free tools (a bicycle for the mind) to future generations.

      There is no novelty in this research; bad actors have been trying to introduce bad patches for as long as open source has been open.

      1 reply →

Well, we get to look at the real results of this in real time, as they get their whole organization banned from the kernel.

Does the University of Minnesota have an ethical review board or research ethics board? They need to be contacted ASAP.

They seem to be teaching social engineering. Using a young, possibly foreign student as a front is a classy touch.

  • The author of the patches, Aditya Pakki, is a second-year PhD student, as per his website https://adityapakki.github.io/about/

    He himself is to blame for submitting these kinds of patches and claiming innocence. If a person as old as him can't figure out what's ethical and what's not, then that person deserves what comes out of actions like these.

Is there some tool that provides a nicer view of these types of threads? I find them hard to navigate and read.

To me it was akin to spotting volunteers cleaning up streets and, right after they passed, dumping more trash on the same street to see if they come and clean it up again. Low blow if you ask me.

Experiment: let's blow up the world to find out who might stop us so we can write a paper about it.

Their research could have been an advisory email or a blog post for the maintainers, without the nasty experiments. If they really cared for OSS they would have collaborated with the maintainers and persuaded them to use their software tools for patch work. There is research for the good of all, and there is research for selfish gains. I am convinced this is the latter.

It's funny. When someone like RMS or ESR or (formerly) Torvalds is "disrespectful" to open source maintainers, this is called "tough love", but when someone else does it, it's screamed about like it's some kind of high crime, with calls to permanently cancel access for all people even loosely related to the original offender.

  • I don't see how this is related. Being rude in tone, and wasting someone's time, are different things. You make it sound like they are the same.

    But the opposite of what you propose is true. The maintainers are annoyed by others wasting their time in other cases as well as in this case - it's coherent behavior. And in my opinion, it's sensible to be annoyed when someone wasted your time - be it by lazily made patches or by intentionally broken patches.

    • I'm not the one who is making them sound like the same thing. There are literally people in this thread, saying that "wasting time" is being "disrespectful" to the maintainers.

Make an ethics complaint with the state and get their certification and charter pulled.

  • That's a worse death sentence than SMU's for paying players. Even the NCAA didn't kill the school, just the guilty sports program. You're asking the state to pull an entire university's charter for a rogue department? Sure, pull the CS department, but I'm sure the other schools at the university had absolutely zero culpability.

    • As a graduate of the UMN, other departments have had their share of issues as well. When I was there they were trying to figure out how to deal with a professor selling medical drugs without FDA permission (the permission did exist in the past, and the drug probably was helpful, but FDA approval was not obtained).

      I suspect that all of the issues I'm aware of are within normal bounds for any university of that size. That is, if you kill the UMN, you also need to kill Berkeley, MIT, and Harvard for their issues of similar magnitude that we just by chance haven't heard about. This is a guess though; I don't know how bad things are.

      2 replies →

First thing that comes to mind is The Underhanded C Contest [0], where contestants try to introduce code that looks harmless but is actually malicious, and that even if caught should look like an innocent bug at worst.

[0] http://www.underhanded-c.org
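
In that spirit, here is a small invented illustration (not an actual contest entry): the comparison reads like routine input handling, yet an empty input, or any prefix of the real token, passes the check, and if it were ever noticed it would look like an honest API mistake rather than a backdoor.

    #include <stdio.h>
    #include <string.h>

    /* Check a supplied token against the stored one. Comparing only
     * strlen(input) bytes looks like cautious handling of short inputs,
     * but an empty input makes strncmp compare zero bytes and "match". */
    static int token_matches(const char *stored, const char *input)
    {
        return strncmp(stored, input, strlen(input)) == 0;
    }

    int main(void)
    {
        const char *stored = "s3cret-token";

        printf("%d\n", token_matches(stored, "s3cret-token")); /* 1: expected */
        printf("%d\n", token_matches(stored, "wrong-token"));  /* 0: expected */
        printf("%d\n", token_matches(stored, ""));             /* 1: the hole */
        return 0;
    }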

I wonder if they can be sued (by the Linux Foundation, maybe) for that...

Could this have just been someone trying to cover up being a mediocre programmer in academia, by framing it through a lens that would work in the academy, with some nonsense, vaguely liberal-arts-sounding social-experiment premise?

Is banning an entire university's domain from submitting to a project due to the actions of a few of its members an example of cancel culture?

  • If the university itself is actively promoting unethical behavior, then no, it isn't "cancel culture". That term is reserved for people or groups who hold unpopular opinions, and this is not that.

They should be reported to the authorities for attempting to introduce security vulnerabilities into software intentionally. This is not ok.

  • What authorities would that be? The Department of Justice? The same DoJ that is constantly pushing for backdoors to encryption? Good luck with that! The "researchers" just might receive junior agent badges instead.

  • Maybe it was those very authorities who wanted them there. Lots of things have gotten patched, and the backdoors don't work as well as they used to... gotta get clever.

  • I'm a PhD student myself. What he did is not okay! We study computer science to do good not to harm.

  • What these researchers did was clearly and obviously wrong, but is it actually illegal?

    • It should be reported anyways. This might be only some small part of the malfeasance they're getting up to.

Uff da! I really do hope the administrators at University of Minnesota truly understand the gravity of this F* up. I doubt they will though.

Or some enemy state pawn(s) trying to add backdoors and then use the excuse of "university research paper" should they get caught?

This is the kind of study (unusual for CS) that requires IRB approval. I wonder if they thought to seek approval, and if they received it?

If it was up to me, I would

1) send ethics complaint to the University of Minnesota, and

2) report this to FBI cyber crime division.

so basically they demonstrated that the oss security model, as it operates today, is not working as it had been previously hoped.

it's good work and i'm glad they've done it, but that's depressing.

now what?

The full title is "Linux bans University of Minnesota for sending buggy patches in the name of research" and it seems to justify the ban. It's not as though these students were just bad programmers; they were intentionally introducing bugs, performing unethical experimentation on volunteers and members of another organization without their consent.

Unfortunately even if the latest submissions were sent with good intentions and have nothing to do with the bug research, the University has certainly lost the trust of the kernel maintainers.

  • The full title should actually be "Linux bans University of Minnesota for sending buggy patches in the name of research and thinking they can add insult to injury by playing the victims"

    > I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

    > These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

    > Obviously, it is a wrong step but your preconceived biases are so strong that you make allegations without merit nor give us any benefit of doubt. I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

    This idiot should be banned from the University, not from the linux kernel.

  • From the looks of the dialogue, it was all of the above with the addition of lying about what they were up to when confronted. I would think all of this constitutes a serious violation of any real university's research ethics standards.

I just want you to know that it is extremely unethical to create a paper where you attempt to discredit others by using your university's reputation to create vulnerabilities on purpose.

I back your decision and fuck these people. I will additionally be sending a strongly worded email to this person, their advisor, and whoever's in charge of this joke of a computer science school. Sometimes I wish we had the ABA equivalent for computer science.

  • Just write to irb@umn.edu and ask a) whether this was reviewed and b) who approved it. It seems they have violated the Human Research Protection Program Plan anyway.

    The researchers should not have done this, but ultimately it's the faculty that must be held accountable for allowing this to happen in the first place. They are a for-profit institution and should not get away with harassing people who are contributing their personal time. So nail them to the proverbial cross, but make sure the message is heard by those who slipped up (not the researchers, who should have been stopped before it happened).

  • I completely disagree with this framing.

    A real malicious actor is going to be planted in some reputable institution, creating errors that look like honest mistakes.

    How do you test whether the process catches such vulnerabilities? You do it just the way these researchers did.

    Yes, it creates extra homework for some people with certain responsibilities, that doesn't mean it's unethical. Don't shoot the messenger.

    • > A real malicious actor

      They introduced a real vulnerability into a codebase used by billions, lowering worldwide cybersecurity, so they could jerk themselves off over a research paper.

      They are a real malicious actor and I hope they get hit by the CFAA.

      3 replies →

    • No. There are processes to do such sorts of penetration testing. Randomly sending buggy commits or commits with security vulns to "test the process" is extremely unethical. The linux kernel team are not lab rats.

      19 replies →

    • It is unethical. You cannot experiment on people without their consent. Their own university has explicit rules against this.

The previous discussion seems to have suddenly disappeared from the front page:

https://news.ycombinator.com/item?id=26887670

  • Edit: actually it was standard moderation but in a bit of an unclear way - see https://news.ycombinator.com/item?id=26894033.

    We made a mistake. I'm not sure what happened but it's possible that we mistook this post for garden-variety mailing-list drama. A lot of that comes up on HN, and is mostly not interesting; same with Github Issues drama.

    In reality, this post is clearly above that bar—it's a genuinely interesting and significant story that the community has a ton of energy to discuss, and is well on topic. I've restored the thread now, and merged in the dupe that was on the front page in its stead.

    Sorry everybody! Our only priority is to serve what the community finds (intellectually) interesting, but moderation is guesswork and it's not always easy to tell what's chaff.

It's already being discussed on HN [1] but for some reason it's down to the 3rd page despite having ~1200 upvotes at the moment and ~600 comments, including from Greg KH. (And the submission is only 5 hours old.)

[1] https://news.ycombinator.com/item?id=26887670

  • Sorry, we got that wrong. Fixed now.

    Edit: turns out it was just that there were two different threads on the frontpage about this story and a moderator downweighted the earlier one. That's standard moderation. Usually we merge the threads (and I've since done so) but I'm the only mod who currently does that and I wasn't online yet.

  • This is another example of HN's front page submission getting aggressively moderated for no good reason. It's been happening a lot lately.

I wish the title were clearer. Linux bans University of Minnesota for sending buggy patches on purpose.

  • The term of art for an intentional bug that deliberately introduces a security flaw is a "trojan" (from "Trojan Horse", of course). UMN trojaned the kernel. This is indeed just wildly irresponsible.

Yes, and robbing a bank to show that the security is lax is totally fine because the real criminals don't notify you before they rob a bank.

Do you understand how dumb that sounds?

Since there is bound to be a sort of trust hierarchy in these commits, is it possible that bonafide name-brand university people/email addresses come with an imprimatur that has now been damaged generally?

Given the size and complexity of the Linux (/GNU) codeworld, I have to wonder if they are coming up against (or already did) the practical limits of assuring safety and quality using the current model of development.

I was expecting this to be about introducing strange bugs and then claiming to fix them in order to get a publication. But the publication is titled "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits"! So I guess it's less feasible than they imagined, at least in this instance.

lol this is also how Russia does their research with Solarwinds. Do not try to attack supply chain or do security research without permission. They should be investigated by FBI for doing recon to a supply chain to make sure they weren't trying to do something worse. Minnesota leads the way in USA embarrassment once again.

Think of potential downstream effects of a vulnerable patch being introduced into Linux kernel. Buggy software in mobile devices, servers, street lights... this is like someone introducing a bug into university grading system.

Someone should look into who sponsored this research. Was there a state agent?

The experiment is ridiculous and apathetic. You need consent for this. They could've funded some internal project and had people submit commits at random, with a control group that doesn't. What they did is unethical to the max.

Good job on Greg for holding ground.

This would have been way more fun if they had a Black trans Womxn submit the bogus patches. The blowback to the White maintainer’s reply would have been hilarious * popcorn *

Ahh Minnesota... land of out-of-control and local government-supported rioting... so I guess shenanigans are expected.

Remember, the university of Minnesota was number 8 for top .edu addresses dumped in the Ashley Madison hack.

Scum of the earth.

The bad actors here should be expelled and deported. The nationalities involved make it clear this is likely a backfired foreign intelligence operation and not just 'research'.

They were almost certainly expecting an obvious bad patch to be reverted while trying to sneak by a less obvious one.

In other news: the three little pigs ban wolves after wolves exposed the dubious engineering of the straw house by blowing on it for a research paper.

  • So if an identifiable group messes with a project, but says "its for research!", then its OK? I'm just confused by your comment because it seems like you are upset with the maintainers for protecting their time from sources of known bad patches. And just... why? Where does the entitlement come from?

    • Being a maintainer is being a gate-keeper, by definition. Don't get me started about their "time"; most of these guys are paid to work on the Linux kernel, e.g. Greg Kroah-Hartman is paid by the Linux Foundation. It's literally his job. Linus has balls; I'm afraid Greg KH is a Karen compared to him.

      Other than that, they got caught red-handed accepting a shit patch, and they complain about ethical issues when the fault is entirely on their side for not doing their job properly.

      This whole thing points to a single question: how many times did they accept patches from black hat individuals who did not disclose their intentions?

      This calls the Linux development security model into question and highlights that it is insecure against such social engineering attacks, and they still manage to play the victims. That's pitiful... Own it, say you fucked up accepting the patch, don't blame others for your own incompetence.

      2 replies →

Whoa this is some heavy DC, a Chinese spy got busted trying to poison the Linux kernel. And then he came up with an excuse.

  • What evidence do you have that this is a spy? If you have evidence, you need to say what is in order to make a substantive post. If you have no evidence, then this comment is a smear and breaks the site guidelines badly. In that case please read https://news.ycombinator.com/item?id=26643049. This will get you banned here—we don't want this site to become nationalistic flamewar hell. No more of this please.

  • Just because they chose to use Chinese names doesn't make them less American. Are you suggesting non-Chinese Americans can't be spies?

I know this is going to be contentious, but a quick Google shows that

* both originated in China (both attended early university there)

* one appears to be on a student visa (undergraduate BA in China, now working on a PhD at UoM)

China doesn't allow its brightest and best to leave, without cause.

When I see research like this, it also makes me think of how "foolish" China sometimes views the West, and the rest of the world. Both for political reasons, eg to keep the masses under control, and due to a legitimate belief we all have in "we are right".

Frankly, whilst I have no personal animosity against someone working on behalf of what they see as right, for example, forwarding what they believe to be in the best interests of their country, and fellow citizens? I must still struggle against goals which are contrary to the same for my country, and my citizens.

Why all of the above?

Well, such things have been know for decades. And while things are heating up:

https://www.cbc.ca/news/politics/china-canada-universities-r...

"including the claim that some of the core technology behind China's surveillance network was developed in Canadian universities."

When one thinks of the concept - that a foreign power uses your own research funding, research networks, resources, and capabilities to research weaponry and tools to destroy you?

Maybe China should scoff at The West.

And this sort of research is like pen testing, without direct political ramifications for China itself.

Yes, 100%, these two could have just been working within their own personal sphere.

They also could be working on research for China. Like how easily one can affect the kernel source code, in plain sight. And even, once caught, how to regain confidence of those "tricked".

dang: This post does not deserve to be flagged. Downvote? Sure! Flagged? I've seen far more contentious things stated, when referring to the NSA. And all I'm doing here is providing context, and pointing to the possible motivations of those involved.

Others kept stating "Why would they do this?!" and "Why would they be so stupid?".

Further, at the end I explicitly acknowledge that I am postulating and that it may very well not be the case; I am only speculating on a possible motivation.

Are we now not allowed to speculate on motive? If so, I wonder, how many other posts should be flagged.

For I see LOADS of people saying "They did this for reason $x".

Lastly, anyone believing that China is not a major security concern to the West, must be living under a rock. There are literally hundreds of thousands of news articles, reports, of the Chinese government doing just this.

Yet to mention it as a potential cause of someone's actions is.. to receive a flag?

  • This is unjustified xenophobia. And besides, if they were really trying to get bugs into the Linux kernel to further some nefarious goal, why would they publish a paper on it?

    Simplest explanation is that they just wanted the publication, not to blame it on CCP or the researchers' nationality.

    • As I said, the research is the goal. Acknowledging China's past behaviour, and applying it to potential present actions, is not xenophobia.

  • Talking about flagged posts: why are they so hard to read? If I don't want to read a flagged post, I simply won't read it. Why are you forcing me to not read it by coloring it that way?

  • >This post does not deserve to be flagged.

    You start with "I know this is going to be contentious", you know this is flamebait.

    • Why would you assume it is flamebait? The person knows they have an opinion that is at the edge of the conversation, which might invoke disagreement, and disclaims it up front?

I am concerned that the kernel maintainers might be falling into another trap: it is possible that some patches were designed such that they are legitimate fixes, and moreover such that reverting them amounts to introducing a difficult-to-detect malicious bug.

Maybe I'm just too cynical and paranoid though.
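
A hypothetical illustration of that worry (the names here are invented, not taken from the actual revert series): if one of the suspect commits was a genuine fix, a blanket revert quietly brings the original bug back.

    #include <stdlib.h>

    struct session {
        char *buf;
    };

    /* Suppose the commit under suspicion added the NULL check below as a
     * genuine fix for a crash on allocation failure. Reverting it along
     * with everything else silently reintroduces the NULL-pointer
     * dereference it fixed. */
    static int session_init(struct session *s, size_t len)
    {
        s->buf = malloc(len);
        if (!s->buf)          /* added by the suspect commit */
            return -1;        /* a blanket revert deletes this early exit */
        s->buf[0] = '\0';
        return 0;
    }

    int main(void)
    {
        struct session s;

        if (session_init(&s, 32) != 0)
            return 1;
        free(s.buf);
        return 0;
    }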

Presumably the next step is an attempt to cancel the kernel maintainers on account of some politically powerful - oops, I mean, some politically protected characteristics of the researchers.

> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Woah, this attempt to incite wokes and cancellers is particularly pernicious.

  • Cancel Linux! Anyone?

    • One may wonder whether the repeated attacks on Linus over the tone he was using, until he had to take a break, weren't a way to cut down Linux's ability to perform by cutting off its head, which would be absolutely excellent for closed-source companies and Amazon.

      Imagine: If Linux loses its agility, we may have to either “Use Windows because it has continued upgrades” or “purchase Amazon’s version of Linux” which would be the only ones properly maintained and thus certified for, say, government purpose or GDPR purpose.

      (I’m paying Debian but I’m afraid that might not be enough).

      1 reply →

It seems we hit another level of "progressive" stupidity, and that has to be the most "woke" IT univ if they allow such BS to happen. Of course they deserve the ban. To be that stupid and self-centered, to despise someone else's work to this extent, and then play the victim - you need to be brainwashed by some university professor. Not to mention they confuse being friendly to beginners with tolerance for brainless parasites.

They exploited the authority of an educational institution, and of everyone SANE who is studying there, to intentionally break someone else's work for their own profit.

Not sure what the severity of this attack is, but if these "patches" got into critical parts of the kernel like NFS, they should not only be expelled but prosecuted. Because what's next? Another bunch of MORONS will launch attacks on medical equipment to see if they're able to kill someone, and then cry if they fail?

  • This seems like a stretch. While the main culprit is couching their accusations of slander in accessibility-oriented language as a way to deflect, there’s little to suggest “wokeness” is at play here in any respect, and to imply otherwise kind of gives away that you’ve already settled on a “culture war” lens, regardless of how well that maps to the story’s context.

Academic reputation has always mattered, but I can't recall the last time I've seen an example as stark as "I attend a university that is forbidden from submitting patches to the Linux kernel."

Somebody should have told them that since Microsoft is now pro-open-source, this wouldn't land any of them a cushy position after the blowup at uni.

This is ridiculously unethical research. Despite the positive underlying reasons, treating someone as a lab rat (in this case, maintainers reviewing PRs) feels almost sociopathic.

  • > Despite the positive underlying reasons

    I think that is thinking too kind of them. Sociopaths are often very well-versed to give "reasons" about what they do, but at the core it is powerplay.

From an infosec perspective, I think this is a knee-jerk response to someone attempting a penetration test in good faith and failing.

The system appears to have worked, so that's good news for Linux. On the other hand, now that the university has been banned, they won't be able to find holes in the process that may remain, that's bad news for Linux.

  • Is it in good faith when they were already told explicitly to not continue? That's the point where it becomes intentionally malicious IMO

When James O'Keefe tries to run a fake witness scam on the Washington Post, and the newspaper successfully detects it, the community responds with "Well played!"

When a university submits intentionally buggy patches to the Linux Kernel, and the maintainers successfully detect it, the community responds with "That was an incredibly scummy thing to do."

I sense a teachable moment, here.

  • Being a Linux Kernel maintainer is a thankless job. Being a Washington Post journalist is nothing more than doing Bezos' bidding and dividing the country in the name of profit.

Seems to me they exposed a vulnerability in the way code is contributed.

If this was Facebook and their response was:

> ~"stop wasting our time"

> ~"we'll report you"

the responses here would be very different.

Commenters have been reasonably accusing the researchers of bad practice, but I think there's another possible take here based on Hanlon's razor: "never attribute to malice that which is adequately explained by stupidity".

If you look at the website of the PhD student involved [1], they seem to be writing mostly legitimate papers about, for example, using static analysis to find bugs. In this kind of research, having a good reputation in the kernel community is probably pretty valuable because it allows you to develop and apply research to the kernel and get some publications/publicity out of that.

But now, by participating in this separate unethical research about the OSS process, they've damaged their professional reputation and probably set back their career somewhat. In this interpretation, their other changes were made in good faith, but have now been tainted by the controversial paper.

[1] https://qiushiwu.github.io/

  • I suppose it depends on what you make of Greg's opinion (I am only vaguely familiar with this topic, so I have none).

    > They obviously were _NOT_ created by a static analysis tool that is of any intelligence, as they all are the result of totally different patterns, and all of which are obviously not even fixing anything at all. So what am I supposed to think here, other than that you and your group are continuing to experiment on the kernel community developers by sending such nonsense patches?

    Greg didn't think that the static analysis excuse could be legitimate as the quality was garbage.

Researcher(s) show that it's relatively easy to introduce bugs into the kernel.

HN: let's hate researcher(s) instead of process

Wow.

Assume good faith, I guess?

  • The concept of the research is quite good. The way this research was carried out, is downright unethical.

    By submitting their bad code to the actual Linux mailing list, they have made Linux kernel developers part of their research without their knowledge or consent.

    Some of this vandalism has made it down into the Linux kernel already. These researchers have sabotaged other people's software for their personal gain, another paper to boast about.

    Had this been done with the developers' consent and with a way to pull out the patches before they actually hit the stable branches, then this could have been valuable research. It's the way the research was carried out that's the problem, and that's why everybody is hating on the researchers (rather than the research topic itself).

    • To provide some parallels on how the research was carried out:

      I see it as similar to

      - allowing recording of people without their consent (or warrant),

      - experimenting on PTSD by inducing PTSD without people's consent,

      - or medical experimentation without the subject's consent.

      And as for the argument that nobody could be allowed to know:

      Try sneaking into the White House and, when you get caught, tell them "I was just testing your security procedures".

      2 replies →

  • Wasting the time of random open source maintainers who have not consented to your experiment to try to get your paper published is highly unethical; I don't see why this is a bad faith interpretation.

  • There are two separate issues with this story.

    One is that what the researchers did is beyond reckless. Some of the bugs they've introduced could be affecting real world critical systems.

    The other issue is that the research is actually good at proving, by practical means, that pretty much anyone can introduce vulnerabilities into software as important and sensitive as the Linux kernel. This further erodes the industry's already-shaky confidence that we can have secure systems.

    While some praise may be appropriate for the latter, they absolutely deserve the heat they're getting for the former. There are many better ways to prove the point.

  • It is not hard to point a gun at someone's head.

    But let's assume your girlfriend points an (unknown to you) empty gun at your head, because she wants to know how you will react. What do you think is the appropriate reaction?

  • With that logic you can conduct research on how easy it is to rob elderly people in the street, inject poison in supermarket yogurts, etc.

I don't like this university ban approach.

Universities are places with lots of different students, professors, and different people with different ideas, and inevitably people who make bad choices.

Universities don't often act with a single purpose or intent. That's what makes them interesting. Prone to failure and bad ideas, but also new ideas that you can't do at corporate HQ because you've got a CEO breathing down your neck.

At the University of Minnesota there's 50k+ students at the Twin Cities campus alone, 3k plus instructors. Even more at other University of Minnesota campuses.

None of those people did anything wrong, and putting the onus on them to effect change seems unfair to me. The people being banned didn't do anything wrong.

Now, the kernel doesn't 'need' any of their contributions, but I think this sets a bad standard: penalizing and discouraging everyone under an umbrella when they've taken no bad actions themselves.

Although I can't put my finger on why, this ban on whole swaths of people in some ways seems very not open source.

The folks who did the thing were wrong to do so, but the vast majority of people now impacted by this ban didn't do the thing.

  • It sends a strong message - universities need to make sure their researchers apply ethics standards to any research done on software communities. You can't ignore ethics guidelines like consent and harm just because it's a software community instead of a meatspace community. I doubt the university would have taken any action at all without such a response.

    • Has the university taken any action yet? All I heard was that, after the blowback, UMN had their institutional review board retroactively review the paper. They investigated themselves and found no wrongdoing (the IRB concluded this was not human-subjects research).

      UMN hasn't admitted to any wrongdoing. The professor wasn't punished in any form whatsoever. And they adamantly state that their research review processes are solid and worked in this case.

      An indefinite ban is 100% warranted until such a time that UMN can demonstrate that their university sponsored research is trustworthy and doesn't act in bad faith.

  • > I don't like this university ban approach.

    I do, because the university needs to dismiss everyone involved, sever their connections with the institution, and then have a person in a senior position email the kernel maintainers with news that such has taken place. At which time the ban can be publicly lifted.

    • I think the ban hits the right institution, but I'd reason the other way around: is it really the primary fault of the individual (arguably somewhat immature, considering the tone of the email) PhD Student? The problem in academia is not "bad apples", but problematic organizational culture and misaligned incentives.

      3 replies →

  • > The people banned didn't do anything wrong.

    There are ways to do research like this (involve top-level maintainers, prevent patches going further upstream etc.), just sending in buggy code on purpose, then lying about where it came from, is not the way. It very much is wrong in my opinion. And like some other people pointed out, it could quite possibly be a criminal offense in several jurisdictions.

    • >There are ways to do research like this (involve top-level maintainers, prevent patches going further upstream etc.)

      This is what I can't grok. Why would you not contact GKH and work together to put a process in place to do this in an ethical and safe manner? If nothing else, it is just basic courtesy.

      There is perhaps some merit to better understanding and avoiding the introduction of security flaws, but this was not the way to do it. It boggles the mind that this group felt this was appropriate behavior. Disappointing.

      As far as banning the University, that is precisely the right action. This will force the institution to respond. UMN will have to make changes to address the issue and then the ban can be lifted. It is really the only effective response the maintainers have available to them.

  • It's not a ban on people, it's a ban on the institution that has demonstrated they can't be trusted to act in good faith.

    If people affiliated with UMN want to contribute to the Linux kernel, they can still do that in a personal capacity. They just can't do it as part of UMN research, but given that UMN has demonstrated it doesn't have safeguards to prevent bad-faith research, that seems reasonable.

  • I am writing this as someone who is very much "career academic". I am all on board with banning the whole university (and reconsidering the ban once the university shows they have some ethics guidelines in place). This research work should not have passed ethics review. On the other hand, it sounds preposterous that we even would need formal ethics review for CS research... But this "research" really embodies the whole "this is why we can not have nice things" attitude.

  • A university-wide ban helps by converting the issue into an internal issue for that university. The university officials will have to figure out what went wrong and rectify it.

    • Probably not, because nobody else at the university is affected right now, and probably won't be for a dozen more years, until someone else happens to take an interest in kernel work. Even in CS there are a ton of legitimate projects to work on, so a ban on just one common one isn't going to be noticed without more attention.

      That said, I suspect enough people have taken notice by now thanks to the press coverage.

  • > None of those people did anything wrong. Putting the onus on them to effect change to me seems unfair. The people banned didn't do anything wrong.

    Some of the people banned didn't do anything wrong. Others tried to intentionally introduce bugs into the kernel. Their ethics board either allowed that or was misled by them. Obviously they are having serious issues with ethics and process.

    I'm sure the ban can be reversed if they can plausibly claim they've changed. Since this was apparently already their second chance and they've been reported to the university before and the university apparently decided not to act on that complaint ... I have some doubts that "we've totally changed. This time we mean it" will fly.

  • I understand where this is coming from, and empathize with this but also empathize with the Kernel.org folx here. I think I'm okay with this because it isn't some government actor.

    It is not always easy to identify who works for whom at a university with regard to someone's research. The faculty member who seems to be directing this is identifiable, obviously. But it is not so easy to identify anyone acting on his behalf - universities don't maintain public lists of grad or undergrad students working for an individual faculty member. Add in that there seems to be a pattern of obfuscating these patches through different submission accounts, specifically to hide the role of the faculty advisor (my interpretation of what I'm reading).

    Putting the onus on others is unfair... but from the perspective of kernel.org, they do not know who among the population are bad actors and who aren't. The goal isn't to penalize the good folks; the goal is to prevent continued bad behavior under someone else's name. It's more akin to flagging email from a certain server as spam. The goal of the policy isn't to get people to effect change, it's to stop a pattern of introducing security holes into critical software.

    It is perfectly possible that this was IRB approved, but that doesn't necessarily mean the IRB really understood the implications. There are specific processes for research involving deception and for getting IRB approval for deception, but there is no guarantee that IRB members have the knowledge or experience with CS or open source communities to understand what is happening. The backgrounds of IRB members vary enormously.

  • The University of Minnesota IRB never should have approved this research. So this is an institutional level problem. This is not just a problem with some researchers.

    It's unfortunate that many people will get caught up in this ban that had nothing to do with it, but the university deserves to take a credibility hit here. The ball is now in their court. They need to either make things right or suffer the ban for all of their staff and students.

  • Agree that universities don't (and shouldn't) act with a single purpose or intent, but they need to have institutional controls in place that prevent really bad ideas from negatively affecting the surrounding community. Those seem to be lacking in this case, and in their absence I think the kernel maintainers' actions are entirely justified.

  • I don't like it either but it's not as bad as it sounds: the ban almost certainly isn't enforced mindlessly and with no recourse for the affected.

    I'm pretty sure that if someone from the University of Minnesota would like to contribute something of value to the Linux kernel, dropping a mail to GregKH will result in that being possible.

  • It's definitely killing a mosquito with a nuke, but what are the alternatives? The kernel maintainers claim these bogus commits already put too much load on their time. I understand they banned the whole university out of frustration and also because they simply don't have the time to deal with them in a more nuanced way.

    • There's a real cost. What's your estimate for going through each of these 190 patches individually, looking at the context of the code change, checking whether the "ref counting or whatever" bug fix is real, and doing some real testing to confirm it?

      https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

      That looks like quite significant effort. And if most of those fixes were real, then after the revert there will be 190 known bugs back in the kernel until it's all cleaned up. That has a cost too.

      Looks like a large and expensive mess that someone other than that university will have to clean up, because they're not trustworthy at the moment. (Even just enumerating the affected commits is non-trivial; see the sketch below for a sense of the scale.)
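
      Purely to illustrate the scale, here is a minimal sketch, not anything the maintainers actually ran: it assumes a local linux.git clone at a made-up path and assumes filtering by the umn.edu author domain is a good enough first pass, and simply lists the commits someone would now have to re-review.

        # Hypothetical sketch: list commits whose author email is from umn.edu,
        # just to gauge the size of the re-review job. The REPO path and the
        # domain filter are assumptions; the real revert series was curated by hand.
        import subprocess

        REPO = "/path/to/linux"   # assumed local clone of the kernel tree
        DOMAIN = "@umn.edu"       # assumed author filter

        lines = subprocess.run(
            ["git", "-C", REPO, "log", "--all", f"--author={DOMAIN}",
             "--pretty=format:%h %ad %s", "--date=short"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()

        print(f"{len(lines)} commits to re-review")
        for line in lines:
            print(" ", line)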

    • Are they even killing a mosquito?

      If someone wants to introduce bugs, they can.

      Meanwhile, lots of people are banned for some other person's actions.

      1 reply →

I don't quite understand the outrage. I'm quite sure most HN readers have been doing, or been involved in, similar experiments one way or another. Isn't A/B testing an experiment on consumers (people) without their consent?

  • There is a sea of difference between A/B testing your own property, and maliciously introducing a bug on a critical piece of software that's running on billions of devices.

    • >> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

      "We did not introduce or intend to introduce any bug or vulnerability in the Linux kernel. All the bug-introducing patches stayed only in the email exchanges, without being adopted or merged into any Linux branch, which was explicitly confirmed by maintainers. Therefore, the bug-introducing patches in the email did not even become a Git commit in any Linux branch. None of the Linux users would be affected."

      4 replies →

  • Isn't A/B testing usually about things like changing a layout, or comparing two things that... work, as opposed to deliberately introduced bugs?

So many comments here repeat the refrain, “They should have asked for consent first”. But wouldn't that be detrimental to the very subject of the research, namely stealthily introducing security vulnerabilities? What would a consent request look like that preserves the surprise factor? A university approaches you and says, “Would it be okay for us to submit some patches with vulnerabilities for review, and you try to guess which ones are good and which ones have bugs?” Of course you would be extra careful when reviewing those specific patches. And real malicious actors would hardly be so kind and ethical as to announce their intentions beforehand.

  • It could have been done similar to how typosquatting research was done for ruby and python packages. The owners of the package repositories were contacted, and the researchers waited for approval before starting. I wasn't a fan of that experiment either for other reasons, but hiding it from everyone isn't the only option. Also, "you wouldn't have allowed me to experiment on you if I'd asked first" is a pretty disgusting attitude to have.

    • "you wouldn't have allowed me to experiment on you if I'd asked first"

      I'm shocked the researchers thought this wasn't a textbook violation of research ethics - we talk about the effects of the Tuskegee Study on the perception of the greater scientific community to this day.

      This is a smaller transgression that hasn't resulted in deaths, but given that it would not have been difficult to do this research ethically, and that we now spend time educating on the importance of ethics, it's perhaps all the more frustrating.

  • >So many comments here refrain, “They should have asked for consent first”.

    The Linux kernel is a very large space with many maintainers. It would be possible to reach out to the leadership of the project for approval without notifying individual maintainers, and have the leadership announce "Hey, we're going to start allowing experiments on the contribution process, please let us know if you'd like to opt out" - or at least work towards creating a process that allows experiments on maintainers and the commit-approval process, under the overall expectation that experiments may happen but that *they will be reverted before they reach stable trees*.

    The way they did their work could impact more than just the maintainers and could affect the reputation of the Linux project, and it's very hard for me to see why it couldn't have been done in a way that meets the standards for ethical research.

  • Well, yeah, but the priority here shouldn't be to allow the researchers to do their work. If they can't do their research ethically then they just can't do it; too bad for them.

  • Yeah we get to hold people who are claiming to act in good faith to a higher standard than active malicious attackers. Their actions do not comport with ethical research practices.

  • Ethics in research matters. You don't see vaccine researchers shooting up random unconsenting people from the street with latest vaccine prototypes. Researchers have to come up with a reasonable research protocol. Just because the ethical way to do what UMN folks intended to do isn't immediately obvious to you - doesn't mean that it doesn't exist.

Someone does voluntary work, and people think that gives them some ethical privilege to be asked before someone puts their work to the test? Sure, it would be nice to ask, but at the same time it renders the testing useless. They wanted to see how the review goes when the reviewers aren't aware that someone is testing them. You can't do this with consent.

The wasted-time argument is nonsense too; it's not like they did this thousands of times. Besides, reviewing intentionally bad code is not a waste of time - it is just as productive as reviewing "good" code, and together with the follow-up fix it should be even more valuable work. It not only adds a patch, it also makes the reviewer better.

Yeah, it ain't fun when people trick you or point out that you did not succeed at what you tried to do. But instead of playing the victim and playing the unethical-human-experiment card, maybe focus on improving.

  • > They wanted to see how the review goes if they aren't aware that someone is testing them. You cant do this with consent.

    Ridiculous. Does the same apply to pentesting a bank or a government agency? If you wanted to pentest those, of course you'd get approval from an executive who has the power to sanction it. Why would Linux development be an exception? Just ask GKH or someone to allow you to do this.

    • Ridiculous comparison indeed. There was no pen testing going on. Submitted code does not attack or harm any running system, and whoever uses it does so completely voluntarily. I don't need anyone's approval for that. The license already states that I'm not liable in any way for what you do with it.

  • Or you could cease to do the voluntary work for them, because they clearly are not contributing to your goals. This is what the kernel maintainers have chosen, and they have just as much right to do so. And you can perfectly well do this with consent; there's a wealth of knowledge from psychology and sociology on how to run tests on people with consent without invalidating the test.

    • I never said they cannot stop reviewing the code. They can do whatever the heck they want; I'm not going to tell a volunteer what they can and cannot do. They no more need anyone's consent to ignore submissions than those submitting need their consent. It's voluntary: if you don't see a benefit you are free to stop, but not free to tell other volunteers what to do and not to do.

  • A far better approach would be to study past patch submissions and see how many bugs were introduced as a result of those patches being accepted and applied, without any interference of any kind.

    The problem with that is that it's a lot of work, and they didn't want to do that work in the first place. (A rough sketch of what that retrospective measurement might look like follows.)
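
    A minimal sketch of that idea, assuming a local linux.git clone at a made-up path and using the kernel's "Fixes:" commit-message tags as a crude proxy for "this earlier, accepted patch introduced a bug". The path and the heuristic are my assumptions, not anyone's published method.

      # Hypothetical sketch: count how often already-accepted commits are later
      # named in a "Fixes: <sha>" tag, as a rough measure of bugs introduced by
      # accepted patches. The REPO path and the tag heuristic are assumptions.
      import re
      import subprocess
      from collections import Counter

      REPO = "/path/to/linux"   # assumed local clone of the kernel tree

      messages = subprocess.run(
          ["git", "-C", REPO, "log", "--pretty=format:%B%x00"],
          capture_output=True, text=True, check=True,
      ).stdout.split("\x00")

      blamed = Counter()
      for message in messages:
          for sha in re.findall(r"^Fixes:\s+([0-9a-f]{8,40})", message, re.MULTILINE):
              blamed[sha[:12]] += 1   # commit later identified as having introduced a bug

      print(f"{len(blamed)} distinct commits were later named in a Fixes: tag")
      print("most frequently blamed:", blamed.most_common(5))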

  • Agreed. In fact, the review process worked, and now they are going to ban all contributions from that university, as it should be. I think it all worked out perfectly.

    • Pathetic. It did not work at all; the researchers told them whenever they missed a planted bug.

  • > Someone does voluntary work and people think that gives them some ethical privilege to be asked before someone puts their work to the test?

    Yes. Someone sees the work provided to the community for free and thinks that gives them some ethical privilege to put that work to the test?