Comment by walrus01

4 years ago

> I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018

New plan: Show up at Liu's house with a lock picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo, "hey, just testing, bro! Legitimate security research!"

If they wanted to do security research, they could have done it by asking the reviewers to help: send them a patch and ask, 'Is this something you would accept?', instead of intentionally sending malicious commits and causing static on the commit tree and mailing lists.

  • Even better

    Notify someone up the chain that you want to submit malicious patches, and ask them if they want to collaborate.

    If your patches make it through, treat it as though the reviewers just got red-teamed: everyone who reviewed it and let it slip gets to have a nervous laugh, the commit gets rejected, and everyone learns something.

    • Exactly what I was thinking. This should have been set up like a normal pen test, where only seniors very high up the chain are in on it.

    • I wonder if informing anyone of the experiment would be frowned upon as it might affect the outcome? However, this research doesn’t appear to be fastidious about scientific integrity so maybe you are right.

  • Wouldn't that draw more attention to the research patches, compared to a "normal" lkml patch? If you (as a maintainer) expected the patch to be malicious, wouldn't you be extra careful in reviewing it?

    • You probably can learn more and faster about new drugs by testing them in humans rather than rats. However, science is not above ethics. That is a lesson history has taught us in the most unpleasant of ways.

    • You don't have to say you are studying the security implications; you could say you are studying something else, like turnaround time for patches, or level of critique, or any number of things.

  • Did they keep track of their additions and submit a list to revert once they managed to get them added?

    From the looks of it, they didn't, even when the patches were heading out to stable releases?

    That's just using the project with no interest in avoiding the harm they cause.

    • Yeah, so an analogy would be putting human feces into food to see if the waiter will actually serve it to the dining customer, and then, if they do, putting a checkmark on a piece of paper and leaving without warning anyone that they're about to eat poop.

This is funny, but not at all a good analogy. There's obviously nowhere near enough public interest or value in testing the security of this professor's private home to justify invading his privacy. On the other hand, if he kept dangerous things at home (say, BSL-4 material), his house would need 24/7 security, and you could probably justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.

  • > This is funny, but not at all a good analogy

    Yeah, for one thing, to make it a good analogy, rather than picking the lock without entering while he's away and leaving a note, you'd need to be an actual service worker for a trusted home-service business and use that trust to enter while he is home, conduct sabotage, say nothing until the sabotage is detected, traced back to you, and cited when he cancels the contract with the firm you work for, and only then invoke the “research” rationale.

    Of course, if you did that you would be both unemployed and facing criminal charges in short order.

    • Your strawman would be more of a steelman if you actually addressed the points I was making.

  • Everyone has been saying, "This affects software that runs on billions of machines and could cause untold amounts of damage and even loss of human life! What were the researchers thinking?!" Yet the follow-up thought, "Maintainers of software that runs on billions of machines, where bugs could cause untold damage and even loss of human life, didn't have a robust enough review process to prevent this?", never seems to occur to anyone. I don't understand why.

    • It's occurred to absolutely everyone. What doesn't seem to have occurred to many people is that there is no such thing as a review process robust enough to prevent malicious contributions. Have you ever done code review for code written by mediocre developers? It's impossible to find all of the bugs without spending 10x more time than it would take to just rewrite it from scratch yourself. The only real alternative is to not be open source at all and only allow contributions from people who have passed much more stringent qualifications.

      There is no such thing as a process that can substitute for trust. Or, if you want to view it that way, ignoring the university's protests and blanket-banning all contributions made by anybody there, with no further investigation, is part of the process.

    • People are well aware of theoretical risk of bad commits by malicious actors. They are justifiably extremely upset that someone is intentionally changing this from a theoretical attack to a real life issue.

  • It wasn't intended to be serious. But on the other hand, he has now quite openly and publicly declared himself to be part of a group of people who mess around with security-related things as a "test".

    He shouldn't be surprised if it has some unexpected consequences to his own personal security, like some unknown third parties porting away his phone number(s) as a social engineering test, pen testing his office, or similar.

  • There's also not nearly as much harm in it as there is in wasting maintainer time and risking faulty patches getting merged.

Put a flaming bag of shit on the doorstep, ring the doorbell, and write a paper about the methods Liu uses to extinguish it?

I wouldn't be surprised if the good, conscientious members of the UMN community showed up at his office (or home) door to explain, in vivid detail, the consequences of doing unethical research.

The actual equivalent would be to steal his computer, wait a couple days to see his reaction, get a paper published, then offer to return the computer.