
Comment by dataflow

4 years ago

This is funny, but not at all a good analogy. There is obviously nowhere near enough public interest or value in testing the security of this professor's private home to justify invading his privacy. On the other hand, if he kept dangerous things at home (say, BSL-4 material), then his house would need 24/7 security and you could probably justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.

> This is funny, but not at all a good analogy

Yeah, for one thing, to be a good analogy, you wouldn't pick the lock while he's away, leave without entering, and drop a note. You'd need to be an actual service worker for a trusted home-service business, use that trust to enter while he is home, carry out sabotage, and say nothing until the sabotage is detected, traced back to you, and cited when he cancels the contract with the firm you work for, and only then offer the "research" rationale.

Of course, if you did that you would be both unemployed and facing criminal charges in short order.

  • Your strawman would be more of a steelman if you actually addressed the points I was making.

Everyone has been saying "This affects software that runs on billions of machines and could cause untold amounts of damage and even loss of human life! What were the researchers thinking?!" Yet the follow-up thought, "Maintainers for software that runs on billions of machines, where bugs could cause untold amounts of damage and even loss of human life, didn't have a robust enough system to prevent this?", never seems to occur to anyone. I don't understand why.

  • It's occurred to absolutely everyone. What doesn't seem to have occurred to many people is that there is no such thing as a review process robust enough to prevent malicious contributions. Have you ever done code review for code written by mediocre developers? It's impossible to find all of the bugs without spending 10x more time than it would take to just rewrite it from scratch yourself. The only real alternative is to not be open source at all and only allow contributions from people who have passed much more stringent qualifications.

    There is no process that can substitute for trust mechanisms. Or, if you want to view it that way, ignoring the university's protests and blanket-banning all contributions from anybody there, with no further investigation, is part of the process.

  • People are well aware of the theoretical risk of bad commits from malicious actors. They are justifiably extremely upset that someone is intentionally turning this from a theoretical attack into a real-life issue.

    • I'm not confused about why people are upset at the researchers who introduced bugs and did it irresponsibly. I'm confused about why people aren't upset that an organization managing critical infrastructure is so underprepared to deal with risks posed by rank amateurs, risks it should have known about and had a mechanism for dealing with for years.

      What this means is that anyone who could hijack a university email account, spend a semester or so as a student at a state university, or work at a FAANG corporation could insert backdoors with little scrutiny and little chance of detection, because there are no robust safeguards to verify that commits don't do anything sneaky; the process relies on trusting that everyone in code review is acting in good faith. I have trouble understanding the thought process that ends up ignoring the maintainers' duty to make sure that committed code doesn't endanger security or lives, simply because they assumed everything was 'cool'. The security posture of this critical infrastructure is deficient, and no one wants to actually address it.


    • I remember a true story (I forget by whom) in which the narrator set up a simple website for some local community activity. A stranger hacked and defaced the website and admitted to doing so without revealing his identity. When he contacted the author of the website, his position was, "I did you a favor (by revealing how vulnerable it was)." The person telling the story reacted, "Yes, but... you were the threat you're warning me of." It didn't result in the author recreating the site on a more secure platform; it only resulted in him deciding it was no longer worth the trouble to provide this free service.

It wasn't intended to be serious. But on the other hand, he has now quite openly and publicly declared himself to be part of a group of people who mess around with security-related things as a "test".

He shouldn't be surprised if it has some unexpected consequences for his own personal security, like some unknown third parties porting away his phone number(s) as a social-engineering test, pen-testing his office, or similar.

There's also not nearly as much harm in that as there is in wasting maintainer time and risking getting faulty patches merged.