Comment by testplzignore

4 years ago

It may be unethical from an academic perspective, but I like that they did this. It shows there is a problem with the review process if it is not catching 100% of this garbage. Actual malicious actors are certainly already doing worse and maybe succeeding.

In a roundabout way, this researcher has achieved their goal, and I hope they publish their results. Certainly more meaningful than most of the drivel in the academic paper mill.

It rather exposes a very serious problem with the incentives present in scientific research, and a poisonous culture that evidently rewards malicious behavior. Science enjoys a lot of freedom and trust from citizens, but that trust must not be abused. If some children threw fireworks under your car, or mixed sugar into your gas tank, just to see how you react, that would have negative community effects too. Adult scientists should be fully aware of this.

In effect, even valuable contributions from universities will now be viewed with more suspicion, which will be very damaging in the long run.

>It shows there is a problem with the review process if it is not catching 100% of this garbage

What review process catches 100% of garbage? It's a mechanism for catching maybe 99% of it -- otherwise the Linux kernel would have no bugs.

  • It does raise questions though. Should there be a more formal scrutiny process for less trusted developers? Some kind of background check process?

    Runs counter to how open source is ideally written, but for such a core project, perhaps stronger checks are needed.

    • These researchers were in part playing on the reputation of their university, right? Now people at that university are no longer trusted. I'm not sure a more formal scrutiny process will bring about better results, I think it would be reasonable to see if the university ban is sufficient to discourage similar behavior in the future.

I'm not sure what we learned. Were we under the impression that it's impossible to introduce new (security) bugs in Linux?

  • > Were we under the impression that it's impossible to introduce new (security) bugs in Linux?

    I've heard it many times that they're thoroughly reviewed and back doors are very unlikely. So yes, some people were under the impression.

The paper indicates that the goal is to prove that OSS in particular is vulnerable to this attack, but it seems that any software development ecosystem shares the same weaknesses. The choice of an OSS target seems to be one of convenience as the results can be publicly reviewed and this approach probably avoids serious consequences like arrests or lawsuits. In that light, their conclusions are misleading, even if the attack is technically feasible. They might get more credibility if they back off the OSS angle.

  • Not really. You can't introduce bugs like this into my company's code base, because the code is protected from random people on the internet accessing it. Your first step would be to find an exploitable bug in GitHub, but then you'd be bypassing peer review as well to get in. (Actually, I think we would notice that, but that's because of a process we happen to have that most don't.)

> It shows there is a problem with the review process if it is not catching 100% of this garbage.

Does that add anything new to what we know since the creation of the "obfuscated C contest" in 1984?

> It shows there is a problem with the review process if it is not catching 100% of this garbage.

It shows nothing of the sort. No review process is 100% foolproof, and opensource means that everything can be audited if it is important to you.

The other option is to closed-source everything, and I can guarantee that those review processes let stuff through too, even if it's only "to meet deadlines" -- and you will be unlikely to be able to audit it.

I'm unable to follow the kernel thread (stuck in an age between Twitter and newsgroups, sorry), but...

did these "researchers" in any way demonstrate that they were going to come clean about what they had done before their "research" made it anywhere close to release/GA?

By your logic, you would allow recording people without their consent, studying PTSD by inducing PTSD without people's consent, or medical experimentation without the subject's consent.

Try sneaking into the White House, and when you get caught, tell them "I was just testing your security procedures."