Comment by andrewzah

4 years ago

Well, if there's no process, then it's not ethical (and sometimes, not legal) to purposefully introduce bad commits or do things like that. You need consent.

Firstly, it accomplishes nothing. We already all know that PRs and code submissions are a potential vector for buggy code or security vulnerabilities. This is like saying water is wet.

Secondly, it wastes the time of the people working on the Linux kernel and ruins the trust in code coming from the University of Minnesota.

All of this happened because they cared more about their own research than about the ethics of doing this sort of thing, and they continued to engage in this behavior after receiving a warning.

First of all, whether something is ethical is an opinion, and in my opinion, it is not unethical.

Even if I considered it unethical, I would still want this test to be performed, because I value kernel security above petty ideological concerns.

If this is illegal, then I don't think it should be illegal. There are always debates about the legality of hacking, but there's no doubt that many illegal (and arguably unethical) acts of hacking have improved computer security. If you remember the dire state of computer security in the early 2000s, remember that the solution was not to throw all the hacker kids in jail.

  • > I would still want this test to be performed, because I value kernel security above petty ideological concerns.

    The biggest issue around this is consent. You can totally send an email saying "we're doing research on the security implications of the pull request process, can we send you a set of pull requests and you can give us an approve/deny on each one?"

    > If you remember the dire state of computer security in the early 2000s, remember that the solution was not throw all the hacker kids in jail.

    You weren't there when Mirai caused havoc due to thousands of insecure IoT devices getting pwned and turned into a botnet... and introducing more vulnerabilities is never the answer.

  • The kernel team literally already does this by the very nature of reviewing code submission. What do you think they do if not examining the incoming code to determine what, exactly, it does?

    "because I value kernel security above petty ideological concerns"

    This implies that this is the only or main way security is achieved. This is not true. Also, "valuing kernel security above other things"... is an ideological concern. You just happen to value this ideology more than other ideological concerns.

    "whether something is ethical is an opinion"

    It is, but there are bases for forming opinions on what is moral and ethical. In my opinion, secretly testing people is not ethical. Again, the difference here is consent. Plenty of organizations agree to probing/intrusion attempts; there is no reason to secretly do this. Again, security is not improved only by secret intrusion attempts.

    "there's no doubt that many illegal (and arguably unethical) acts of hacking have improved computer security"

    I don't believe in the "ends justify the means" argument. Either it's ethical or it isn't; whether or not security improved in the meantime is irrelevant. Security also improves in its own right over time.

    I do agree that the current laws regarding "hacking" are badly worded and very punitive, but crimes are crimes. Just because you like that kind of hacking or think it may be beneficial does not change the fact that it was unauthorized access or an intentional attempt to submit bad, buggy code, etc.

    We have to look at it exactly like we look at unauthorized access to, e.g., business properties or people's homes. That doesn't change just because it's digital. You don't randomly walk up to your local business with a lock picking kit to "test their security". You don't randomly steal someone's wallet to "test their security". Why is the digital space any different?

    • > The kernel team literally already does this by the very nature of reviewing code submission. What do you think they do if not examining the incoming code to determine what, exactly, it does?

      Maybe that's what they claim to do, but how do you know for sure? How do you test for it?

      > This implies that this is the only or main way security is achieved.

      It doesn't, there are many facets of security, social engineering being one of them. Maybe it's controversial to test something that requires misleading people, but realistically the only alternative is to ignore the problem. I prefer not to do that.

      > Plenty of organizations agree to probing/intrusion attempts; there is no reason to secretly do this.

      Yes there is: suppose you use some company's service and they refuse to cooperate with regard to pentesting. The "goody two-shoes" type of person just gives up. The "hacker type" puts on their grey hat and plays some golf. Is that unethical? What if they expose some massive flaw that affects millions of unwitting people?

      > I don't believe in the ends justify the means argument.

      Not all ends justify all means, but some ends do justify some means. In fact, if something is a justification for some means, it's almost certainly an end.

      > I do agree that the current laws regarding "hacking" are badly worded and very punitive, but crimes are crimes.

      Tautologically speaking, crimes are indeed crimes, but what are you trying to say here? Just because it's a crime doesn't mean it is unethical. Sometimes, not performing a crime is unethical.

      > You don't randomly walk up to your local business with a lock picking kit to "test their security".

      Yes, but only because that's illegal, not because it is unethical.

      > You don't randomly steal someone's wallet to "test their security".

      Again, there's nothing morally wrong with "stealing" someone's wallet and then giving it back to them. Better I do it than some pickpocket. I have been tempted on numerous occasions to do exactly that, but it's rather hard explaining yourself in such a situation...

      > Why is the digital space any different?

      Because the risk of running into a physical altercation is quite low, as is the risk of getting arrested.

  • The Human Research Protection Program Plan & IRB determine whether something is unethical, and while these documents are based on opinions, they carry weight due to consensus.

    The way these (intrusive) tests (e.g. anti-phishing) are performed within organizations is with the knowledge of, and a very strongly worded contract between, the owners of the company and the party conducting the tests.

    It is illegal in most of the world today. Even if you disagree with responsible disclosure, you would be well advised not to send phishing mail to companies (whether your intention was to improve their security or not is beside the point).