
Comment by yowlingcat

4 years ago

I think you're getting heavily downvoted for your comments on this submission because you seem to be missing a critical sociological dimension of assumed trust. If you submit a patch from a real-name email address, you get an extra dimension of human trust, and likewise an extra dimension of human repercussions if your actions are deemed malicious.

You're criticizing the process, but the truth is that without a real-name email and an actual human being's "social credit" to be burned, there's no proof that these researchers would have achieved the same findings. The more interesting question to me is whether they would have achieved the same results had they used anonymous emails. If so, there might be some substance to your contrarian view that the process itself is flawed. But as it stands, I'm not sure that's the case.

Why? Well, look at what happened. The maintainers found out and blanket-banned the bad actors. It's going to be a little hard to reproduce that research now, isn't it? Arbitraging societal trust for research doesn't just bring ethical challenges but /practical/ ones involving US law and standards for academic research.

> actual human being's "social credit" to be burned

How are kernel maintainers competent at telling a real person from a fake real person? Why is there any inherent trust?

It's clear the system is fallible, but at least now people are humbled enough not to instantly dismiss the risk.

> The maintainers found out and blanket banned bad actors.

With collateral damage.

  • The mail server is usually a pretty good indicator. I'm not an expert, but you generally can't get a university email address without being enrolled.

    • Additionally, some universities use a subdomain for student addresses, making top-level email addresses available only to staff and a small selection of PhD students who need them for their research.