Comment by Xelynega

2 months ago

In both those examples, why are you not giving the benefit of the doubt to the failed attempts?

If GitHub attempted to anonymize applications and still ended up with a biased selection, could that not be a result of them failing to eliminate the bias they set out to eliminate?

Same with the blind auditions for orchestras: if they found that the stated methods weren't actually eliminating bias, why is it bad that they're not using them anymore?

If you don't know anything about the other person and are selecting blindly, there's no bias by definition, so that particular selection is not biased regardless of what it looks like.

If the resulting distribution is not what you expected it to be, then there are two simple explanations: either your model was wrong, or the bias that causes the deviation is happening at an earlier stage in the process.
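
A minimal sketch of what that comparison could look like, with entirely made-up numbers: an exact binomial test asks how surprising an observed selection count is if you assume the blind stage itself is unbiased and your model of the applicant pool is right.

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def two_sided_p(k: int, n: int, p: float) -> float:
    """Exact two-sided binomial test: sum the probabilities of every
    outcome that is no more likely than the observed one."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1) if binom_pmf(i, n, p) <= pk)

# Hypothetical numbers: a blind stage selects 18 of 100 candidates from a
# group that makes up 30% of the applicant pool.
observed, total, expected_share = 18, 100, 0.30
print(f"p-value = {two_sided_p(observed, total, expected_share):.4f}")
```

A small p-value only says the outcome is unlikely under your expectation; it cannot, by itself, tell you whether the expectation (the model) was wrong or whether the pool was already shaped by bias at an earlier stage.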

At the same time, if going from non-blind to blind changes the result, that means there was bias and the blinding eliminated it. The second article pretty much openly admits this, and then demands that it be reinstated to produce the numbers they would like to see.

The question is whether or not the results are biased. Maybe the best musicians tend to be male? It is hard to argue bias in a blind musical audition.

  • And it's hard to argue that the method you *thought* would eliminate bias actually eliminates the bias you set out to eliminate.

    The only way to do that is to compare results with expectations, no?