Comment by quotemstr
5 years ago
That's the motte. The bailey is that the ML fairness people treat any inconvenient output of a model as prima facie evidence that the model's inputs are tainted by bias, and then demand that those inputs be adjusted until they produce the outputs that please them. They've determined the conclusion they want to see beforehand. That attitude is the opposite of truth-seeking.
But on the other hand, don't you do the same thing with training? If the output of your model doesn't match your expectations, do you treat it as pure, objective mathematical reality, or do you adjust the training parameters until the output matches your expectations?
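
(A minimal sketch of what "adjust the parameters until the output matches your expectations" means in practice; this is ordinary supervised training, not anyone's specific fairness intervention, and all names and values here are illustrative assumptions.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: inputs plus the outputs we *expect* the model to produce.
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    targets = X @ true_w + rng.normal(scale=0.1, size=100)

    # Model parameters, adjusted step by step until predictions match the targets.
    w = np.zeros(3)
    lr = 0.05

    for step in range(200):
        preds = X @ w                                 # current model output
        grad = 2 * X.T @ (preds - targets) / len(X)   # gradient of mean squared error
        w -= lr * grad                                # nudge parameters toward the expected output

    print("learned weights:", w)  # ends up close to true_w

The loop's whole purpose is to move the parameters toward whatever targets were chosen in advance, which is the sense in which training itself "adjusts the inputs to produce the outputs you want."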