Comment by Loughla

5 years ago

It's not the math that is wrong. The math is correct.

The inputs and assumptions made by the people selecting the math are the 'morally wrong' part.

Bias is real, like it or not. Your worldview, as a data scientist or programmer or whatever, shapes what you select as important factors in 'algorithm A'. Algorithm A then applies those factors to other people in the system, baking in your biases but screening them behind math.

That's the motte. The bailey is that the ML fairness people treat any inconvenient output of a model as prima facie evidence that the model's inputs are tainted by bias, and then demand that those inputs be adjusted until the model produces the outputs that please them. They've decided the conclusion they want to see beforehand. That attitude is the opposite of truth seeking.

  • But on the other hand, don’t you do the same thing with training? If the output of your model doesn’t match your expectations, do you treat it as the absolutely pure objective mathematical reality, xor do you adjust the training parameters until the output matches your expectations?
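The tuning loop the reply describes can be sketched in a few lines (all names here are hypothetical, and the "model" is a toy): you pick a target output in advance, then search over parameter settings until the model's output matches it. Nothing in the loop itself checks whether the target was the right thing to want.

```python
def model(x, weight):
    """Toy 'model': a single multiplicative parameter."""
    return x * weight

def tune_until_pleasing(x, target, weights):
    """Return the weight whose output is closest to the pre-chosen target.

    This is the point of the reply: the search optimizes for agreement
    with an expectation chosen before training, not for 'objective' truth.
    """
    return min(weights, key=lambda w: abs(model(x, w) - target))

best = tune_until_pleasing(x=10, target=42, weights=[1, 2, 3, 4, 5])
print(best)             # the weight that best reproduces the target
print(model(10, best))  # the resulting output, close to 42 by construction
```

The same shape appears in real hyperparameter search: the loss or validation metric encodes what the practitioner already decided counts as a good answer.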