Comment by quotemstr

5 years ago

It takes years and years of training in advanced dialectic bullshit to get to the point where you can say, with a straight face, that math is morally wrong. It's utterly absurd to demand that we censor models and worsen their output to conform to some activist's idealized image of how the world should be. Only by letting models report the true facts of the world as it is can we optimize for human happiness.

Math is a language for modelling things. It can be intrinsically correct, as in consistent, but that doesn't say anything about the model's actual validity.

Every choice we make is a moral choice. Once we're done modelling and put that model to use, we're making a moral choice.

For example, if you believe that lowering debt default rates is more important than fairness to an individual, then you're making a moral choice. If you believe it is OK to deny loans to Black applicants because a comparatively large share of Black borrowers default on their loans, that's a moral choice too.

Furthermore, ascribing truth to models is just an age-old human fallacy. The truth can fit plenty of models reasonably well; none of the models are the truth.

It's not the math that is wrong. The math is correct.

The inputs and assumptions made by the people selecting the math are the 'morally wrong' part.

Bias is real, like it or not. Your worldview, as a data scientist or programmer or whatever, impacts what you select as the important factors in 'algorithm A'. Algorithm A then applies those factors to everyone else passing through the system, baking in your biases but screening them behind math.
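To make that concrete, here is a minimal, hypothetical sketch (Python with synthetic data and invented feature names such as zip_code; nothing from any real lending system). In this toy world defaults depend only on individual income, and the fitting code and decision threshold are identical in both runs; the only thing that changes is the human decision about which factor is "important", and that decision alone is what ends up denying creditworthy members of one group.

    # Minimal, hypothetical sketch (synthetic data, not anyone's real lending model).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    group = rng.integers(0, 2, n)                  # protected attribute (synthetic)
    income = rng.normal(55 - 15 * group, 10, n)    # group 1 is poorer on average
    zip_code = group + rng.normal(0, 0.2, n)       # neighbourhood proxy for group

    # Defaults depend ONLY on individual income in this toy world.
    p_default = 1 / (1 + np.exp((income - 45) / 8))
    defaulted = (rng.random(n) < p_default).astype(int)

    def approval_rates(feature, name):
        X = feature.reshape(-1, 1)
        p = LogisticRegression().fit(X, defaulted).predict_proba(X)[:, 1]
        approve = p < 0.5                          # same threshold, same "math"
        rich = income > 55                         # demonstrably creditworthy applicants
        for g in (0, 1):
            rate = approve[rich & (group == g)].mean()
            print(f"{name:>14}: high-income group {g} approved {rate:.0%} of the time")

    approval_rates(income,   "income model")    # judges the individual
    approval_rates(zip_code, "zip-code model")  # judges the neighbourhood

The income model approves high earners in both groups; the zip-code model denies nearly every member of group 1, high earners included, because the chosen factor is really a stand-in for group membership.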

  • That's the motte. The bailey is that the ML fairness people use any inconvenient output of a model as prima facie evidence that the model inputs are tainted by bias --- then these activists demand that these inputs be adjusted so as to produce the outputs that please them. They've determined the conclusion they want to see beforehand. This attitude is the total opposite of truth seeking.

    • But on the other hand, don’t you do the same thing with training? If the output of your model doesn’t match your expectations, do you treat it as the absolute, pure, objective mathematical reality, xor do you adjust the training parameters until the output matches your expectations?
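      A minimal, hypothetical sketch of that loop (synthetic data, generic scikit-learn calls; not any particular team's workflow): the parameter that survives is simply the one whose output best matches the expectation the person tuning it already had.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(2_000, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2_000) > 0).astype(int)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

        best = None
        for C in (0.01, 0.1, 1.0, 10.0):   # the "training parameter" being tuned
            acc = LogisticRegression(C=C).fit(X_tr, y_tr).score(X_val, y_val)
            if best is None or acc > best[1]:
                best = (C, acc)            # keep whatever matches expectations best

        print(f"kept C={best[0]} (validation accuracy {best[1]:.2%})")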