Comment by pcdoodle

5 years ago

Cool article.

In a similar vein, by the same author: https://www.mathwashing.com/

(It's linked at the bottom of this one, but I'm sure a lot of people don't get that far)

  • That is my number one pain point right now in higher education.

    Every company has a predictive algorithm to use on students. Every startup stepping into the space is pushing data and data scientists.

    But they all have the same old, often decades-old, baked-in biases. AND they're not doing anything to address it!

    Just because it's math doesn't mean it's not biased. I hate it more than anything professionally, right now.

    • > Just because it's math doesn't mean it's not biased.

      When a model produces an unpalatable result, that doesn't mean it is biased. All these algorithmic fairness people are saying, once you peel back the layers of rhetorical obfuscation, is that we should make ML models lie. Lying helps nobody in the long run.

      4 replies →

  • It takes years and years of training in advanced dialectic bullshit to get to the point where you can say, with a straight face, that math is morally wrong. It's utterly absurd to demand that we censor models and worsen their output to conform to some activist's idealized image of how the world should be. Only by letting models report the true facts of the world as it is can we optimize for human happiness.

    • Math is a language for modelling things. It can be internally correct, as in consistent, but that doesn't say anything about the model's actual validity.

      Every choice we make is a moral choice. Once we're done modelling and actually use that model, we make a moral choice.

      For example, if you believe that lowering debt default rates is more important than fairness to an individual, then you make a moral choice. If you believe it is OK to not give loans to Black applicants because a largish share of Black borrowers default on their loans, that's a moral choice.

      Furthermore, ascribing truth to models is just an age-old human fallacy. The truth can somewhat fit plenty of models. None of the models are the truth.

    • It's not the math that is wrong. The math is correct.

      The inputs and assumptions made by the people selecting the math are the 'morally wrong' part.

      Bias is real, like it or not. Your worldview, as a data scientist or programmer or whatever, impacts what you select as important factors in 'algorithm a'. Algorithm a then selects those factors for other people in the system, baking in your biases, but screening them behind math.
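
      To make that concrete, here is a minimal sketch (all names and numbers are hypothetical, not from any real system) of how a factor chosen by the modeller can smuggle group membership into a scoring rule that never sees the group label directly:

      ```python
      import random

      random.seed(0)

      # Hypothetical synthetic data: each applicant has a group label and a
      # ZIP-code risk feature that correlates with group membership (a proxy),
      # plus an income feature drawn identically for both groups.
      def make_applicant(group):
          zip_risk = 0.8 if group == "B" else 0.2  # proxy: tracks the group
          income = random.gauss(50, 10)            # same distribution for all
          return {"group": group, "zip_risk": zip_risk, "income": income}

      applicants = [make_applicant("A") for _ in range(1000)] + \
                   [make_applicant("B") for _ in range(1000)]

      # "Algorithm a": the modeller chose to include zip_risk as a factor.
      # The group label itself is never used -- but the proxy carries it in.
      def score(a):
          return a["income"] - 30 * a["zip_risk"]

      approved = [a for a in applicants if score(a) > 35]

      def approval_rate(group):
          return sum(a["group"] == group for a in approved) / 1000

      print(f"approval rate A: {approval_rate('A'):.2f}, "
            f"B: {approval_rate('B'):.2f}")
      ```

      The two groups have identical income distributions, yet group B is approved far less often, purely because of the factor the modeller decided was "important". The math in `score` is perfectly correct; the moral choice was made at feature-selection time.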

      2 replies →