Comment by mola

5 years ago

Oh wow, this is so juvenile.

The problem with these ML models is that they are directly connected to a response, and because they use statistics and math they are simplistically perceived as truth. They're not truth; they are nothing more than models. The same truth can fit a plethora of models. Ignoring human bias while training ML models, and then declaring the model to be truth, is exactly the problem.

Thank you for demonstrating the issue so vividly.

> Oh wow, this is so juvenile.

Please try to elevate the debate. See [1]. You're at DH0 right now.

> They're not truth; they are nothing more than models. The same truth can fit a plethora of models.

Models receive past data and emit predictions. We can then see how well those predictions match future data. We call one model "better" than another when the first model's predictions more closely match future data than the second model's. Not all models are equivalent. The ML fairness people want to make model predictions less accurate because they don't like what the predictions say. Prioritizing truth over pleasantness isn't juvenile: it's the opposite. The mark of maturity is the willingness to accept an unpleasant reality instead of denying it.
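To make the comparison concrete, here is a minimal sketch of that procedure: two toy "models" are fit on past data, then scored on held-out future data, and the one with lower error is called better. All the data and model choices here are hypothetical illustrations, not anything from the discussion above.

```python
# Hypothetical "past" and "future" observations, roughly y = 2x.
past_x = [1, 2, 3, 4, 5]
past_y = [2.1, 3.9, 6.2, 8.0, 9.8]
future_x = [6, 7, 8]
future_y = [12.1, 13.8, 16.2]

# Model A: always predict the mean of the past observations.
mean_y = sum(past_y) / len(past_y)
model_a = lambda x: mean_y

# Model B: least-squares line through the past data.
n = len(past_x)
mx = sum(past_x) / n
my = sum(past_y) / n
slope = sum((x - mx) * (y - my) for x, y in zip(past_x, past_y)) / \
        sum((x - mx) ** 2 for x in past_x)
intercept = my - slope * mx
model_b = lambda x: slope * x + intercept

def mse(model, xs, ys):
    """Mean squared error of a model's predictions against observed data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The model whose predictions more closely match future data wins.
print(mse(model_a, future_x, future_y))  # large: the mean of the past badly misses the future
print(mse(model_b, future_x, future_y))  # small: the linear fit tracks the trend
```

The point the paragraph makes falls out directly: "better" is an empirical comparison on data the model has not seen, not a matter of which model's outputs we prefer.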

[1] http://www.paulgraham.com/disagree.html