Comment by Loughla
5 years ago
>When a model produces an unpalatable result, that doesn't mean it is biased.
Absolutely, but as far as I can tell there is no admission from ML or predictive [buzzword here] companies that bias is, or even could be, a thing in their systems.
>All these algorithmic fairness people are saying, once you peel back the layers of rhetorical obfuscation, is that we should make ML models lie. Lying helps nobody in the long run.
Maybe I'm misunderstanding, but that is not at all what I am saying as an 'algorithmic fairness' person. I am saying that we need strict oversight and controls on the building and execution of algorithms when they make substantive decisions about people.
For example: It's okay if an algorithm predicting student success says that all the minority students on my campus are at a higher risk of dropping out. That is a data point. Historically, minority students drop out at a higher rate. Sure. Not great, but it is factually true.
What is not okay is for the 'predictive analytics' company to sell their product in conjunction with a 'tracking' product that limits minority students' access to selective admissions programs simply because those programs are selective, more difficult, and historically have a higher percentage of minority students who drop out.
I guess what I'm saying is that ML models shouldn't lie. But they also shouldn't be seen as the truth above all truths. Because they're not. They're just data, interpreted through the lens of whoever built the models.
Every human carries a bias, everyone. It's how we define ourselves as 'self' and others as 'other' at a basic level.
Therefore, everything we build, especially when it's meant to be intuitive, may carry those biases forward.
I'm only saying we need to be aware of that, acknowledge it, and put appropriate controls and oversight in place so those biases aren't inappropriately exacerbated.
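As one rough, purely illustrative sketch of the kind of control being described, here is a minimal group-disparity audit on a hypothetical dropout-risk model's outputs. The data, field names, and the 0.8 rule-of-thumb threshold are assumptions for illustration, not anything from the original comment:

```python
# Minimal sketch of a group-disparity audit for a dropout-risk model.
# All data, field names, and the 0.8 rule-of-thumb threshold are
# illustrative assumptions.

from collections import defaultdict

# Hypothetical model outputs: each record has a group label and the
# model's "high dropout risk" flag.
predictions = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": False},
]

# Rate at which each group is flagged as high risk.
counts = defaultdict(lambda: {"flagged": 0, "total": 0})
for rec in predictions:
    counts[rec["group"]]["total"] += 1
    counts[rec["group"]]["flagged"] += int(rec["flagged"])

rates = {g: c["flagged"] / c["total"] for g, c in counts.items()}
print("Flag rates by group:", rates)

# Disparity ratio: lowest group rate divided by highest. A common
# rule of thumb treats a ratio below 0.8 as a signal to review how
# the model's output is used downstream (e.g., gating program access).
ratio = min(rates.values()) / max(rates.values())
print(f"Disparity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Review how these flags feed into decisions about students.")
```

The point of such a check isn't to make the model "lie" about historical rates; it's the kind of oversight that flags when a prediction is being turned into a decision that compounds the disparity.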