Comment by JangoSteve

6 years ago

> There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned.

I'd suggest reading the sources I posted in my comment before responding with ill-conceived notions. Literally every single example I posted linked to peer-reviewed scientific evidence (cited, published literature) supporting the points I summarized.

The only link I posted without peer-reviewed literature was the last one, with the positive outcome, and that's the one I noted had suspect analysis.

Let's consider an example. To avoid sending travelers through unsafe areas, where do you draw the line in the following list?

1. Google's routing algorithm is conditioned on demographics

2. Google's routing algorithm is conditioned on income/wealth

3. Google's routing algorithm is conditioned on crime density

4. Google's routing algorithm cannot condition on anything that would disproportionately route users away from minority neighborhoods

I think the rational choice, to avoid forcing other people to take risks they may object to, is somewhere between 2 and 3. But the current social zeitgeist seems only to allow for option 4, since an optimally sampled dataset will have very strong correlations among options 1-3, to the point that in most parts of the US they would all produce the same routing bias.
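To make the hypothetical concrete, here is a minimal Python sketch of what "conditioning" the routing cost on each option would mean. Everything in it (feature names, weights, values) is invented for illustration; it is not Google's algorithm. The point is that options 1-3 differ only in which feature feeds the same penalty term, so correlated features yield nearly the same routes:

```python
# Hypothetical sketch, not Google's algorithm: a routing edge cost that is
# "conditioned" on one feature per option from the list above. All feature
# names, weights, and values are invented for illustration.

def edge_cost(edge, option, penalty_weight=10.0):
    """Base travel time plus a safety penalty derived from one feature."""
    base = edge["travel_time_min"]
    if option == 1:    # condition on demographics
        penalty = edge["minority_share"]
    elif option == 2:  # condition on income/wealth (lower income -> higher penalty)
        penalty = 1.0 - edge["median_income_norm"]
    elif option == 3:  # condition on crime density
        penalty = edge["crimes_per_km2_norm"]
    else:              # option 4: no safety conditioning at all
        penalty = 0.0
    return base + penalty_weight * penalty

# Because these features are strongly correlated in real-world data, a
# neighborhood that scores high on one tends to score high on the others,
# so options 1-3 end up penalizing the same edges:
edge = {
    "travel_time_min": 5.0,
    "minority_share": 0.80,       # correlated with the two values below
    "median_income_norm": 0.20,
    "crimes_per_km2_norm": 0.75,
}
for opt in (1, 2, 3, 4):
    print(f"option {opt}: cost = {edge_cost(edge, opt):.1f}")
```

Under these made-up numbers, options 1-3 all land within half a minute of each other while option 4 ignores safety entirely, which is the correlation point above.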

  • This is exactly why I suggested actually reading the sources I posted before responding. The Google example has nothing to do with routing travelers. It was an algorithm designed to detect sentiment in online comments and auto-delete any comment classified as hate speech. The problem was that it misclassified entire dialects of English (meaning it completely failed at determining sentiment for certain people), deleting all comments from people of certain cultures (unfairly, disproportionately censoring a group of people). That's the dictionary definition of bias.
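    For what it's worth, the kind of bias described here is usually quantified as an error-rate gap across groups. Here's a minimal sketch of how such a dialect audit works, assuming hypothetical data and a stand-in classifier (the real model and dataset aren't reproduced here):

    ```python
    # Minimal sketch of auditing a hate-speech classifier for dialect bias:
    # compare false-positive rates (benign comments wrongly deleted) across
    # dialect groups. The data and the classify() stub are hypothetical.
    from collections import defaultdict

    def classify(comment: str) -> bool:
        # Stand-in for the real model; True means "flagged as hate speech".
        return "flagged" in comment

    samples = [
        # (comment text, dialect group, ground truth: actually hate speech?)
        ("benign comment that gets flagged", "dialect_A", False),
        ("another benign comment", "dialect_A", False),
        ("benign comment", "dialect_B", False),
    ]

    false_pos = defaultdict(int)  # benign comments wrongly flagged, per group
    benign = defaultdict(int)     # benign comments total, per group
    for text, group, is_hate in samples:
        if not is_hate:
            benign[group] += 1
            if classify(text):
                false_pos[group] += 1

    for group in benign:
        print(group, "false-positive rate:", false_pos[group] / benign[group])
    ```

    A large gap between groups (here dialect_A at 0.5 vs dialect_B at 0.0) is exactly the "entire dialects misclassified" failure described above.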

    • You're completely missing my point, and the purpose of my hypothetical. So let me try it with your example:

      > The problem was that it misclassified entire dialects of English (meaning it completely failed at determining sentiment for certain people), deleting all comments from people of certain cultures

      What happens when a particular culture genuinely is more hateful? Do we just disregard any data that indicates a socially unacceptable bias?

      What, only Nazis are capable of hate speech?
