Comment by ajross

1 year ago

Oooph again. Which is the root of the problem. The statement "All American criminals are black" is, OK, maybe true to first order (I don't have stats and I'm not going to look for them).

But, first, on a technical level, first-order reasoning like that leads to bad decisions. And second, it's clearly racist. And people don't want their products being racist. That desire is pretty clear, right? It's not "systemic racism" to want that, right?

>"All American criminals are black"

I'm not even sure it's worth arguing, but who ever says that? Why go to a strawman?

However, looking at the data, if you see that X race commits crime (or is the victim of crime) at a rate disproportionate to their place in the population, is that racist? Or is it useful to know to work on reducing crime?

  • > I'm not even sure it's worth arguing, but who ever says that? Why go to a strawman?

    The grandparent post called a putative ML model that guessed all criminals were black a "wise guess". I think you just missed the context in all the culture-war flaming?

    • I didn't say "assuming all criminals are black is a wise guess." What I meant to point out was that even if black people constituted only 51% of the prison population, the model would still be making a statistically sound guess by returning an image of a black person.

      Now if you asked for 100 images of criminals and all of them were black, that would no longer be statistically sound.
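
      The distinction being drawn in those last two paragraphs is essentially mode prediction versus sampling from the distribution. A minimal sketch, using made-up 51/49 numbers purely for illustration (not real statistics):

```python
import random

random.seed(0)

# Hypothetical demographic split (illustrative numbers only, not real data):
# 51% group A, 49% group B.
distribution = {"A": 0.51, "B": 0.49}

def mode_guess(dist):
    """Always return the single most likely category (best guess for ONE query)."""
    return max(dist, key=dist.get)

def sampled_guess(dist):
    """Draw a category in proportion to its probability (inverse-CDF sampling)."""
    r = random.random()
    cumulative = 0.0
    for category, p in dist.items():
        cumulative += p
        if r < cumulative:
            return category
    return category  # guard against floating-point round-off

# For a single query, guessing the mode wins 51% of the time vs 49%.
assert mode_guess(distribution) == "A"

# Over 100 queries the two strategies diverge: the mode strategy returns
# "A" every single time, while sampling reproduces the 51/49 split.
mode_results = [mode_guess(distribution) for _ in range(100)]
sampled_results = [sampled_guess(distribution) for _ in range(100)]

print(mode_results.count("A"))     # 100 -- every image from one group
print(sampled_results.count("A"))  # close to 51, varies with the seed
```

      That is, the argmax answer can be the optimal single guess while a run of 100 identical argmax answers badly misrepresents the underlying distribution.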