
Comment by giraffe_lady

3 years ago

Didn't see the comments yesterday where HN achieved consensus that racist AI might be real but isn't that bad if it is?

Our hypothetical AI won't make any decisions. It just makes sketches as described and approved by witnesses. The relevant racism here is whatever the witnesses themselves may have, and that's true even with a human police sketch artist.

  • "as described" according to what? There is simply no way to create image from words without something closely resembling decisions. Maybe "it" won't "make" those decisions, but they will be made somewhere.

    • Since you opened with passive-aggressive hints of racism, it's possible that you're not following the thread, or actually reading the replies.

      Please turn your attention to the discussion about the witness's role in the image-generation process. For example:

      Officer: "Could you describe the man who attacked you, miss."

      Witness: "Well, he had ...eyes, a ... forehead, and ..."

      <here's the important part for you, _lady>

      Officer grabs the first rendering from the machine and shows it to the witness: "Did he look like this?"

      Witness: "No, his eyes were set further apart."

      Whir, whir, the machine prints another image.

      Officer: "More like this, then?"

      And so on...

      In the scenario I described, I'm not sure where a new source of racism is introduced.

      Help me see this differently.
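      In code, the loop I have in mind is roughly this (just a rough sketch; generate_image and get_feedback are hypothetical stand-ins for the model and the officer/witness exchange, not a real API):

        # Rough sketch of the officer/witness loop described above, in Python.
        # generate_image() stands in for whatever text-to-image model is used;
        # get_feedback() stands in for the officer relaying the witness's corrections.
        def sketch_session(initial_description, generate_image, get_feedback, max_rounds=10):
            description = initial_description
            image = None
            for _ in range(max_rounds):
                image = generate_image(description)    # whir, whir, the machine prints an image
                correction = get_feedback(image)       # e.g. "his eyes were set further apart"
                if correction is None:                 # witness says it matches
                    break
                description += " " + correction        # fold the correction back into the prompt
            return image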

    • Yeah, somebody will have to evaluate whether the image matches the words, and that is currently done by the witnesses themselves. How is it worse than the current state?

Not really sure you can say AI is "racist".

It can't think, or form opinions. It's not "intelligent" in any real sense.

It's just Eliza with a really, *really* big array of canned responses to interpolate between.

  • > Not really sure you can say AI is "racist".

    > It can't think, or form opinions. It's not "intelligent" in any real sense.

    Honest question, what is the purpose of this comment? What is the change you want to see coming out of this semantic argument?

  • In the racism-as-individual-intentional-malice framework, sure. But I'm a consequentialist on this one. If it causes disparate & unjust outcomes mediated by perceived race, then describing it as racist makes sense. No intent necessary.

  • No one is arguing that the AI has some sort of intentional racism and inherent real intelligence - they aren't trying to anthropomorphize it.

    The argument is that the output is racially discriminatory for a variety of reasons and it's easier to just say "it's racist" than "Many of the datasets that AI is trained on under- or over-represent many ethnic groups" and then dive into the details there.

  • > It's just Eliza with a really, *really* big array of canned responses to interpolate between.

    So, just like people, then.