Comment by yyyk

3 years ago

Our hypothetical AI won't make any decisions. It just makes sketches as described and approved by witnesses. The relevant racism here is whatever the witnesses themselves may have; that's true even with a human police sketch artist.

"as described" according to what? There is simply no way to create an image from words without something closely resembling decisions. Maybe "it" won't "make" those decisions, but they will be made somewhere.

  • Since you opened with passive-aggressive hints of racism, it's possible that you're not following the thread, or not actually reading the replies.

    Please turn your attention to the discussion about the witness's role in the process of image generation. For example:

    Officer: "Could you describe the man who attacked you, miss?"

    Witness: "Well, he had ...eyes, a ... forehead, and ..."

    <here's the important part for you, lady>

    Officer grabs the first rendering from the machine and shows it to the witness: "Did he look like this?"

    Witness: "No, his eyes were set further apart."

    Whir, whir, the machine prints another image.

    Officer: "More like this, then?"

    And so on...

    In the scenario I described, I'm not sure where a new source of racism is introduced.

    Help me see this differently.
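
    The back-and-forth above amounts to a simple refine-until-approved loop. Here's a minimal sketch of that loop in Python; `generate_image` and `witness_feedback` are hypothetical stand-ins for the machine and the witness, not any real API:

    ```python
    def sketch_session(generate_image, witness_feedback):
        """Iteratively refine a sketch from witness feedback.

        generate_image(description) -> image: hypothetical stand-in
            for the rendering machine.
        witness_feedback(image) -> correction or None: hypothetical
            stand-in for the witness; None means "yes, that's him".
        """
        description = "initial witness description"
        while True:
            image = generate_image(description)
            correction = witness_feedback(image)
            if correction is None:
                return image  # the witness approves this rendering
            # fold the correction back into the description and retry
            description += "; " + correction
    ```

    The point of the structure: every judgment call about whether the image matches the memory sits in `witness_feedback`, i.e. with the witness, exactly as in the officer dialogue above.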

  • Yeah, somebody will have to evaluate whether the image matches the words, and that is already done by the witnesses themselves. How is this worse than the current state?