It might output a much more detailed image than a human-drawn sketch which could be less useful or more damaging than the vague sketch.
Imagine a police officer looking for someone who matches the image without knowing it was hallucinated from a vague description: they could let the real suspect go, or wrongly arrest someone who happens to resemble the AI-generated image but otherwise has no reason to be a suspect.
Police already greatly overestimate the accuracy of their own facial recognition tools because they don't understand the technology's limits, and this would only make that worse.
>It might output a much more detailed image than a human-drawn sketch
That's not a necessary property of AI image generation. You could just add an "output as a sketch" instruction to the system prompt.
Lack of accountability.
Accountability for what? As I recall, procedure already requires the witness to approve the final sketch. Witnesses can always make mistakes, but that's true of the current process too. Or is your argument that sketches should never be used?
In fairness, with the ubiquity of cameras, sketches are much less required...
Police in your jurisdiction are held accountable?
Didn't see the comments yesterday where HN achieved consensus that racist AI might be real but isn't that bad if it is?
Our hypothetical AI won't make any decisions. It just produces sketches as described and approved by witnesses. The relevant racism here is whatever the witnesses themselves may have, and that's true even with a human police sketch artist.
"as described" according to what? There is simply no way to create an image from words without something closely resembling decisions. Maybe "it" won't "make" those decisions, but they will be made somewhere.
Not really sure you can say AI is "racist".
It can't think, or form opinions. It's not "intelligent" in any real sense.
It's just Eliza with a really, *really* big array of canned responses to interpolate between.
> Not really sure you can say AI is "racist".
> It can't think, or form opinions. It's not "intelligent" in any real sense.
Honest question, what is the purpose of this comment? What is the change you want to see coming out of this semantic argument?
In the racism-as-individual-intentional-malice framework sure. But I'm a consequentialist on this one. If it causes disparate & unjust outcomes mediated by perceived race then describing it as racist makes sense. No intent necessary.
No one is arguing that the AI has some sort of intentional racism and inherent real intelligence - they aren't trying to anthropomorphize it.
The argument is that the output is racially discriminatory for a variety of reasons and it's easier to just say "it's racist" than "Many of the datasets that AI is trained on under- or over-represent many ethnic groups" and then dive into the details there.
> It's just Eliza with a really, *really* big array of canned responses to interpolate between.
So, just like people, then.