Comment by imiric

1 year ago

I'm not arguing from a technical perspective, but from a logical one as a user of these tools.

If I ask it to generate an image of a "person", it presumably understands what I mean based on its training data. The output should fit the description of "person", but it should be free to choose every other detail _also_ based on its training data: the person's sex, skin color, hair color, eye color, and so on, just as it decides on the background and everything else in the image. That is, when faced with ambiguity, it should make a _plausible_ decision.
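To make "plausible decision" concrete, here's a toy sketch of the idea. This is not how any real image model works internally; the attribute names and frequencies are invented purely for illustration. The point is just that unspecified details get sampled in proportion to how common they are, while explicit requests are honored:

```python
import random

# Made-up frequencies standing in for what a model might have
# learned from its training data (values are hypothetical).
hair_color_freq = {
    "black": 0.75, "brown": 0.15, "blond": 0.08,
    "red": 0.019, "purple": 0.001,  # near-zero, but not zero
}

def pick_attribute(attribute_freq, requested=None):
    """If the prompt pins the attribute down, honor it; otherwise
    sample a value in proportion to how common it is."""
    if requested is not None:
        return requested
    values = list(attribute_freq)
    weights = list(attribute_freq.values())
    return random.choices(values, weights=weights)[0]

# "a person" -> a plausible hair color almost every time
print(pick_attribute(hair_color_freq))
# "a person with purple hair" -> the explicit request wins
print(pick_attribute(hair_color_freq, requested="purple"))
```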

But it _definitely_ shouldn't show me a person with purple skin and no eyes, because that's not grounded in reality[1], unless I specifically ask for it.

If the technology can't give us these assurances, that's clearly an issue that should be resolved. I'm not an AI engineer, so how to do that is outside my wheelhouse.

[1]: Or, at the very least, very few people have ever matched that description, so there should be a very small chance of it producing such output.