Comment by shitpostbot
1 year ago
Seems like all they need to do is, when prompted to generate images of people, ask for clarification: does the user want to constrain the appearance, use the model's default output, or have the prompt modified to reduce biases (or however they'd describe it)? It doesn't even have to be interactive; a note on the side would be enough.
Ultimately the only "real" concern was silently perpetuating biases. As long as it isn't silent and the user is made aware of the options, who cares? You'll never be able to baby-proof these things enough to stop "bad actors" from generating whatever they want without compromising legitimate usage.