Comment by vidarh
1 year ago
I agree there aren't any perfect solutions, but a reasonable approach is: 1) if the user specifies, generally accept that (none of these providers will be willing to do so without some safeguards, but for the most part there are few compelling reasons not to); 2) if the user doesn't specify, priority one ought to be consistency with history and setting, and only then should you aim for plausible diversity (a rough sketch of this ordering follows after this comment).
Ask for a nurse? There's no reason every nurse generated should be white, or a woman. In fact, unless you take the requestor's location into account, there's every reason the nurse should be white far less than a majority of the time. If you ask for a "nurse in [specific location]", sure, adjust accordingly.
I want more diversity, and I want them to take it into account and correct for biases, but not when 1) users are asking for something specific, or 2) it distorts history, because neither of those helps the case for diversity or the opposition to systemic racism.
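As a rough sketch, here is what that priority order could look like in code. Everything below is illustrative: the `choose` function, the `Context` fields, and the example weights are assumptions made up for this comment, not any provider's actual logic.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    # Attribute values forced by history or setting, e.g.
    # {"gender": ["man"]} for "medieval pope"; empty when unconstrained.
    historical: dict
    # Plausible distribution for the unconstrained case, e.g.
    # {"gender": {"woman": 0.85, "man": 0.15}} for a nurse
    # (weights illustrative; ideally localized to the requester).
    plausible: dict

def choose(attr: str, user_spec: Optional[str], ctx: Context) -> str:
    # 1) If the user specified, generally accept it (safeguards aside).
    if user_spec is not None:
        return user_spec
    # 2) Priority one: stay consistent with history and setting.
    if attr in ctx.historical:
        return random.choice(ctx.historical[attr])
    # 3) Only then aim for plausible diversity.
    values, weights = zip(*ctx.plausible[attr].items())
    return random.choices(values, weights=weights, k=1)[0]

ctx = Context(historical={}, plausible={"gender": {"woman": 0.85, "man": 0.15}})
print(choose("gender", None, ctx))  # unspecified: sampled from plausible weights
```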
Maybe they should also include explanations of assumptions in the output. "Since you did not state X, Y has been assumed based on [insert stat]" would be useful for a lot more than character ethnicity.
> Maybe they should also include explanations of assumptions in the output.
I think you're giving these systems a lot more "reasoning" credit than they deserve. As far as I know, they don't make assumptions; they just apply a weighted series of probabilities and produce output. They also can't explain why they chose the weights, because they didn't choose them; they were programmed with them.
That depends entirely on how the limits are imposed. E.g. one way of imposing them that definitely does allow you to generate explanations is how GPT imposes additional limitations on the DALL-E output: it generates the DALL-E prompt from the GPT prompt, with extra limitations added by the GPT system prompt. If you need or want explainability, you very much can build scaffolding around the image generation to adjust the output in ways that you can explain.
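A minimal sketch of that kind of scaffolding, assuming a prompt-rewriting layer sitting in front of the image model. The function name, the default tables, and the weights are all hypothetical; the point is only that a layer which injects attributes can log each injection and return the log alongside the rewritten prompt.

```python
import random

# Hypothetical defaults the rewriting layer applies when the user
# leaves an attribute unspecified; weights are illustrative only.
DEFAULTS = {
    "gender": [("woman", 0.5), ("man", 0.5)],
    "age": [("young", 0.3), ("middle-aged", 0.5), ("elderly", 0.2)],
}

def rewrite_prompt(user_prompt: str) -> tuple[str, list[str]]:
    """Rewrite a prompt for the image model, logging each assumption made."""
    assumptions, additions = [], []
    for attribute, options in DEFAULTS.items():
        # Crude check: leave attributes the user mentioned explicitly alone.
        if any(value in user_prompt.lower() for value, _ in options):
            continue
        values, weights = zip(*options)
        choice = random.choices(values, weights=weights, k=1)[0]
        additions.append(choice)
        assumptions.append(
            f"Since you did not state {attribute}, '{choice}' was assumed "
            f"(sampled with weight {dict(options)[choice]})."
        )
    return ", ".join([user_prompt] + additions), assumptions

prompt, notes = rewrite_prompt("a nurse in a hospital corridor")
print(prompt)            # e.g. "a nurse in a hospital corridor, woman, young"
print(*notes, sep="\n")  # the "Since you did not state X..." explanations
```

Because every injected attribute passes through this layer, the "Since you did not state X..." explanation suggested upthread falls out for free.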
Why not just randomize the gender, age, race, etc. and be done with it? That way, if someone is offended or under- or over-represented, it will only be by accident.
The whole point of this discussion is the various counterexamples where Gemini did "just randomize the gender, age, race" and kept generating female popes, African Nazis, Asian Vikings, etc., even when explicitly prompted for the white male version. Not all contexts are, or should be, diverse by default.
I agree. But it sounds like they didn't randomize them; they made it so the characters explicitly can't be white. Random would mean putting all the options into a hat and pulling one out. Randomizing makes sense, at least for non-historical contexts.
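The distinction, as a minimal sketch (the option list is illustrative, not anyone's actual category set):

```python
import random

options = ["white", "Black", "Asian", "Hispanic", "Middle Eastern"]

# Randomizing: every option goes into the hat.
random_pick = random.choice(options)

# What the counterexamples suggest instead: one option is removed
# from the hat before drawing, which isn't random over the full set.
biased_pick = random.choice([o for o in options if o != "white"])

print(random_pick, biased_pick)
```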