Comment by Jensson
1 year ago
> The sensible alternative explanation is that this issue is an overcorrection made in an attempt to address well-documented biases these models have when not fine tuned.
That is what all these people are arguing, so you agree with them here. If people didn't complain then this wouldn't get fixed.
There are some people who are arguing this point, with whom I agree. There are others who are arguing that this is indicative of some objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
> objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
When I asked Gemini to "generate an image of an all-black male basketball team", it gladly generated an image exactly as prompted. When I replaced "black" with "white", Gemini refused to generate the image on the grounds of being inclusive and less divisive.
> stance held by Google that genuinely views generating images of white people as divisive.
There's no argument here; it literally gives this as the reason when asked.
You are equating the output of the model with the views of its creators. This incident may demonstrate some underlying dysfunction within Google but it strains credulity to believe that the creators actually think it is objectionable to generate an image depicting a white person.
> There are others who are arguing that this is indicative of some objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
I never saw such a comment. Can you link to it?
What people are saying is that Google is refusing to generate images of white people due to "wokeness", which is the same explanation you gave in different words: "wokeness" made them turn this dial until the model no longer generated images of white people, and they would never have shipped a model in this state otherwise.
When people talk about "wokeness" they typically mean this kind of overcorrection.
"Wokeness" is a politically charged term typically used by people of a particular political persuasion to describe people with whom they disagree.
If you asked the creators of Gemini why they altered the model from its initial state such that it produced the observed behavior, I'm sure they would tell you that they were attempting to correct undesirable biases that existed in the training set, not "we're woke!". This is the issue I'm pointing out. Rather than viewing this incident as an honest mistake, many commenters seem to want to impute malice, or to use it as evidence for their preconceived notions about the overall ideological stance of an organization with more than 100,000 employees.