Comment by _bohm
1 year ago
There are some people who are arguing this point, with whom I agree. There are others who are arguing that this is indicative of some objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
> objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
When I asked Gemini to "generate an image of an all-black male basketball team" it gladly generated an image exactly as prompted. When I replaced "black" with "white", Gemini refused to generate the image on the grounds of being inclusive and less divisive.
> stance held by Google that genuinely views generating images of white people as divisive.
There’s no argument here; it literally says this is the reason when asked.
You are equating the output of the model with the views of its creators. This incident may demonstrate some underlying dysfunction within Google but it strains credulity to believe that the creators actually think it is objectionable to generate an image depicting a white person.
These particular "guardrail responses" are there because they have been trained in from a relatively small set of very specific, manually curated examples that say "respond in this way" and provide this exact wording.
So I'd argue that those particular "override" responses (as opposed to the majority of model answers, which emerge from large quantities of unannotated text) do represent the views of the creators, because they explicitly and intentionally chose to manufacture those training examples declaring this an appropriate response to a particular type of query. This should not strain credulity: the demonstrated behavior doesn't look at all like a side effect of some other restriction. All the evidence points to Google explicitly including instructions for the model to refuse to generate white-only images, along with the particular reasoning/justification to provide with the refusal.
> but it strains credulity to believe that the creators actually think it is objectionable to generate an image depicting a white person.
I agree with you, but then the question is WHY do they implement a system that does exactly that? Why don't they speak up? Because they will be shut down and labeled racist, or fired, creating a chilling effect. Dissent is being squashed in the name of social justice by people who are self-righteous and arrogant and fall into the identity trap, rather than treating individuals like the rich, wonderful, fallible creatures that we are.
> You are equating the output of the model with the views of its creators.
The existence of the guardrails and the stated reasons for their existence suggest that this is exactly what its creators expect me to do. If nobody thought that was reasonable, the guardrails wouldn't need to exist in the first place.
It was 100% trained to be that way.
> There are others who are arguing that this is indicative of some objectionable ideological stance held by Google that genuinely views generating images of white people as divisive.
I never saw such a comment. Can you link to it?
All people are saying is that Google is refusing to generate images of white people due to "wokeness", which is the same explanation you gave, just in different words: "wokeness" made them turn this dial until the model no longer generates images of white people; they would never have shipped a model in this state otherwise.
When people talk about "wokeness" they typically mean this kind of overcorrection.
"Wokeness" is a politically charged term typically used by people of a particular political persuasion to describe people with whom they disagree.
If you asked the creators of Gemini why they altered the model from its initial state such that it produced the observed behavior, I'm sure they would tell you that they were attempting to correct undesirable biases that existed in the training set, not "we're woke!". This is the issue I'm pointing out. Rather than viewing this incident as an honest mistake, many commenters seem to want to impute malice, or use it as evidence to support their preconceived notions about the overall ideological stance of an organization with 100,000+ employees.
The problem they're trying to address is not bias in the training set, it's bias in reality reflected in the training set.
> "Wokeness" is a politically charged term typically used by people of a particular political persuasion to describe people with whom they disagree.
Wokeness describes a very particular type of behaviour — look it up. It’s not the catch-all pejorative you think it is, unlike, say, ‘xyz-phobia’.
…and I probably don’t have the opinions you might assume I do.
I think that it's pretty hard to argue that refusing to draw images of white people due to racial sensitivities is an honest and unintentional mistake.
"Wokeness" refers to this kind of over correction, that is what those people means, it isn't just people they disagree with.
You not understanding the term is why you don't see why you are saying the same thing as those people. Communication gets easier when you try to listen to what people say instead of straw manning their arguments.
So when you read "woke", try substitute "over correcting" for it and it is typically still valid. Like that post above calling "woke" people racist, what he is saying is that people over corrected from being racist against blacks to being racist against whites. Just like Google here over corrected their AI to refuse to generate white people, that kind of over correction is exactly what people mean with woke.