
Comment by fvdessen

1 year ago

What I find baffling as well is how casually people use 'whiteness' as if it were an intellectually valid concept. What does one expect to receive when asking for a picture of a white woman? A Swedish blonde? An Irish redhead? A French brunette? A Southern Italian? A Lebanese? An Iranian? A Berber? A Moroccan? A Russian? A Palestinian? A Greek? A Turk? An Arab? Can anyone tell which of those is white or not, and also tell all these people apart? What is the use of a concept that puts the Irish and the Greek in the same basket but excludes a Lebanese?

'White' is a term so loaded with prejudice and so varied across cultures that I'm not surprised an AI used internationally would refuse to touch it with a ten-foot pole.

You are getting far too philosophical for how over-the-top ham-fisted Gemini was. If your only exposure to this is the linked Verge article, I understand. But the examples going around Twitter this week were comically poor.

Were Germans in the 1800s Asian, Native American, and Black? Were the founding fathers all non-White? Are country musicians majority non-White? Are drill rap musicians 100% Black women? Etc.

The system prompt was artificially injecting diversity that didn't exist in the training data (possibly OK if done well)... but only in one direction.

If you asked for something where the training data was majority White, it would inject majority non-White or possibly 100% non-White results. If you asked for something where the training data was majority non-White, it didn't adjust the results, unless the results skewed too male, in which case it would inject female, and so on.
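
To illustrate the asymmetry, here's a purely hypothetical sketch; Google hasn't published how Gemini rewrites prompts, and the inject_diversity() helper and topic lists below are invented. The behavior people observed is consistent with a one-directional rule along these lines:

    # Hypothetical sketch of the one-directional rewriting described above.
    # NOT Google's implementation; the topic lists are made up.
    MAJORITY_WHITE_TOPICS = ["founding fathers", "country musician"]
    MAJORITY_MALE_TOPICS = ["drill rapper"]

    def inject_diversity(prompt: str) -> str:
        lowered = prompt.lower()
        if any(t in lowered for t in MAJORITY_WHITE_TOPICS):
            # Fires in one direction only: White -> non-White.
            return prompt + ", ethnically diverse"
        if any(t in lowered for t in MAJORITY_MALE_TOPICS):
            # Likewise male -> female only; nothing runs the other way.
            return prompt + ", female"
        return prompt  # everything else passes through untouched

Note there is no rule running in the opposite direction; that asymmetry is the whole complaint.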

Politically it's silly, and as a consumer product it's hard to understand the usefulness of this.

I'm with you right up until the last part.

If they don't feel comfortable putting all White people in one group, why are they perfectly fine shoving all Asians, Hispanics, Africans, etc. into their own specific groups?

  • The irony is that the training sets are tagged well enough for the models to capture nuanced features and distinguish groups by name. However, a customer only using terms like white or black will never see any of that.

    Not long ago, a blogger wrote an article complaining that prompting for "$superStylePrompt photographs of African food" only yielded fake, generic restaurant-style images. Maybe they didn't have the vocabulary to do better, but if you prompt for "traditional Nigerian food" or jollof rice, guess what you get pictures of?

    The same goes for South Asian, Southeast Asian, and Pacific Islander groups. If you ask for a Gujarati kitchen or a Kyoto ramen-ya, you get locale-specific details, architectural features, and people. Same if you use "Nordic" or "Chechen" or "Irish".

    The results of generative AI are a clearer reflection of us and our own limitations than of the technology's. We could purge the datasets of certain tags, or replace them with more explicit skin-melanin descriptors, but then it wouldn't fabricate subjective diversity in the "the entire world is a melting pot" way that someone feels defines positive inclusivity.

  • I think it was Men In Black, possibly the cartoon, which parodied racism by having an alien say "All you bipeds look the same to me". And when Stargate SG-1 came out, some of the journalism about it described the character Teal'c as "African-American" just because the actor Christopher Judge, playing Teal'c, was.

    So my guess as to why is that all this is being done from the perspective of central California, with the politics and ethical views of that place at this time. If the valley in "Silicon Valley" had been the Rhine rather than Santa Clara, then the different perspective would simply have meant different, rather than no, issues: https://en.wikipedia.org/wiki/Strafgesetzbuch_section_86a#Ap...

A Swedish blonde? Yes. An Irish redhead? Yes. A French brunette? Yes. A Southern Italian? Yes. A Lebanese? No. An Iranian? No. A Berber? No. A Moroccan? No. A Russian? Yes. A Palestinian? No. A Greek? Yes. A Turk? No. An Arab? No.

You might quibble with a few of them but you might also (classic example) quibble over the exact definition of "chair". Just because it's a hairy, complicated, subjective term, subject to social and political dynamics, does not make it entirely meaningless. And the difficulty of drawing an exact line between two things does not mean that they are the same. Image generation based on prompts is so super fuzzy and so open to multiple interpretations that I don't see why the concept of "whiteness" would present any special difficulty.

I offer my sincere apologies that this reply is probably a bit tasteless, but I firmly believe the fact that any possible counterargument can only be tasteless should not lead to accepting any proposition.

  • There are plenty of Iranians, Berbers, Palestinians, Turks, and Arabs who, if they were walking down the street in NYC dressed in jeans and a t-shirt, would be recognized only as "white." I'm not sure on what basis you excluded them.

    For example: https://upload.wikimedia.org/wikipedia/commons/c/c8/2018_Teh... (Iranian)

    https://upload.wikimedia.org/wikipedia/commons/9/9f/Turkish_... (Turkish)

    https://upload.wikimedia.org/wikipedia/commons/b/b2/Naderspe... (Nader was the son of Lebanese immigrants)

    Westerners frequently misunderstand this but there are a lot of "white" ethnic groups in the Middle East and North Africa; the "brown" people there are usually due to the historic contact southern Arabia had with Sub-Saharan Africa and later invasions from the east. It’s a very diverse area of the world.

  • > A Swedish blonde? Yes. An Irish redhead? Yes. A French brunette? Yes. A Southern Italian? Yes. A Lebanese? No. An Iranian? No. A Berber? No. A Moroccan? No. A Russian? Yes. A Palestinian? No. A Greek? Yes. A Turk? No. An Arab? No.

    > You might quibble with a few of them but you might also (classic example) quibble over the exact definition of "chair".

    This is only the case if you substitute "white" with "European", which I guess is one way to resolve the ambiguity, in the same way that one might say that only office chairs are chairs, to resolve the ambiguity about what a chair is. But other people (e.g. a manufacturer of non-office chairs) would have a problem with that redefinition.

    • Ya, it's hard to be sure, when people express disdain and/or hatred of "white" people, whether they are or aren't including Arabs. /rolleyes


Well, I think the issue here is that it was hesitant to generate white people in any context. You could request, for example, a medieval English king and it would generate black women and Asian men. I don't think your criticism really applies there.

It's not just that. All of those could be white or not, but an AI shouldn't refuse to respond to a prompt based on prejudice, or give wrong answers.

https://twitter.com/nearcyan/status/1760120615963439246

In this case it is asked to create an image of a "happy man" and returns a woman, and there is no reason to do that.

People are focusing too much on the "white people" thing, but the problem is that Gemini is refusing to answer prompts or giving wrong answers.

  • Yes, it was doing gender swaps too... and again, only in ONE direction.

    For example if you asked for a "drill rapper" it showed 100% women, lol.

    It's like some hardcoded directional bias, lazily implemented.

    Even as someone in favor of diversity, one shouldn't be in favor of such a dumb implementation. It just makes us look like idiots and is fodder for the orange man & his ilk, with "replacement theory" and "cancel culture" and every other manufactured drama that... unfortunately... the blue team leans into and validates from time to time.

How would you rewrite "white American"? "American" will get you black people etc. as well. And you don't know their ancestry; it's just a white American, and likely they aren't from any single place.

So white makes sense as a concept in many contexts.

Absolutely, it's such an American-centric way of thinking. Which given the context is really ironic.

  • It's not just US-centric, it is also just wrong. What's considered white in the US wasn't always the same, especially in the founding years.

And yet, Gemini has no issues generating images for a "generic Asian" person or for a "generic Black" person. Even though the variation in those groups is even greater than in the group of "generic White".

Moreover, Gemini has no issues generating stereotypical images of those other groups (barely split into perhaps 2 to 3 stereotypes). And not just that, but US stereotypes for those groups.

  • Yeah, it’s obviously screwed up, which I guess is why they’re working on it. I wonder how it got past QA? Surely the “red teaming” exercise would have exposed these issues. Heh, maybe the red team testers were so biased they overlooked the issues. The ironing is delicious.

    • >I wonder how it got past QA?

      If we take Michael Bolton's definition, "Quality is value to some person who matters", then it's very obvious exactly how it did.

      It fit an executive's vision and got greenlighted.

Absolutely, I remember talking about this a while ago in the context of one of the other image generation tools. I think the prompt was something like "Generate an American person" and it only came back with a very specific type of American person. But it's like... what is the right answer? Do you need to consult the census? Do we need the AI image generator to reproduce the exact demographics of the last census? Even if it did, I bet it'd generate 10 WASP men in a row at some point and whoever was prompting it would post it on Twitter.

It seems obvious to me that this is just not a solvable problem, and the AI companies are going to have to find a way to justify to the public why they're not going to play this game; otherwise they are going to tie themselves up in knots.
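
For what it's worth, the "10 WASP men in a row" intuition checks out: even a generator that sampled demographics i.i.d. from census-style proportions would still emit homogeneous batches now and then. A quick sketch, using made-up proportions rather than real census figures:

    import random

    # Illustrative, invented proportions -- not actual census data.
    DEMOGRAPHICS = {"white": 0.60, "hispanic": 0.19, "black": 0.13,
                    "asian": 0.06, "other": 0.02}

    def sample_batch(n: int = 10) -> list[str]:
        groups, weights = zip(*DEMOGRAPHICS.items())
        return random.choices(groups, weights=weights, k=n)

    # With these numbers, P(a batch of 10 is all "white") = 0.60 ** 10,
    # about 0.6%, i.e. roughly 1 in 165 batches, even under perfectly
    # calibrated sampling.

So even the "consult the census" answer would eventually produce the screenshot that ends up on Twitter.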

  • But there are thousands of such ambiguities that the model resolves on the fly, and we don't find an issue with them. Ask it to "generate a dog in a car", and it might show you a labrador in a sedan in one generation, a poodle in a coupe in the next, etc. If we care about such details, then the prompt should be more specific.

    But, of course, since race is a sensitive topic, we think that this specific detail is impossible for it to answer correctly. "Correct" in this context is whatever makes sense based on the data it was trained on. When faced with an ambiguous prompt, it should cycle through the most accurate answers, but it shouldn't hallucinate data that doesn't exist.

    The only issue here is that it clearly generates wrong results from a historical standpoint, i.e. it's a hallucination. A prompt might also ask it to generate incoherent results anyway, but that shouldn't be the default result.

    • But this is a misunderstanding of what the AI does. When you say "Generate me diverse senators from the 1800s" it doesn't go to wikipedia, find out the names of US Senators from the 1800s, look up some pictures of those people and generate new images based on those images. So even if it generated 100% white senators it still wouldn't be generating historically accurate images. It simply is not a tool that can do what you're asking for.


Don't forget that whiteness contracts and expands depending on the situation, location, and year. It fits in extremely well with the ever-shrinking "us against them" that results from fascism. Even the German understanding of Aryan (and the racial ranking below it) was not very consistent and was ever-changing. They considered the Greeks (and Italians) not white, yet still looked up to a nonexistent, idealized "historical" Greek white person.

>What does one expect to receive when asking for a picture of a white woman? A Swedish blonde? An Irish redhead?

Certainly not a black man! Come on, this wouldn't be news if it got it "close enough". Right now it gets it so hilariously wrong that it's safe to assume they're actively touching this topic rather than refusing to touch it.

I can't tell you the name of every flower out there, but if you show me a chicken I sure as hell can tell you it isn't a dandelion.

It could render a diverse set of white people, for example. Or just pick one. Or you could ask for one of those people you listed.

Hats are also diverse, loaded with prejudice, and varied across cultures. Should they be removed as well from rendered images?

Worth noting this also applies to the term "black". A Somali prize fighter, a Zulu businesswoman, a pygmy hunter-gatherer, and a successful African American rapper don't have much in common and look pretty different.

That's BS, because it clearly understands what is meant and is able to describe it with words, but just refuses to generate the image. Even funnier, it starts to respond, then stops itself and gives the more "grounded" answer that it is sorry and it cannot generate the image.

It's just a skin color. The AI is free to choose whatever representation of it it wants. The issue here wasn't with people prompting images of a "white person", but of someone who is historically represented by a white person. So one would expect that kind of image, rather than something that might be considered racially diverse today.

I don't see how you can defend these results. There shouldn't be anything controversial about this. It's just another example of inherent biases in these models that should be resolved.