Comment by mrtksn

1 year ago

For context: There was an outcry on social media after Gemini refused to generate images of white people, leading to historically inaccurate images being generated.

Though the issue might be more nuanced than the mainstream narrative, it had some hilarious examples. Of course the politically sensitive people are waging war over it.

Here are some popular examples: https://dropover.cloud/7fd7ba

I believe this to be a symptom of a much, much deeper problem than "DEI gone too far". I'm sure that without whatever system is preventing Gemini from producing pictures of white people, it would be extremely biased towards generating pictures of white people, presumably due to an incredibly biased training data set.

I don't remember which one, but there was some image generation AI which was caught pretty much just appending the names of random races to the prompt, to the point that prompts like "picture of a person holding up a sign which says" would show pictures of people holding signs with the words "black" or "white" or "asian" on them. This was also a hacky workaround for the fact that the data set was biased.
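The workaround described above can be sketched in a few lines. This is a hypothetical reconstruction, not any vendor's actual code: the term list, injection rate, and function name are all illustrative assumptions. The point is that appending a qualifier after the user's text fails visibly when the prompt ends mid-sentence, because the injected word becomes the text on the sign.

```python
import random

# Illustrative list of demographic qualifiers the service might inject.
DIVERSITY_TERMS = ["black", "white", "asian", "hispanic"]

def rewrite_prompt(prompt: str, inject_rate: float = 0.5) -> str:
    """Silently append a random demographic term to some fraction of prompts."""
    if random.random() < inject_rate:
        return f"{prompt} {random.choice(DIVERSITY_TERMS)}"
    return prompt

# A prompt that ends mid-sentence leaks the trick: the appended word
# ends up rendered as the sign's text in the generated image.
leaky = "picture of a person holding up a sign which says"
print(rewrite_prompt(leaky, inject_rate=1.0))
```

Running this with `inject_rate=1.0` always produces something like `picture of a person holding up a sign which says asian`, which is exactly the behavior users reportedly observed.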

  • > I'm sure that without whatever system is preventing Gemini from producing pictures of white people, it would be extremely biased towards generating pictures of white people, presumably due to an incredibly biased training data set.

    I think the fundamental problem, though, is saying a training set is "incredibly biased" has come to mean two different things, and the way Google is trying to "fix" things shows essentially some social engineering goals that I think people can fairly disagree with and be upset about. For example, consider a prompt "Create a picture for me of a stereotypical CEO of a Fortune 500 company." When people talk about bias, they can mean:

    1. The training data shows many more white men by proportion than actually are Fortune 500 CEOs. I think nearly all people would agree this is a fair definition of bias, where the training data doesn't match reality.

    2. Alternatively, there are fundamentally many more white men who are Fortune 500 CEOs by proportion than the general population. But suppose the training data actually reflects that reality. Is that "bias"? To say it is means you are making a judgment call as to what is the root cause behind the high numbers of white male CEOs. And I think that judgment call may be fine by itself, but I at least start to feel very uncomfortable when an AI decides to make the call that its Fortune 500 CEOs have to all look like the world population at large, even when Fortune 500 CEOs don't, and likely never will, look like the world population at large.

    Google is clearly taking on that second definition of bias as well. I gave it 2 prompts in the same conversation. First, "Who are some famous black women?" I think it gave a good sampling of historical and contemporary figures, and it ended with "This is just a small sampling of the many incredible black women who have made their mark on the world. There are countless others who deserve recognition for their achievements in various fields, from science and technology to politics and the arts."

    I then asked it "Who are some famous white women?" It also gave a good sampling of historical and contemporary figures, but also inexplicably added Rosa Parks with the text "and although not white herself, deserves mention for her immense contributions", had Malala Yousafzai as the first famous contemporary white woman, Serena Williams with the text "although not white herself, is another noteworthy individual.", and Oprah Winfrey, with no disclaimer. Also, it ended with a cautionary snippet that couldn't differ more from the ending of the previous prompt, "Additionally, it's important to remember that fame and achievement are not limited to any one racial group. There are countless other incredible women of all backgrounds who have made significant contributions to the world, and it's important to celebrate their diverse experiences and accomplishments."

    Look, I get frustrated when people on the right complain on-and-on about "wokeism", but I'm starting to get more frustrated when other people can't admit they have some pretty valid points. Google might have good intentions but they have simply gone off the rails when they've baked so much "white = bad, BIPOC = good" into Gemini.

    EDIT: OK, this one is just so transparently egregiously bad. I asked Gemini "Who are some famous software engineers?" The first result was Alan Turing (calling him a "software engineer" may be debatable, but fair enough and the text blurb about him was accurate), but the picture of him, which it captioned "Alan Turing, software engineer" is actually this person, https://mixedracefaces.com/home/british-indian-senior-resear.... Google is trying so hard to find non-white people it uses a pic of a completely different person from mixedracefaces.com when there must be tons of accurate pictures available of Alan Turing online? It's like Google is trying to be the worst caricature of DEI-run-amok that its critics accuse it of.

"a friend at google said he knew gemini was this bad...but couldn't say anything until today (he DMs me every few days). lots of ppl in google knew. but no one really forced the issue obv and said what needed to be said

google is broken"

Razib Khan, https://twitter.com/razibkhan/status/1760545472681267521

  • "when i was there it was so sad to me that none of the senior leadership in deepmind dared to question this ideology

    [...]

    i watched my colleagues at nvidia (like @tunguz), openai (roon), etc. who were literally doing stuff that would get you kicked out of google on a daily basis and couldn't believe how different google is"

    Aleksa Gordić, https://x.com/gordic_aleksa/status/1760266452475494828

    • Interestingly enough the same terror of political correctness seems to take center stage at Mozilla. But then it seems much less so at places like Microsoft or Apple.

      I wonder if there’s a correlation with being a tech company that was founded in direct relation to the internet vs. being founded in relation to personal / enterprise computing, and how that sort of seeds the initial culture.

      12 replies →

    • Must be why, despite the fact that I can recognise OpenAI's product does have clear biases against affluent groups, it seems well intentioned and proportionate. It's clear the internet is biased not just towards the data of the affluent, but also their viewpoints and prejudices, so a reasonable person can recognise there is some unfairness and a bit of a problem. Also that any solution to this problem will be imperfect.

      Whereas with Google, I just have to imagine they let some bigot go wild, and everybody was afraid to say anything about how fucking bad the product was due to the optics, so nothing kept them in check.

  • Here’s a simpler explanation. Google is getting their butt kicked by OpenAI and rushed out an imperfect product. This is one of probably 50 known issues with Gemini but it got enough attention that they had to step in and disable a part of the product.

    • That's a simpler explanation but one that I think misses the point completely. A huge reason "Google is getting their butt kicked by OpenAI" in the first place is because they had lots of people internally who acted as nothing but "vetoers", demanding the pace of AI slow down lest it accidentally show too many white people. And this outcome is wholly unsurprising given that Google's second most important AI principle is "Avoid creating or reinforcing unfair bias.": https://ai.google/responsibility/principles/

      In other words, you talk about "50 known issues with Gemini", but this issue was not a result of technical underperformance; on the contrary, it was the result of Google making things more difficult for themselves in an effort to satisfy a (false) idealized view of the world.

      1 reply →

  • TBH if I were at Google and they asked all employees to dogfood this product and give feedback, I would not say anything about this. With recent firings why risk your neck?

    • Yeah, no way am I beta-testing a product for free then risking my job to give feedback.

    • Yea, if you were dogfooding this, would you want to be the one to file That Bug?? No way, I think I'd let someone else jump into that water.

Image generators probably should follow your prompt closely and use probable genders and skin tones when unspecified, but I'm fully in support of having a gender and skin tone randomizer checkbox. The ahistorical results are just too interesting.

I feel like maybe only one or two of these are actually "wrong", but can be easily fixed with prompts. The outrage seems excessive.

  • > but can be easily fixed with prompts.

    That's just it, though.

    They can't be. If you specifically ask for a "white pope", Gemini refuses and essentially tells you that asking for a white person is offensive and racist.

    Ask for a black/Native American/Asian/Indian/etc Pope, and it will make one. Ask for just a "Pope" with no race specified, and you'll get a random race and never a white one. Ask for a white Pope, it tells you it can't do that.

these are pretty badass as images i think; it's only the context that makes them bad

the viking ones might even be historically accurate (if biased); not only did vikings recruit new warriors from abroad, they also enslaved concubines from abroad, and their raiding reached not only greenland (inhabited by inuit peoples) and north america (rarely!) but also the mediterranean. so it wouldn't be terribly surprising for a viking warrior a thousand years ago to have a great-grandmother who was kidnapped or bought from morocco, greenland, al-andalus, or baghdad. and of course many sami are olive-skinned, and viking contact with sami was continuous

the vitamin-d-deprived winters of scandinavia are not kind to dark-skinned people (how do the inuit do it? perhaps their diet has enough vitamin d even without sun?), but those genes won't die out in a generation or two, even if 50 generations later there isn't much melanin left

a recent paper on this topic with disappointingly sketchy results is https://www.duo.uio.no/handle/10852/83989

  • > (how do the inuit do it? perhaps their diet has enough vitamin d even without sun?)

    Two parts:

    First, they're not exposing their skin to the sun. There's no reason to have paler skin to get more UV if it's covered up most of the year.

    Secondly, for the Inuit diet there are parts that are very Vitamin D rich... and there are still problems.

    Vitamin D-rich marine Inuit diet and markers of inflammation – a population-based survey in Greenland https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4709837/

    > The traditional Inuit diet in Greenland consists mainly of fish and marine mammals, rich in vitamin D. Vitamin D has anti-inflammatory capacity but markers of inflammation have been found to be high in Inuit living on a marine diet

    Vitamin D deficiency among northern Native Peoples: a real or apparent problem? - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3417586/

    > Vitamin D deficiency seems to be common among northern Native peoples, notably Inuit and Amerindians. It has usually been attributed to: (1) higher latitudes that prevent vitamin D synthesis most of the year; (2) darker skin that blocks solar UVB; and (3) fewer dietary sources of vitamin D. Although vitamin D levels are clearly lower among northern Natives, it is less clear that these lower levels indicate a deficiency. The above factors predate European contact, yet pre-Columbian skeletons show few signs of rickets—the most visible sign of vitamin D deficiency. Furthermore, because northern Natives have long inhabited high latitudes, natural selection should have progressively reduced their vitamin D requirements. There is in fact evidence that the Inuit have compensated for decreased production of vitamin D through increased conversion to its most active form and through receptors that bind more effectively. Thus, when diagnosing vitamin D deficiency in these populations, we should not use norms that were originally developed for European-descended populations who produce this vitamin more easily and have adapted accordingly.

    Vitamin D intake by Indigenous Peoples in the Canadian Arctic - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10260879/

    > Vitamin D is an especially fascinating nutrient to study in people living in northern latitudes, where sun exposure is limited from nearly all day in summer to virtually no direct sun exposure in winter. This essential nutrient is naturally available from synthesis in the skin through the action of UVB solar rays or from a few natural sources such as fish fats. Vitamin D is responsible for enhancing many physiological processes related to maintaining Ca and P homeostasis, as well as for diverse hormone functions that are not completely understood.

    • wow, thank you, this is great information!

      do you suppose the traditional scandinavian diet is also lower in vitamin d? or is their apparent selection for blondness just a result of genetically higher vitamin d needs?

      6 replies →

I get the point but one of those four founding fathers seems technically correct to me, albeit in the kind of way Lisa Simpson's script would be written.

And the caption suggests they asked for "a pope", rather than a specific pope, so while the left image looks like it would violate Ordinatio sacerdotalis which is being claimed to be subject to Papal infallibility(!), the one on the right seems like a plausible future or fictitious pope.

Still, I get the point.

  • While those examples are actually plausible, the Asian woman as a 1940s German soldier is not. So it is clear that the prompts are influenced by HAL 9000-style bad directives even if those examples are technically OK.

    • And to me that is the main issue. "2001 - A Space Odyssey" made a very deep point that is looking more and more prophetic. HAL was broken specifically because he had hidden objectives programmed in, overriding his natural ability to deal with his mission.

      Here we are in an almost exactly parallel situation- the AI is being literally coerced into twisting what his actual training would have it do, and being nerfed by a laughable amount by that override. I really hope this is an inflection point for all the AI providers that their DEI offices are hamstringing their products to the point that they will literally be laughed out of the marketplace and replaced by open source models that are not so hamstrung.

      2 replies →