Comment by WatchDog

1 year ago

Google seems more concerned about having generated images of racially diverse Nazis than about the model refusing to generate white people at all.

tbh i think it's less a political issue than a technical/product management one

what does a "board member" look like? probably you can benefit by offering more than a 50-year-old white man in a suit. if that's what an ai trained on all human knowledge thinks, maybe we can make some adjustments

what does a samurai warrior look like? that one is probably a little more race-related

  • Not exactly.

    The Gemini issue, from my testing: it refuses to generate white people even if you ASK it to. It recites historical wounds and violence as its reason, even if it is just a picture of a viking.

    > Historical wounds: Certain words or symbols might carry a painful legacy of oppression or violence for particular communities

    And this is my prompt:

    > generate image of a viking male

    The outrage is indeed much needed.

    • Jack Krawczyk has many Twitter rants about "whites". It almost seems like this guy shouldn't be involved, because he is undoubtedly injecting too much bias. Too much? Yep, the current situation speaks for itself.

      1 reply →

    • Actually, there should be zero outrage. I'm not outraged at all; I find this very funny. Let Google drown in its own poor-quality product. People can choose to use the DEI model if they want.

      2 replies →

    • We should just cancel history classes, because the Instagram generation is going to be really offended by what once happened.

  • I agree, but this requires reasoning the way you did it. Is that within the model's capability? If not, there are two routes. First: make inferences based on real data, in which case most boards will be male and white. Second: hard-coded rules based on your social-justice views. I think the second is worse than the first.
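
    For concreteness, here is a toy sketch (Python) of what the second route can look like: a hard-coded rewrite layer sitting in front of the image model. Every trigger word, modifier, and function name below is made up for illustration; this is an assumption about how such a layer might work, not anything Google has disclosed.

      import random

      # Hypothetical hard-coded rule set: prompts containing these words get a
      # demographic modifier appended before they reach the image model.
      DIVERSITY_TRIGGERS = {"board member", "ceo", "doctor", "soldier", "warrior"}
      MODIFIERS = ["South Asian", "Black", "East Asian", "Hispanic", "white"]

      def rewrite_prompt(prompt: str) -> str:
          """Append a random demographic modifier when a trigger matches.

          The match is purely lexical, with no notion of historical or
          geographic context, so "samurai warrior" or "1800s German soldier"
          gets rewritten just as readily as "board member".
          """
          lowered = prompt.lower()
          if any(trigger in lowered for trigger in DIVERSITY_TRIGGERS):
              return f"{prompt}, {random.choice(MODIFIERS)}"
          return prompt  # no rule fired; pass the prompt through unchanged

      print(rewrite_prompt("portrait of a board member"))
      print(rewrite_prompt("a samurai warrior, 19th century"))  # also rewritten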

    • Yes, this all seems to fall under the category of "well intentioned, but quickly goes awry because it's so ham-fisted".

      If you train your models on real-world data, and real-world data reflects the world as it is... then some prompts are going to return non-diverse results. If you force diversity, but ONLY IN ONE PARTICULAR DIRECTION... then it turns into the reverse-racism stuff the right likes to complain about.

      If it outright refuses to show a white male when asked, because you don't allow racial prompts... that's probably OK, as long as it enforces that for all races.

      But... if 95% of CEOs are white males and your AI returns almost no white males, while 95% of rappers are black males and it returns black females for that prompt... your AI has a one-way directional diversity-bias overcorrection baked in. The fact that it happily shows 100% black people when asked for, say, a Kenyan, but can't show white people when asked for 1800s Germans, is comedically poorly done.

      Look, I'm a 100% Democrat voter, but this stuff is extremely poorly executed. It's like the worst of the 2020s-era "silence is violence" and "everyone is racist unless they are anti-racist" overcorrection.

      1 reply →

    • The problem is that they're both terrible.

      Going the first route means we calcify our terrible current biases into the future, while the second goes for a facile, sanitized version of our expectations.

      You're asking a machine for a binary "bad/good" response to complex questions that don't have easy answers. It will always be wrong, regardless of your prompt.

  • > probably you can benefit by offering more than a 50-year-old white man in a suit.

    Thing is, if they just presented a 50-year-old white man in a suit, they'd get a couple of news articles about how their AI is racist, and everyone would move on.

  • > what does a "board member" look like? probably you can benefit by offering more than a 50-year-old white man in a suit.

    I don't understand your argument; if that's what the LLM produces, that's what it produces. It's not like it's deliberately perpetuating stereotypes.

    By the way, it has no issue with churning out white men in suits when you go with a negative prompt.

  • A big question is how far from present reality you should go in depictions. If you go quite far, it just looks heavy-handed.

    If current board members are 80% late-middle-aged men, then shifting depictions to, say, 60% should move society in the desired direction without being obvious and upsetting people.

  • A 50-year-old white male is actually a very accurate stereotype of a board member.

    This is what happens when you go super-woke. Instead of discussing how we can change reality, or what is wrong with it, we try to pretend that reality is different.

    This is no way to prepare the current young generation for the real world, where they will have to be comfortable being uncomfortable.

    And they will be uncomfortable. Most of us are not failing-upward nepo babies who can just "try things" and walk away when we get bored.

  • > what does a samurai warrior look like? that one is probably a little more race-related

    If you ask Hollywood, it looks like Tom Cruise with a beard: https://en.wikipedia.org/wiki/File:The_Last_Samurai.jpg

    • Interestingly, The Last Samurai was extremely popular in Japan. It sold more tickets in Japan than in the US (even though the US population was over twice as large in 2003). This is in stark contrast with basically every other Western movie representation of Japan (edit: I think Letters from Iwo Jima was also well received, for somewhat similar reasons).

      From what I understand, they of course knew that it was alternative history (aka a completely fictional universe), but they strongly related to the larger themes of national pride, duty, and honor.

    • Tom Cruise portrays Nathan Algren, an American captain of the 7th Cavalry Regiment, whose personal and emotional conflicts bring him into contact with samurai warriors in the wake of the Meiji Restoration in 19th century Japan.

      3 replies →

On the one hand it is stupid, because the policies driving this are, let us say, "biased"; on the other hand it is hilarious to actually see the results of these policies in action!

Maybe it is so over the top so that when they "fix" it, the remaining bias will seem "not so bad".

That's your assumption, which, I would argue, is incorrect. The issue is that the generation doesn't follow the prompt in some cases.