Comment by oceanplexian

1 year ago

No one is upset that an algorithm accidentally generated some images, they are upset that Google intentionally designed it to misrepresent reality in the name of Social Justice.

“Misrepresenting reality” is an interesting phrase, considering the nature of what we are discussing - artificially generated imagery.

It’s really hard to get these things right: if you don’t attempt to influence the model at all, the nature of the imagery that these systems are being trained on skews towards stereotype, because a lot of our imagery is biased and stereotypical. It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

In this case it fails because it is not using broader historical and social context, and it is not nuanced enough to be flexible about how it obtains the diversity. If you asked it to generate some WW2 American soldiers, it could rightfully include ethnicities and genders other than just white men, but it would have to be specific about their roles, uniforms, etc.

(Note: I work at Google, but not on this, and just my opinions)

  • > It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

    When stereotypes clash with historical facts, facts should win.

    Hallucinating diversity where there was none simply sweeps historical failures under the rug.

    If it wants to take a situation where diversity is possible and highlight that diversity, fine. But that seems a tall order for LLMs these days, as it's getting into historical comprehension.

    • >Hallucinating diversity where there was none simply sweeps historical failures under the rug.

      Failures and successes. You can't get this thing to generate any white people at all, no matter how explicitly or implicitly you ask.


    • I think the root problem is assuming that these generated images are representations of anything.

      Nobody should.

      They’re literally semi-random graphic artifacts that we humans give 100% of the meaning to.


    • Why should facts win? It's art, and there are no rules in art. I could draw a black George Washington too.

      [edit]

      Statistical inference machines following human language prompts that include "please" and "thank you" have absolutely no idea what a fact is.

      "A stick bug doesn't know what it's like to be a stick."


  • >It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

    It might be "perfectly reasonable" to have that as an option, but not as a default. If I want an image of anything other than a human, you'd expect the stereotypes to be fulfilled. If I want a picture of a cellphone, I want an ambiguous black rectangle, even though wacky phones exist[1].

    [1] https://static1.srcdn.com/wordpress/wp-content/uploads/2023/...

    • The stereotype of a human in general would not be white in any case.

      And the stereotype the person asking would expect will heavily depend on where they're from.

      Before you ask for stereotypes: Whose stereotypes? Across which population? And why do those stereotypes make sense?

      I think Google fucked up thoroughly here, but they did so while trying to correct for biases that also get things really wrong for a large part of the world.

    • And a stereotype of a phone doesn't have nearly the same historical context or ongoing harmful effects on the world as a racial stereotype.

  • Reality is statistics, and so are the models.

    If the data is lumpy in one area, then I figure we should let the model represent the data and allow the human to determine the direction of skew in a transparent way.

    The nerfing, based on some hidden internal activism, is frustrating because it calls any result into question as suspect of bias towards unknown Morlocks at Google.

    For some reason Google intentionally stopped historically accurate images from being generated. Whatever your position, provided you value Truth, these adjustments are abhorrent.

  • It's actually not hard to get these right and these are not stereotypes.

    Try these exact prompts in Midjourney and you will get exactly what you would expect.

  • > It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people

    No, it's not reasonable. It goes against actual history, facts, and collected statistics. It's so ham-fisted and over the top, it reveals something about how ineptly and irresponsibly these decisions were made internally.

    An unfair use of a stereotype would be placing someone of a certain ethnicity in a demeaning context (eg, if you asked for a picture of an Irish person and it rendered a drunken fool).

    The Google wokeness committee bolted on something absurdly crude, seemingly along the lines of "when showing people, always include a Black, an Asian, and a Native American person", which rightfully results in pushback from people who have brains.

  • How is "stereotype" different from "statistical reality"? How does Google get to decide that its training dataset -"the entire internet" - does not fit the statistical distribution over phenotypic features that its own racist ideological commitments require?

  • Really hard to get this right? We're not talking about a mistake here or there. We're talking about it literally refusing to generate pictures of white people in any context. It's very good at not doing that. It seemingly has some kind of supervisory system that forces it to never show white people.

    Google has a history of pushing woke agendas with funny results. For example, there was a whole thing a couple of years ago about searching for "happy white man" and "happy black man". It would always inject black men somewhere in the results when searching for white men, and the black-man results would include interracial couples. The same kind of thing happened if you searched for women of a particular race.

    The sad thing in all of this is that there is active racism against white people in hiring at companies like this, and in Hollywood. That is far more serious, because it ruins lives. I hear interviews with writers from Hollywood saying they are explicitly blacklisted and refused work anywhere in Hollywood because they're straight white men. Certain big ESG-oriented investment firms are blowing other people's money to fund this crap regardless of profitability, and it needs to stop.

You mean some people's interpretation of what social justice is.

Depicting Black or Asian or native American people as Nazis is hardly "Social Justice" if you ask me but hey, what do I know :)

  • That's not really the point. The point is that Google are so far down the DEI rabbit hole that facts are seen as much less important than satisfying their narrow yet extremist criteria of what reality ought to be even if that means producing something that bears almost no resemblance to what actually was or is.

    In other words, having diversity everywhere is the prime objective, and if that means you claim that there were Native American Nazis, then that is perfectly fine with these people, because it is more important that your Nazis are diverse than accurately representing what Nazis actually were. In some ways this is the political left's version of "post-truth".

    • I know the heads of Gemini are white men, but they're constantly virtue signalling on Twitter about systemic racism, inclusivity, etc. Well, what about hiring black women instead of firing them like Timnit Gebru, you fucking hypocrites? These people make me sick.

It's more accurate to say that it's designed to construct an ideal reality rather than represent the actually existing one. This is the root of many of the cultural issues that the West is currently facing.

“The philosophers have only interpreted the world, in various ways. The point, however, is to change it.” - Marx

  • If it constructed an ideal reality, it'd refuse to draw Nazis etc. entirely.

    It's certainly designed to try to correct for biases, but by doing so sloppily they've managed to make it, if anything, more racist: it falsifies history in ways that downplay a whole lot of evil by semi-erasing its effects from the output.

    Put another way: either don't draw Nazis, or draw historically accurate Nazis. Don't draw Nazis (at least not without very explicit prompting; I'm not a fan of outright bans) in a way that erases their systemic racism.

  • But the issue here is that it's not an ideal reality. An ideal reality would be fully multicultural and accepting of all cultures; here we are presented with a reality where one ethnicity has been singled out and intentionally cancelled, suppressed, and underrepresented.

    You may be arguing for an ideal and fair multicultural representation, but that is not what this system is representing.

    • it's impossible to reach an ideal reality immediately, and also out of nowhere: there's this thing called history. Google is just _trying_.


  • > construct an ideal reality rather than represent the actually existing one

    If I ask to generate an image of a couple, would you argue that the system's choice should represent "some ideal" which would logically mean other instances are not ideal?

    If the image is of a white woman and a black man, and I am part of a lesbian Asian couple, how should I interpret that? If I ask it to generate an image of two white gays kissing and it refuses because it might cause harm or some such nonsense, is it not invalidating who I am as a young white gay teenager? If I'm a black African (vs., say, a Chinese African or a white African), I would expect a different depiction of a family than the one American racist ideology would depict, because my reality is not that, and your idea of what is ideal is arrogant and paternalistic (colonial and racist, if you will).

    Maybe the deeper underlying bug in human makeup is that we categorize things very rigidly, probably due to some evolutionary advantage, but it can cause injustice when we work towards a society where we want your character to be judged, not your identity.

    • I personally think that the generated images should reflect reality as it is. I understand that many think this is philosophically impossible, and at the end of the day humans use judgement and context to solve these problems.

      Philosophically, you can dilute and destroy the meaning of terms, and an AI that has no such judgement can't generate realistic images. If you ask for an image of "an American family", you can assault the meaning of "American" and "family" to such an extent that you produce total nonsense. This is a major problem for humans as well; I don't expect AI to be able to solve it anytime soon.