
Comment by Jensson

1 year ago

> Of course the politically sensitive people are waging war over it.

Just like politically sensitive people waged war over Google identifying an obscured person as a gorilla. It's just a silly mistake; how could anyone get upset over that?

No one is upset that an algorithm accidentally generated some images, they are upset that Google intentionally designed it to misrepresent reality in the name of Social Justice.

  • “Misrepresenting reality” is an interesting phrase, considering the nature of what we are discussing: artificially generated imagery.

    It’s really hard to get these things right: if you don’t attempt to influence the model at all, the nature of the imagery that these systems are being trained on skews towards stereotype, because a lot of our imagery is biased and stereotypical. It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

    In this case it fails because it is not using broader historical and social context, and it is not nuanced enough to be flexible about how it obtains the diversity: if you asked it to generate some WW2 American soldiers, you could rightfully include ethnicities and genders other than just white men, but it would have to be specific about their roles, uniforms, etc.

    (Note: I work at Google, but not on this; these are just my opinions.)

    • > It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

      When stereotypes clash with historical facts, facts should win.

      Hallucinating diversity where there was none simply sweeps historical failures under the rug.

      If it wants to take a situation where diversity is possible and highlight that diversity, fine. But that seems a tall order for LLMs these days, as it's getting into historical comprehension.

      21 replies →

    • >It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people.

      It might be "perfectly reasonable" to have that as an option, but not as a default. If I want an image of anything other than a human, you'd expect the stereotypes to be fulfilled. If I want a picture of a cellphone, I want an ambiguous black rectangle, even though wacky phones exist[1]

      [1] https://static1.srcdn.com/wordpress/wp-content/uploads/2023/...

      2 replies →

    • Reality is statistics, and so are the models.

      If the data is lumpy in one area then I figure let the model represent the data and allow the human to determine the direction of skew in a transparent way.

      The nerfing, based on some hidden internal activism, is frustrating because it calls any result into question as suspect of bias toward unknown Morlocks at Google.

      For some reason Google intentionally stopped historically accurate images from being generated. Whatever your position, provided you value Truth, these adjustments are abhorrent.

    • It's actually not hard to get these right and these are not stereotypes.

      Try these exact prompts in Midjourney and you will get exactly what you would expect.

    • > It seems perfectly reasonable to say that generated imagery should attempt to not lean into stereotypes and show a diverse set of people

      No, it's not reasonable. It goes against actual history, facts, and collected statistics. It's so ham-fisted and over the top, it reveals something about how ineptly and irresponsibly these decisions were made internally.

      An unfair use of a stereotype would be placing someone of a certain ethnicity in a demeaning context (e.g., if you asked for a picture of an Irish person and it rendered a drunken fool).

      The Google wokeness committee bolted on something absurdly crude, seemingly "when showing people, always include a Black, an Asian, and a Native American person", which rightfully results in pushback from people who have brains.

    • How is "stereotype" different from "statistical reality"? How does Google get to decide that its training dataset ("the entire internet") does not fit the statistical distribution over phenotypic features that its own racist ideological commitments require?

    • Really hard to get this right? We're not talking about a mistake here or there. We're talking about it literally refusing to generate pictures of white people in any context. It's very good at not doing that. It seemingly has some kind of supervisory system that forces it to never show white people.

      Google has a history of pushing woke agendas with funny results. For example, there was a whole thing about searching for "happy white man" and "happy black man" a couple years ago. It would always inject black men somewhere in the results when searching for white men, and the black-man results would have interracial couples. The same kind of thing happened if you searched for women of a particular race.

      The sad thing in all of this is, there is actively racism against white people in hiring at companies like this, and in Hollywood. That is far more serious, because it ruins lives. I hear interviews with writers from Hollywood saying they are explicitly blacklisted and refused work anywhere in Hollywood because they're straight white men. Certain big ESG-oriented investment firms are blowing other people's money to fund this crap regardless of profitability, and it needs to stop.

  • You mean some people's interpretation of what social justice is.

    • But also misinterpretations of what the history is. As I write this there's someone laughing at an image of black people in Scotland in the 1800s[1].

      Sure, there's a discussion that can be had about a generic request generating an image of a black Nazi. The thing is, to me, complaining about a historically correct example is a good argument for why this kind of thing can be important.

      [1] https://news.ycombinator.com/item?id=39467206

  • Depicting Black or Asian or Native American people as Nazis is hardly "Social Justice" if you ask me, but hey, what do I know :)

    • That's not really the point. The point is that Google are so far down the DEI rabbit hole that facts are seen as much less important than satisfying their narrow yet extremist criteria of what reality ought to be even if that means producing something that bears almost no resemblance to what actually was or is.

      In other words, having diversity everywhere is the prime objective, and if that means you claim that there were Native American Nazis, then that is perfectly fine with these people, because it is more important that your Nazis are diverse than accurately representing what Nazis actually were. In some ways this is the political left's version of "post-truth".

      1 reply →

  • It's more accurate to say that it's designed to construct an ideal reality rather than represent the actually existing one. This is the root of many of the cultural issues that the West is currently facing.

    “The philosophers have only interpreted the world, in various ways. The point, however, is to change it.” - Marx

    • If it constructed an ideal reality it'd refuse to draw nazis etc. entirely.

      It's certainly designed to try to correct for biases, but by doing so sloppily they've managed to make it, if anything, more racist: it falsifies history in ways that downplay a whole lot of evil by semi-erasing its effects from the output.

      Put another way: either don't draw Nazis, or draw historically accurate Nazis. Don't draw Nazis in a way that erases their systemic racism (at least not without very explicit prompting; I'm not a fan of outright bans).

    • But the issue here is that it's not an ideal reality. An ideal reality would be fully multicultural and accepting of all cultures; here we are presented with a reality where one ethnicity has been singled out and intentionally cancelled, suppressed, and underrepresented.

      You may be arguing for an ideal and fair multicultural representation, but that's not what this system is representing.

      2 replies →

    • > construct an ideal reality rather than represent the actually existing one

      If I ask to generate an image of a couple, would you argue that the system's choice should represent "some ideal", which would logically mean other instances are not ideal?

      If the image is of a white woman and a black man, and I am part of a lesbian Asian couple, how should I interpret that? If I ask it to generate an image of two white gays kissing and it refuses because it might cause harm or some such nonsense, is it not invalidating who I am as a young white gay teenager? If I'm a black African (vs. say a Chinese African or a white African), I would expect a different depiction of a family than the one American racist ideology would depict, because my reality is not that, and your idea of what is ideal is arrogant and paternalistic (colonial, racist, if you will).

      Maybe the deeper underlying bug in human makeup is that we categorize things very rigidly, probably due to some evolutionary advantage, but it can cause injustice when we work towards a society where we want your character to be judged, not your identity.

      2 replies →

The real reason is because it shows the heavy "diversity" bias Google has, and this has real implications for a lot of situations because Google is big and for most people a dream job.

Understanding that your likelihood of being hired into the most prestigious tech companies is probably hindered if you don't look "diverse" or "female" angers people. This is just one sign/smell of it, and so it causes outrage.

Evidence that the overlords who control the internet are censoring images, results, and thoughts that don't conform to "the message" is disturbing.

Imagine there was a documentary about Harriet Tubman and it was played by an all-white cast and written by all-white writers. What's there to be upset about? It's just art. It's just photons hitting neurons after all; who cares what the wavelength is? The truth is that it makes people feel their contributions and history aren't being valued, and that has wider implications.

Those implications are present because tribalism and zero-sum tactics are the default operating system for humans. We attempt to downplay it, but it's always been the only reality. For every diversity admission to university, someone else didn't get that entry. For every "promote because female engineer" decision, another engineer worked hard for naught. For every white actor cast in the Harriet Tubman movie, there was a black actor/writer who didn't get the part -- so it ultimately comes down to resources and tribalism, which are real and concrete but are represented in these tiny flashpoints.

  • > Google is big and for most people a dream job

    I wonder how true this is nowadays. I had my foot out the door after 2016, when things started to get extremely political internally (company leadership crying on stage after the election results really sealed it for me). Something was lost at that point, and it never really returned to the company it was a few years prior.

  • You touched on it briefly but a big problem is that it undermines truly talented people who belong to underrepresented groups. Those individuals DO exist, I interview them all the time and they deserve to know they got the offer because they were excellent and passed the bar, not because of a diversity quota.

Engineers can easily spend more time and effort dealing with these 'corner cases' than they do building the whole of the rest of the product.

  • This isn't a corner case; it injects words like "inclusive" or "diverse" into the prompt right in front of you. "A German family in 1820" becomes "a diverse series of German families".

  • They were clearly willing to spend time adjusting the knobs in order to create the situation we see now.