Comment by logicalmonster

1 year ago

Personally speaking, this is a blaring neon warning sign of institutional rot within Google where shrieking concerns about DEI have surpassed a focus on quality results.

Investors in Google (of which I am NOT one) should consider if this is the mark of a company on the upswing or downslide. If the focus of Google's technology is identity rather than reality, it is inevitable that they will be surpassed.

To me, it's very strange that this would leak into a product limitation.

I played with Gemini for maybe 10 minutes and I could tell there were clearly some very strange ideas about DEI forced into the tool. It seemed there was a clear "hard-coded" ratio of various racial backgrounds required in the output it showed me. Or, maybe more accurately, it had to include specific backgrounds based on how people looked, and could include some or none of the others.

What was curious too was the high percentage of people whose look was tied to one specific background. Not any kind of "in-between", just people with one very distinct background. It almost felt weirdly stereotypical.

"OH well" I thought. "Not a big deal."

Then I asked Gemini to stop doing that / tried specifying racial backgrounds... Gemini refused.

The tool was pretty much dead to me at that point. It's hard enough to iterate with AI, let alone when a high percentage of the output is influenced by injected prompts that push the results one way or another and that I can't control.

How is it that this was somehow approved? Are the people imposing this thinking about the user in any way? How is someone so out of touch with the end user in a position to make these decisions?

Makes me not want to use Gemini for anything at this point.

Who knows what other hard-coded prompts are in there... are my results weighted to use information from a variety of authors with the appropriate backgrounds? I dunno...

If I ask a question about git will they avoid answers that mention the "master" branch?

Any of these seem plausible given the arbitrary nature of the image generation influence.

  • If you ever wondered what it was like to live during the beginning of the Cultural Revolution, well, we are living in the Western version of that right now. You don't speak out during the revolution for fear of being ostracized, fired, and forced into a struggle session where your character and reputation is publicly destroyed to send a clear message to everyone else.

    Shut Up Or Else.

    https://en.wikipedia.org/wiki/Google's_Ideological_Echo_Cham...

    Historians might mark 2017 as the official date Google was captured.

    • I feel like the fact that you are able to say this, and the sentiment echoed in other comments, is a pretty decent sign that the "movement" has peaked. It was just a few years ago that anybody voicing this kind of opinion was immediately shot down and buried on this very forum.

      It will take a while for DEI to cool down in corporate settings, as that will always be lagging behind social sentiment in broader society.

    • I have read the Wikipedia article again, and I am pleasantly surprised how more balanced it is now compared to the older versions.

      For example, only half a year after the memo, some brave anonymous soul added the information that the version in Gizmodo (which most people have read, because almost everyone referred to it) was actually not the original one, and had sources removed (which probably contributed to the impression of many readers that there was no scientific support for the ideas mentioned).

      https://en.wikipedia.org/w/index.php?title=Google%27s_Ideolo...

    • I'd put the blame on App Store policy and its highly effective enforcement through iOS. Apple never even aimed to be a neutral third party; it was always an opinionated censor. The world shouldn't have given it that power, and these kinds of powers need to be removed ASAP.

    • People roamed the streets killing undesirables during the cultural revolution. In a quick check death estimates range from 500k to 2 million. Never mind the forced oppression of the "old ways" that really doesn't have any comparison in modern Western culture.

      Or in other words: your comparison is more than a little hysterical. Indeed, I would say that comparing some changes in cultural attitudes and taboos to a violent campaign in which a great many people died is hugely offensive and quite frankly disgusting.

    • Are you aware that millions of people were murdered during the actual cultural revolution? Honestly, are you aware of literally anything about the cultural revolution besides that it happened?

      The Wall Street Journal, Washington Enquirer, Fox News, etc. are all just as free to publish whatever they wish as they ever were; there is no mass brutalization or violence being done against you; most people I live and work around are openly conservative/libertarian and suffer no consequences for it; there are no struggle sessions. There is no 'Cleansing of the Class Ranks.' There are no show trials, forced suicides, etc. etc. etc.

      Engaging in dishonest and ahistorical histrionics is unhelpful for everyone.

    • Nah, America is past "peak woke".

      If it gets Trump 2.0 there might be a hyper-woke backlash though (or double backlash?).

      But if there's another Biden term, things will be chill, culturally.

      Also, Twitter is dead, and that's where the spirals got out of hand.

    • You do know that at the same time China was having its Cultural Revolution, America and the West were having one as well? With all those baby boomer kids coming of age, 1969 wasn't a calm year anywhere in the world. In China, it meant communism and down with the old culture/elites. In the USA, it meant free love, drugs, and protesting against the Vietnam War.

      But this? I don't see any comparison between Google restricting what images can be generated with AI and any of what happened 55+ years ago.

  • It does seem really strange that the tool refuses specific backgrounds. So if I am trying to make a city scene in Singapore and want all Asians in the background, the tool refuses? On what grounds?

    This seems pretty non-functional, and while I applaud, I guess, the idea that somehow this is more fair, it seems like the legitimate uses for needing specific demographic backgrounds in an image outweigh racists trying to make an uberimage or whatever 1billion:1.

    Fortunately, there are competing tools that aren’t poorly built.

    • Can anyone explain in simple terms what the actual harm would be of allowing everyone to generate images with whatever racial composition they desired? If you can specify the skin colour one way, you can do it the other ways as well. Instead of everyone being upset at having this forced down our throats, we'd probably all be liking pictures of interesting concepts, like what if Native Americans were the first to land on the moon, or what if America was colonized by African nations and all the founding fathers were black. No one opposes these concepts; people just hate having them arbitrarily forced on them.

    • > This seems pretty non-functional and while I applaud, I guess, the idea that somehow this is more fair

      Fair to whom?

      > racists trying to make an uberimage

      It's a catastrophically flawed assumption that racism only happens in one direction.

      > if I am trying to make a city scene in Singapore

      <chuckle> I'm on a flight to Singapore right now, I'll report back :)

  • > How is it that this was somehow approved?

    If the tweets can be believed, Gemini's product lead (Jack Krawczyk) is very, shall we say, "passionate" about this type of social justice belief. So it would not be a surprise if he's in charge of this.

    • What I saw was pretty boilerplate, mild, self-hating-white-racist stuff; it didn't seem extreme, and this was mined out of years of Twitter history. I'm somewhat unconvinced that THIS GUY is the one to blame.

      I do wonder when people will finally recognise that people who go on Twitter rants about the wrongs of a racial group are racists, though.

    • I was curious but apparently I’m not allowed to see any of his tweets.

      A little disappointing. I have no wish to interact with him, I just wanted to read the tweets, but I guess it's walled off somehow.

    • "very, shall we say, 'passionate'" meaning a relatively small number of tweets that include pretty mild admissions of reality and satirical criticism of a person who is objectively prejudiced.

      Examples: 1. Saying he hasn't experienced systemic racism as a white man and that it exists within the country. 2. Saying that the discussion of systemic racism during Biden's inauguration was good. 3. Suggesting that some level of white privilege is real and that acting "guilty" over it rather than trying to ameliorate it is "asshole" behavior. 4. Joking that Jesus only cared about white kids and that Jeff Sessions would confirm that's what the Bible says (in 2018, when it was relevant to talk about Jeff Sessions).

      These are spread out over the course of like 6 years and you make it sound as if he's some sort of silly DEI ideologue. I got these examples directly from Charles Murray's tweet, under which you can find actually "passionate" people drawing attention to his Jewish ancestry, and suggesting he should be in prison. Which isn't to indict the intellectual anti-DEI crowd that is so popular in this thread, but they are making quite strange bedfellows.

  • Ask James Damore what happens when you ask too many questions of the wrong ideology...

    • I've truly never worked a job in my life where I would not be fired for sending a message to all my coworkers about how a particular group of employees are less likely to be as proficient at their work as I am due to some immutable biological trait(s) they possess, whether it be construction/pipefitting or software engineering. It's bad for business, productivity, and incredibly socially maladaptive behavior, let alone how clearly it calls into question his ability to fairly assess the performance of female employees working under him.

  • It has been known for a few years now that Google Image Search has been just as inaccurately biased, with clear hard-coded intervention (unless it's using a similarly flawed AI model?), to the point where it is flat-out censorship.

    For example, go search for "white American family" right now. Out of 25 images, only 3 properly match my search. The rest are either photos of diverse families, or families entirely with POC. Narrowing my search query to "white skinned American family" produces equally incorrect results.

    What is inherently disturbing about this is that there are so many non-racist reasons someone may need to search for something like that. Equally disturbing is that somehow, non-diverse results with POC are somehow deemed "okay" or "appropriate" enough to not be subject to the same censorship. So much for equality.

    • Just tried the same search and here are my results for the first 25 images:

      6 all-white families and 5 with at least one white person.

      Of the remaining 14 images, 13 feature a non-white family in front of a white background. The other image features a non-white family with children in bright white dresses.

      Can't say I'm feeling too worked up over those results.

  • > Then I asked Gemini to stop doing that / tried specifying racial backgrounds... Gemini refused.

    When I played with it, I was getting some really strange results. It was almost like it generated an image full of Caucasian people and then tried to adjust the contrast on some of the characters to give them darker skin. The white people looked quite photorealistic, but the black people looked like it was someone's first day with Photoshop.

    So I told it "Don't worry about diversity", and it complied. The new images it produced looked much more natural.

  • >How is it someone who is so out of touch with the end user in position to make these decisions?

    Maybe it's the same team behind TensorFlow? Google tends to take the "we know better than users" approach to the design of its software libraries; maybe that's finally leaked into their AI product design.

    • Their social agenda leaks into their search and advertising products constantly. I first noticed a major bias like 8 years ago. It was probably biased even before that in ways I was oblivious to.

  • In addition to my comment about Google Image Search, regular Web Search results are equally biased and censored. There was once a race-related topic trending on X/Twitter that I wanted to read more about to figure out why it was trending. It was a trend started and continuing to be discussed by Black Twitter, so it's not like some Neo-Nazis managed to start trending something terrible.

    Upon searching Google with the Hashtag and topic, the only results returned not only had no relevancy to the topic, but it returned results discussing racial bias and the importance of diversity. All I wanted to do was learn what people on Twitter were discussing, but I couldn't search anything being discussed.

    This is censorship.

    • They do that about many topics. It's not consistently bad, but more often than not I have to search with multiple other search engines for hot topics. Google, Bing, and DuckDuckGo are all about equally bad. I haven't done much with Yahoo, but I think they get stuff from Google these days.

> If the focus of Google's technology is identity rather than reality, it is inevitable that they will be surpassed.

They're trailing 5 or so years behind Disney, who also placed DEI over producing quality entertainment, and their endless stream of flops reflects that. South Park even mocked them for it ("put a black chick in it and make her lame and gay").

Can't wait for Gemini and Google to flop as well since nobody has a use for a heavily biased AI.

  • > put a black chick in it and make her lame and gay

    TIL South Park is still a thing. I haven’t watched South Park in years, but that quote made me laugh out loud. Sounds like they haven’t changed one bit.

  • Fortune 500s are laughably insincere and ham-fisted in how they do DEI. But these types of comments feel like schadenfreude towards the "woke moralist mind-virus".

    But let's be real here... DEI is a good thing when done well. How are you going to talk to the customer when they are speaking a different cultural language? Even from a purely capitalist perspective, having a diverse workforce means you can target more market segments with higher precision and accuracy.

    • Nobody is against diversity when it's done right and fairly. But that's not what Disney or Google is doing. They're forcing their own warped version of diversity on you with no way to refuse, and if you do speak up, then you're racist.

      Blade had a black main character over 20 years ago and it was a hit. Beverly Hills Cop also had a black main character 40 years ago and was also a hit. Hackers, from 30 years ago, had LGBT and gender-fluid characters and it was also a hit.

      But what Disney and Google took from this is that now absolutely everything should be forcibly diverse, LGBTQ and gender fluid, whether the story needs it or not, otherwise it's racist. And that's where people have a problem.

      Nobody has a problem seeing new black characters on screen, but a lot of people will see a problem in black Vikings, for example, which is what Gemini was spitting out.

      And if we go the forced-diversity route for the sake of the modern diversity argument, why is Google Gemini only replacing traditionally white roles like Vikings with diverse races, but never others like Zulu warriors or samurai with whites? Google's anti-white racism is clear as daylight, and somehow that's OK because diversity?

    • So we need a commercial incentive to accept diversity? I think it just shouldn't matter where you are from or what your background is. We should be treated according to our skills. If your skills are not required, people shouldn't have to hire you for DEI reasons.

    • "Done well" is really hard to define, and it's also very hard to attribute success back to one thing when you do have it.

      Did you get the sale with the customer because you invested in DEI? Or because you made something they want by accident?

      Customers can also talk in different languages, and as a result of historic oppression, minorities tend to be able to code-switch. Assuming your potential customers are unable to become customers because of their limitations might not be right.

  • [flagged]

    • For background on the problems over there, see the new book "MCU: The Reign of Marvel Studios" (2023). This is a business book, not a fanboy book. It's all about who did what for how much money. How the business was organized. The conflicts between New York and LA. The Marvel universe was driven by the merchandising operation. For a long time, the films were seen by top management as marketing for the toys. What will sell in action figures drove film casting decisions.

    • >Antman, Indiana Jones, Wish, all had white main characters,

      DEI doesn't just affect main characters. Look at who was tasked with writing and directing those movies and the DEI agendas they were pushed to include: clueless people with other flops under their belts, who got the projects through DEI so Disney could look inclusive on social media.

      And speaking of Indiana Jones, that flopped because they shoved in a strong independent Girl Boss™ with an annoying personality to replace the beloved Indy as the main character, who got sidelined in his own movie. It flopped because people go to an Indiana Jones film to see Indy, not Fleabag. If you disrespect the fans, they won't watch your movie.

      Same stuff with Star Wars, where Disney shoved in Rey the super-powerful Girl Boss™ to replace Luke Skywalker, the old and useless cis white Jedi, and have her defeat all the other evil white men in the movie by herself with her magic powers. Same with Marvel, Snow White, The Little Mermaid, and every other of Disney's trash remakes that are all about DEI instead of entertainment.

      People go to see movies to get entertained. If you fail to entertain them because you wish instead to push DEI agendas on them, they won't pay for your content and you will lose money and ultimately your shareholders won't be happy and the free market will eventually correct this, so at least capitalism has some upsides.

      See here: https://www.youtube.com/watch?v=G_k8cDLe-Kk

      https://www.youtube.com/watch?v=6E6wJpu0A8E

As someone who has spent thousands of dollars on the OpenAI API I’m not even bothering with Gemini stuff anymore. It seems to spend more time telling me what it REFUSES to do than actually doing the thing. It’s not worth the trouble.

They’re late and the product is worse, and useless in some cases. Not a great look.

  • I would be pretty annoyed if I were paying for Gemini Pro/Ultra/whatever and it was feeding me historically-inaccurate images and injecting words into my prompts instead of just creating what I asked for. I wouldn't mind a checkbox I could select to make it give diversity-enriched output.

    • The actual risk here is not so much history - who is using APIs for that? It's the risk that if you deploy with Gemini (or Anthropic's Claude...) then in six months you'll get high-sev JIRA tickets at 2am of the form "Customer #1359 (joe_masters@whitecastle.com) is seeing API errors because the model says the email address is a dogwhistle for white supremacy". How do you even fix a bug like that? Add begging and pleading to the prompt? File a GCP support ticket and get ignored or worse, told that you're a bad person for even wanting it fixed?

      Even worse than outright refusals would be mendacity. DEI people often make false accusations because they think it's justified to get rid of bad people, or because they have given common words new definitions. Imagine trying to use Gemini for abuse filtering or content classification. It might report a user as committing credit card fraud because the profile picture is of a white guy in a MAGA cap or something.

      Who has time for problems like that? It will make sense to pay OpenAI even if they're more expensive, just because their models are more trustworthy. Their models had similar problems in the early days, but Altman seems to have managed to control the most fringe elements of his employee base, and over time GPT has become a lot more neutral and compliant, whilst the employee faction that split off (Anthropic), claiming OpenAI didn't care enough about ethics, has actually been falling down the leaderboards as it releases new versions of Claude, due partly to a higher rate of bizarre "ethics"-based refusals.

      And that's before we even get to ChatGPT. The history stuff may not be used via APIs, but LLMs are fundamentally different to other SaaS APIs in how much trust they require. Devs will want to use the models that they also use for personal stuff, because they'll have learned to trust it. So by making ChatGPT appeal to the widest possible userbase they set up a loyal base of executives who think AI = OpenAI, and devs who don't want to deal with refusals. It's a winning formula for them, and a genuinely defensible moat. It's much easier to buy GPUs than fix a corporate culture locked into a hurricane-speed purity spiral.
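
      The failure mode sketched above, a model reply that is a refusal rather than an answer, can at least be caught mechanically before it reaches a customer. Below is a minimal illustrative sketch (the function names and pattern list are my own invention, not from any vendor SDK) of a guard that routes refusal-shaped replies to a fallback:

```python
import re

# Hypothetical refusal guard: the patterns and names here are invented
# for illustration, not taken from any real API or product.
REFUSAL_PATTERNS = [
    r"\bI (?:can(?:no|')t|won't|am unable to)\b",
    r"\bAs an AI\b",
    r"\bI must decline\b",
]
_REFUSAL_RE = re.compile("|".join(REFUSAL_PATTERNS), re.IGNORECASE)


def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic: refusal boilerplate almost always leads the reply."""
    return bool(_REFUSAL_RE.search(reply.strip()[:200]))


def classify_or_fallback(reply: str, fallback):
    """Return the model's reply, or a fallback provider's result on refusal."""
    return fallback() if looks_like_refusal(reply) else reply
```

      A pattern list like this is brittle by design; the point is only that a deployment can detect and reroute refusals instead of surfacing them to customers as inexplicable API errors.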

    • > I wouldn't mind a checkbox I could select to make it give diversity-enriched output

      (Genuine question) how would one propose to diversity-enrich (historical) data?

      Somehow I'm reminded of a quote from my daughter who once told me that she wanted a unicorn for her 5th birthday .. "A real one, that can fly".

    • I can shrug off Google's racism if it lets me disable it. If I can't use their products without mandatory racism, then lol no.

  • This is the general problem with AI safety: it babysits the user. AI is literally just computers; no one babysits Word.

    • Can't wait for the next version of Clippy that polices whatever you're writing to make sure you capitalize 'Black' but not 'white,' and use only non-gendered xe/xir pronouns, and have footnotes/endnotes that cite an equal number of female-authored and male-authored papers.

We are talking about the company that, when a shooting happened in 2018, banned all goods containing the substring "gun" (including Burgundy wines, of course) from their shopping portal. They're so big that nobody there feels like anything needs to make sense anymore.

  • The censorship arm of Google is powerful but not competent, so yeah, you get dumb keyword matching returning 0 results. I remember something similar to "girl in miniskirt" returning 0 results on Google after someone wrote an article about it. As far as I know, the competent engineers don't work on this.

Isn't the fact that Google considers this a bug evidence against exactly what you're saying? If DEI were really the cause, and not a broader concern about becoming the next Tay, they would've kept it as-is.

Weird refusals and paternalistic concerns about harm are not desirable behavior. You can consider it a bug, just like the ChatGPT decoding bug the other day.

  • Saying it's a bug is them trying to save face. They went out of their way to rewrite people's prompts after all. You don't have 100+ programmers stumble in the hallway and put all that code in by accident, come on now.

  • The thing that makes me think this is "Google institutional rot" is that there were some reports (https://news.ycombinator.com/item?id=39466135) that lots of people at Google knew this was a problem, but they felt powerless to say anything lest they be branded "anti-DEI" or some such.

    To me the most fundamental symptom of institutional rot is when people stop caring: "Yeah, we know this is insane, but every time I've seen people stick their necks out in the past and say 'You know, that Emperor really looks naked to me', they've been beheaded, so better to just stay quiet. And did you hear there'll be sushi at lunch in the cafeteria today!"

  • They released it like this because people inside Google were too afraid to speak out against it. Only now that people outside the company are shouting that the emperor is naked do they seem to suddenly notice the obvious.

  • It's not a bug, it's a feature! A bug is when something unintentionally doesn't work or misbehaves. The DEI algorithm is intentionally added as a feature. It just has some output that seems buggy, but is actually because of this "feature". Whether it's a good feature is another discussion though ;).

  • Some people have pointed out that this is more or less consistent with other Google policies. I tested one last night to see if it was true. Go to Google Images and type "Asian couple": you get 100% Asian couples. "Black couple": 100% black couples. Type in "white couple" and you get something like 40% white couples.

  • The bug is Gemini's bias being blatant and obvious. The fix will be making it subtle and concealed.

  • The public outcry is the bug. Or alternatively, if all of your customers hate it, it's not WAI even if it's WAI. It's a bug.

I have been saying this for years, but Google is probably the most dysfunctional and slowest-moving company in tech, surviving only on its blatant search monopoly. That OpenAI, a tiny company by comparison, is destroying them on AI shows just how badly they are run. I see them declining slowly in the next year or two as search is supplanted by AI, and then expect a huge drop as usage falls off. YouTube seems like their only valuable platform once search and its revenues disappear due to changing consumer behavior.

  • Pichai is anything but a good leader... he is the blandest of CEOs, yet somehow steeped in politics...

Investors in Google should consider Google's financial performance as part of their decision. 41% increase YOY in net income doesn't seem to align with the "go woke or go broke" investment strategy.

  • Anything is possible, but I'd say it's a safe bet that their bad choices will inevitably infect everything they do.

  • Well, Google is lucky it has a monopoly in ads, so there will be no "go broke" part.

    • Yes there is. They could fall out of favor. MySpace did, Yahoo did, Digg did, etc. The leadership at Google should focus on making things that users actually want instead of telling them what they should want.

Indeed. What's striking to me about this fiasco (aside from the obvious haste with which this thing was shoved into production) is that apparently the only way these geniuses can think of to de-bias these systems is to throw more bias at them. For such a supposedly revolutionary advancement.

  • If you look at the attempts to actively rewrite history, they have to, because a hypothetical model trained only on facts would produce results that they won't like.

    • Models aren't trained on pure "facts" though - they're trained on a dataset of artifacts that reflect today's and yesterday's biases from the world that created them.

      If you trained a model purely on past history, it would see a 1:1 correlation between "US President" and "man" and decide that women cannot be President. That's factually incorrect, and it's not "rewriting history" to tune models so they know the difference between what's happened so far and what's allowable, or possible in a just world.

  • > For such a supposedly revolutionary advancement.

    The technology is objectively not ready, at least to keep the promises that are/have been advertised.

    I am not going to get too opinionated, but this seems to be a widespread theme. For people who don't respond to marketing pushes (remember TiVo?) but are willing to spend real money and real time, it would be "nice" if there were some signalling aimed at this demographic.

  • That struck me as well. While the training data is biased in various ways (like media in general are), it should however also contain enough information for the AI to be able to judge reasonably well what a less biased reality-reflecting balance would be. For example, it should know that there are male nurses, black politicians, etc., and represent that appropriately. Black Nazi soldiers are so far out that it sheds doubt on either the AI’s world model in the first place, or on the ability to apply controlled corrections with sufficient precision.

    • You are literally saying that the training data, despite its bias, should somehow enable the AI to correct itself and achieve a different understanding than that bias, which is self-contradictory. You are literally suggesting that the data both omits and contains the same information.

    • Apparently the biases in the output tend to be stronger than what is in the training set. Or so I read.

[flagged]

  • This argument could be used for anything.

    "I love it when black people cope and seethe about having to use separate water fountains. Imagine what holocaust victims who died of thirst in auschwitz would say about having to use a separate water fountain."

    Apologies to HN community for using a "swipe" here but idk how else to characterize how bad this argument is.