
Comment by f6v

1 year ago

I agree, but this requires reasoning, the way you did it. Is this within the model's capability? If not, there are two routes. First one: make inferences based on real data, in which case most boards will be male and white. Second: hard-code rules based on your social justice views. I think the second is worse than the first one.

Yes, this all seems to fall under the category of "well intentioned but quickly goes awry because it's so ham-fisted".

If you train your models on real-world data, and real-world data reflects the world as it is... then some prompts are going to return non-diverse results. If you force diversity, but ONLY IN ONE PARTICULAR DIRECTION... then it turns into the reverse-racism stuff the right likes to complain about.

If it outright refuses to show a white male when asked, because you don't allow racial prompts... that's probably OK, as long as it's enforced for all races.

But... if 95% of CEOs are white males, yet your AI returns almost no white males, while 95% of rappers are black males and it returns black females for that prompt, then your AI has a one-way directional diversity-bias overcorrection baked in. The fact that it successfully shows 100% black people when asked for, say, a Kenyan in a prompt, but again can't show white people when asked for 1800s Germans, is comically poorly done.

Look, I'm a 100% Democrat voter, but this stuff is extremely poorly done here. It's like the worst of the 2020s-era "silence is violence" and "everyone is racist unless they are anti-racist" overcorrection.

  • disasters like these are exactly what google is scared of, which just makes it even more hilarious that they actually managed to get to this point

    no matter your politics, everyone can agree they screwed up. the question is how long (if ever?) it'll take for people to respect their ai

The problem is that they're both terrible.

Going the first route means we calcify our terrible current biases into the future, while the latter instead gives us a facile, sanitized version of our expectations.

You're asking a machine for a binary "bad/good" response to complex questions that don't have easy answers. It will always be wrong, regardless of your prompt.