Comment by steveBK123
1 year ago
Yes, this all seems to fall under the category of "well-intentioned but quickly goes awry because it's so ham-fisted".
If you train your models on real-world data, and real-world data reflects the world as it is.. then some prompts are going to return non-diverse results. If you force diversity, but ONLY IN ONE PARTICULAR DIRECTION.. then it turns into the reverse-racism stuff the right likes to complain about.
If it outright refuses to show a white male when asked, because you don't allow racial prompts.. that's probably OK as long as it enforces that for all races.
But.. if 95% of CEOs are white males and your AI returns almost no white males, while 95% of rappers are black males and it returns black females for that prompt.. your AI has a one-way directional diversity-bias overcorrection baked in. The fact that it successfully shows 100% black people when asked for, say, a Kenyan, but again can't show white people when asked for 1800s Germans, is comedically poorly done.
Look, I'm a 100% Democrat voter, but this stuff is extremely poorly done. It's like the worst of the 2020s-era "silence is violence" and "everyone is racist unless they are anti-racist" overcorrection.
Disasters like these are exactly what Google is scared of, which just makes it even more hilarious that they actually managed to get to this point.
No matter your politics, everyone can agree they screwed up. The question is how long (if ever?) it'll take for people to respect their AI.