Comment by hersko
1 year ago
It's not difficult to notice that your images are excluding a specific race (which, ironically, most of the engineers building the thing are a part of).
I'd hazard a guess that the rate at which Google employees type "generate a white nazi" and the rate at which the general Internet does so differ.
It's clear there is a ban on generating white people, and only white people, when asked to do so directly. That's an intervention from the designers of this system. They did it intentionally, and they live in such a padded echo chamber that they didn't see a problem with it. They thought they were "helping".
This is a debate between people who want AI to be reflective of reality vs. people who want AI to be reflective of their fantasies of how they wish the world was.
I feel like it's more of a debate about the extent of Google's adversarial testing.
"What should we do about black nazis?" is a pretty basic question.
If they'd thought about that at all, they wouldn't have walked this back so quickly, because they at least would have had a PR plan ready to go when this broke.
That they didn't indicates (a) their testing likely isn't adversarial enough & (b) they should likely fire their diversity team and hire one that does the job better.
Building it like this is one thing. If Google wants to, more power to them.
BUT... building it like this and having no plan of action for when people ask reasonable questions about why it was built this way? That's just not doing their job.