Comment by caeril

1 year ago

None of these are "corner cases". The model was specifically RLHF'ed by Google's diversity initiatives to do this.

Do you think Google's diversity team expected it would generate black nazis?

  • > Do you think Google's diversity team expected it would generate black nazis?

    Probably not, but that is precisely the point. They're stubbornly clinging to principles that are rooted in ideology, and they're NOT really thinking about the consequences their ideas will wreak on the marginalized and oppressed, like insisting that if you're black your fate is X or if you're white your guilt is Y. To put it differently, they're perpetuating racism in the name of fighting it. And not just racism. They make assumptions about me as a gay man and about my woman colleague, and tell everyone else at the company how to treat me.

  • Do you think no one internally thought to try this, but didn't see a problem with it because of their worldview?

  • I don’t think they expected that exact thing framed in that exact way.

    Do I think that the teams involved were institutionally incapable of considering that a plan to increase diversity in image outputs could have negative consequences? Yes, that seems pretty clear to me. The dangers of doing weird racecraft on the backend should have been obvious.

    • I suspect that Google's solution to this mess will be to retain said racecrafting except in negative contexts. That is, `swedish couples from the 1840s` will continue to produce hordes of DEI-compliant images, but `ku klux klansmen` or `nazi stormtroopers` will adhere to the highest standard of historical accuracy.