Comment by mattlondon

1 year ago

You can guarantee that if it did generate all historical images as only white, there would be an equally loud uproar from the other end of the political spectrum too (apart from perhaps Nazis, where I would assume people don't want their race/ethnicity represented).

It seems that basically anything Google does is not good enough for anyone these days. Damned if they do, damned if they don't.

It's not a binary.

Why are the only options "only generate comically inaccurate images to the point of being offensive to probably everyone" or "only generate images of one group of people"?

Are current models so poor that we can't use a preprocessing layer that adapts the prompt to aim for diversity while also adjusting for context? Even Musk's Grok managed to give remarkably nuanced responses on topics of race when asked racist questions by users, in spite of being 'uncensored.'

Surely Gemini can do better than Grok?

Heavy-handed approaches might have been necessary with GPT-3 era models, but with more modern SotA models it might be time to adapt alignment strategies to be a bit more nuanced and intelligent.

Google wouldn't be damned if they trod a middle ground right now, somewhere between "do" and "don't."

Well, Nazis are universally bad, to the degree that if you try to point out one scientific achievement the Nazis developed, you are literally Hitler. So I don't think so; there would be no outrage if every Nazi were white in an AI-generated image.

In any other context you are 100% right: there would be outrage if there were no diversity.