
Comment by washadjeffmad

1 year ago

The irony is that the training sets are tagged well enough for the models to capture nuanced features and distinguish groups by name. But a customer who only uses terms like "white" or "black" will never see any of that.

Not long ago, a blogger complained that prompting for "$superStylePrompt photographs of African food" yielded only fake, generic restaurant-style images. Maybe they lacked the vocabulary to do better, but prompt for "traditional Nigerian food" or "jollof rice" and guess what you get pictures of?

The same goes for South Asian, Southeast Asian, and Pacific Islander groups. Ask for a Gujarati kitchen or a Kyoto ramenya and you get locale-specific details, architectural features, and people. Same if you use "Nordic" or "Chechen" or "Irish".

The results of generative AI are a clearer reflection of us and our own limitations than of the technology's. We could purge the datasets of certain tags, or replace them with more explicit skin-melanin descriptors, but then the models wouldn't fabricate the subjective, "the entire world is a melting pot" diversity that some feel defines positive inclusivity.