Comment by snowwrestler

1 year ago

That is an example of adjusting generative output to mitigate bias in the training data.

To you and me, it is obviously stupid to apply that prompt to a request for an image of the U.S. founding fathers, because we already know what they looked like.

But generative AI systems only work one way. And they don’t know anything. They generate, which is not the same thing as knowing.

One could update the quoted prompt to include “except when requested to produce an image of the U.S. founding fathers.” But I hope you can appreciate the scaling problem with that approach to improvements.

What you're suggesting is certainly possible, and no doubt what Google would claim. But companies like Google could trivially obtain massive, representative training samples covering basically every sort of human endeavor and classification, across all of modern history, on this entire planet.

To me, this feels much more like Google intentionally biasing what was probably an otherwise representative sample, with hilarity ensuing. But it's actually quite sad too, because these companies are butchering what could be amazing tools for visually exploring our history - "our" meaning literally any person alive today.