
Comment by Adrig

1 year ago

You conveniently use mild examples when I'm talking about harmful stereotypes. Reinforcing the stereotype of bulky NFL players won't lead to much, but reinforcing stereotypes about minorities can lead to lynchings or ethnic cleansing in some parts of the world.

I'm not objecting to anything here, and I definitely don't side with Google on this solution. I just agree with the parent comment that it's a subtle problem.

By the way, the data fed to AIs is neither accurate nor factual; its bias has been demonstrated again and again. Even when we're talking about data from studies (like the example I gave), the context is always important, and AIs neither provide nor understand that context.

And again, there is the open question of: do we want to use the average representation every time? If I'm teaching my kid that stealing is bad, should the output depict a specific race because a 2014 study showed that group was more prone to stealing in a specific American state? Does it matter to the lesson I'm giving?

> can lead to lynchings or ethnic cleansing in some part of the world

Have we seen any lynchings based on AI imagery?

No

Have we seen students use Google as an authoritative source?

Yes

So I'd rather students see something realistic when asking for "founding fathers". And yes, if a given race/sex/etc. is heavily overrepresented in a given context, it SHOULD be shown. The world is as it is. Hiding it is self-deception and will only lead to issues. You cannot fix a problem if you deny its existence.