
Comment by mlrtime

1 year ago

But why give those two examples? Why didn't you use an example of a "Professional Athlete"?

There is no problem with these examples if you assume that the person wants the statistically likely example... this is ML, after all; that is exactly how it works.

If I ask you to think of an elephant, what color do you think of? Wouldn't you expect an AI-generated image to be the color you thought of?

It would be an interesting experiment (a rough tally loop is sketched below). If you asked it to generate an image of an NBA basketball player, statistically you would expect it to produce an image of a black male. Would it have produced images of white females and Asian males instead? That would have given some sense of whether the alignment was meant to increase diversity or just to minimize depictions of white males. Alas, it's impossible to get it to generate anything that even has a chance of having people in it now. I tried "basketball game", "sporting event", and "NBA Finals", and it refused each time. I finally tried "basketball court" and it produced what looked like a 1970s Polaroid of an outdoor hoop. They must've really dug deep to eliminate any possibility of a human being appearing in a generated image.

  • I was able to get to the "Sure! Here are..." part with a prompt, but it then got swapped out for the refusal message, so I think they might've stuck a human detector on the image outputs.
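
A rough sketch of the tally loop for that experiment, for what it's worth. Everything here is a placeholder: `generate_image` and `classify_subject` are hypothetical stand-ins for whatever image API and labeling step you'd actually use, not real endpoints.

```python
from collections import Counter

def generate_image(prompt):
    """Placeholder for a real image-generation API call.
    Returns an image object, or None if the model refuses."""
    raise NotImplementedError("swap in the API you want to test")

def classify_subject(image):
    """Placeholder for however you label the generated subject
    (manual review, or an off-the-shelf classifier)."""
    raise NotImplementedError

def run_experiment(prompt, n=100):
    counts = Counter()
    for _ in range(n):
        image = generate_image(prompt)
        if image is None:
            counts["refusal"] += 1
        else:
            counts[classify_subject(image)] += 1
    return counts

# e.g. run_experiment("an NBA basketball player") and compare the
# tally against the league's actual demographics.
```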

If you ask it to produce an example 100 times, you would expect the outputs to match the overall distribution, not the most common example repeated 100 times.

Leaving race aside, if you asked it to produce a picture of a person, it would be _weird_ if every single person it produced was the _exact same height_.
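
That's the difference between sampling from the model's distribution and always returning its mode. A minimal sketch of the two behaviors, with made-up categories, probabilities, and height parameters (none of these numbers come from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up categorical distribution over generated subjects.
categories = ["A", "B", "C"]
probs = [0.7, 0.2, 0.1]

# Mode-seeking: the single most likely category, 100 times over.
mode_outputs = [categories[int(np.argmax(probs))] for _ in range(100)]

# Sampling: 100 draws that roughly match the distribution itself.
sampled_outputs = rng.choice(categories, size=100, p=probs)

# Same idea for a continuous attribute like height: sampling from a
# distribution (here a normal, mean 170 cm, sd 10 cm) gives varied
# people, not 100 copies at exactly the mean height.
heights = rng.normal(loc=170, scale=10, size=100)

print(mode_outputs.count("A"))          # 100
print((sampled_outputs == "A").sum())   # ~70
print(round(heights.std()))             # ~10, not 0
```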

If I want an elephant, I would accept literally anything as output including an inflatable yellow elephant in a swimming pool.

But when I improve the prompt and ask the AI more specifically for a grey elephant near a lake, I don't want it to gaslight me into thinking this is something only a white supremacist would ask for, and then refuse to generate the picture.

Are they the statistically likely example? Or are they just what is in a data set collected by companies whose sources of data are inherently biased?

Whether they are even statistically plausible depends on where you are; whether they are the statistically likely example depends on which population you draw from, and on whether the population the person expects to draw from is the same as yours.
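
A tiny illustration of that dependence, with entirely made-up weights: the "statistically likely example" is only defined relative to a reference population, and two plausible populations can disagree on it.

```python
# Made-up demographic weights for the same occupation in two
# different reference populations (numbers are illustrative only).
population_a = {"group_x": 0.6, "group_y": 0.3, "group_z": 0.1}
population_b = {"group_x": 0.2, "group_y": 0.5, "group_z": 0.3}

def most_likely(population):
    # The mode of the distribution: "the statistically likely example".
    return max(population, key=population.get)

print(most_likely(population_a))  # group_x
print(most_likely(population_b))  # group_y
```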

The problem is assuming that the person wants your idea of the statistically likely example.