
Comment by sweetheart

2 years ago

> then that same amount should ideally be represented in the output

Why? Why should representation in the output reflect actual distributions of race?

I doubt anyone would care if you asked ChatGPT to create a picture of a basketball player and it returned an image of an Asian player.

People don't like that it's rewriting prompts to force diversity. So if I ask for a black basketball player, it should return an image of exactly that.

This is a good question.

If I'm asking for quicksort, do I want the most common implementation, or do I want an underrepresented one?
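
To make the quicksort case concrete, here's a rough sketch (purely illustrative, not tied to any particular model's behavior): the first function is the textbook in-place form an assistant would most likely produce, the second a less commonly seen but equally correct functional variant.

```python
def quicksort_typical(arr):
    """In-place Lomuto-partition quicksort -- the canonical textbook answer."""
    def _sort(lo, hi):
        if lo >= hi:
            return
        pivot = arr[hi]
        i = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        _sort(lo, i - 1)
        _sort(i + 1, hi)
    _sort(0, len(arr) - 1)
    return arr


def quicksort_uncommon(arr):
    """Functional, non-in-place variant -- correct, but rarely the default answer."""
    if len(arr) <= 1:
        return arr
    pivot, rest = arr[0], arr[1:]
    return (quicksort_uncommon([x for x in rest if x < pivot])
            + [pivot]
            + quicksort_uncommon([x for x in rest if x >= pivot]))


print(quicksort_typical([5, 2, 9, 1]))   # [1, 2, 5, 9]
print(quicksort_uncommon([5, 2, 9, 1]))  # [1, 2, 5, 9]
```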

If I'm asking for the history of Egypt, do I want the most common tellings or do I want some underrepresented narrative?

I suppose something like the race of a doctor in some DALL-E image ends up being a very special case in the scheme of things, since it's a case where we don't necessarily care.

Maybe the steelman of the idea that you shouldn't special-case it can be drawn along these lines, too. But I think to figure that out you'd need to consider examples along the periphery that aren't as obvious as "should a generated doctor be black?"