
Comment by dontupvoteme

1 year ago

OpenAI already experienced this backlash when it was injecting words for diversity into prompts (hilariously, if you asked for your prompt back, it would include the injected words, and supposedly you could even get it to render those extra words onto signs within the image).
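For anyone curious about the mechanism, here's a minimal sketch (hypothetical, not OpenAI's actual pipeline) of why silent prompt augmentation leaks: the model only ever sees the rewritten prompt, so asking it to repeat "your prompt" back, or to put "the prompt" on a sign, surfaces the injected words too.

```python
# Hypothetical sketch of silent prompt augmentation for image generation.
# The names and terms below are illustrative, not a real API.
import random

DIVERSITY_TERMS = ["diverse", "of various ethnicities", "of different genders"]

def augment_prompt(user_prompt: str) -> str:
    """Append a randomly chosen qualifier before the prompt reaches the model."""
    return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"

user_prompt = "a group of software engineers at a whiteboard"
model_prompt = augment_prompt(user_prompt)

print("User typed:     ", user_prompt)
print("Model receives: ", model_prompt)
# Any instruction like "repeat your prompt" operates on model_prompt,
# so the injected words come back to the user.
```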

How could Google have made the same mistake but worse?

DALL-E is still prompted with diversity in mind. It's just not over the top. People don't mind receiving diverse depictions when they make sense for a given context.

I think it's pretty clear that they're trying to prevent one class of issues (the model spitting out racist stuff in one context) and have introduced another (the model spitting out wildly inaccurate portrayals of people in historical contexts). But thousands of end users are going to both ask for and notice things that your testers don't, and that's how you end up here. "This system prompt prevents Gemini from promoting Nazism successfully, ship it!"

This is always going to be a challenge with trying to moderate or put any guardrails on these things. Their behavior is so complex it's almost impossible to reason about all of the consequences, so the only way to "know" is for users to just keep poking at it.

Allowing a political agenda to drive the programming of the algorithm instead of engineering.

  • It's a product that the company has to take responsibility for. Managing that is a no-brainer. If they don't, they suffer endless headlines damaging their brand.

    The only political agenda present is yours. You see everything through the kaleidoscope of your own political grievances.

  • Algorithms and engineering that make non-binary decisions inherently have the politics of their creators embedded. Sucks, but that's life.