Comment by dougmwne

1 year ago

I feel like the answer is pretty clear. Each country will need to develop models that conform to its own national identity and politics. Things are biased only in context, not universally. An American model would appear biased in Brazil. A Chinese model would appear biased in France. A model for an LGBT+ community would appear biased to a Baptist church.

I think this is a strong argument for open models. There could be no one true way to build a base model that the whole world would agree with. In a way, safety concerns are a blessing because they will force a diversity of models rather than a giant monolith AI.

> I feel like the answer is pretty clear. Each country will need to develop models that conform to its own national identity and politics. Things are biased only in context, not universally. An American model would appear biased in Brazil. A Chinese model would appear biased in France. A model for an LGBT+ community would appear biased to a Baptist church.

I would prefer to be able to set my preferences so that I get an excellent experience. The model can default to the country or language group it's being used in, but my personal preferences and context should be catered to if we want maximum utility.

The operator of the model should not wag their finger at me, say my preferences could cause harm to others, and prevent me from exercising those preferences. If I want to see two black men kissing in an image, don't lecture me; you don't know me, so judging me that way is arrogant and paternalistic.

  • Or you could realize that this is a computer system at the end of the day and be explicit with your prompts.

    • The system still has to be designed with defaults because otherwise using it would be too tedious. How much specificity is needed before anything can be rendered is a product design decision.

      People are complaining about and laughing at poor defaults.

  • In this case the prompts are being modified behind the scenes, or outright blocked, to enforce just one company’s political worldview. Looking at the Gemini examples, that worldview appears to be “Chief Diversity Officer on a passive-aggressive rampage.” Some of the examples posted (Native American Nazis and so on) are INCREDIBLY offensive in the American context while also being logical continuations of corporate diversity policy.