Comment by BoorishBears

13 hours ago

Aligning a model in a way that causes it to refuse requests to produce propaganda for one country, but not for another, is what, exactly?

Is there some functionally equivalent word to censorship you'd like to use, because you're naive enough to think US corporations would not self-censor but Chinese corporations would?

-

Also, you are invested in the goalpost of "no matter how hard you try". I don't find it interesting or meaningful, and I'm not trying to engage with it.

I'm replying for a hypothetical reader knowledgeable enough to realize that a model capable of showing nationalist bias in one direction is certainly doing so in many other, more subtle ways.

That's simply the nature of aligning an LLM.

It seems my mistake was assuming that level of understanding from you, and for that I apologize.

Bias and censorship are not identical. The subject of this thread is censorship, not bias.

Besides, why do you want a model to produce propaganda? Surely you have better things to do.