
Comment by BoorishBears

17 hours ago

Models are non-deterministic.

And it's an exercise left to the reader to understand from those examples that LLM creators define 'safety' in a way that aligns with the governments they operate under (because they want to do business under those governments).

With something as multi-dimensional as an LLM, that becomes censorship of various viewpoints in ways that aren't always as obvious as a refused API call.

You keep saying that word, "censorship." I do not think it means what you think it means.

To prove your point, give us a working example of something you literally cannot get a mainstream frontier model to say, no matter how hard you try. I asked for this before, and there have been no takers yet.

  • Aligning a model in a way that causes it to refuse requests to produce propaganda for one country, but not for another, is what?

    Is there some functionally equivalent word to censorship you'd like to use, because you're naive enough to think US corporations would not self-censor but Chinese corporations would?

    -

    Also, you invented the goalpost of "no matter how hard you try". I don't find it interesting or meaningful and am not trying to engage with it.

    I'm replying for a hypothetical reader knowledgeable enough to realize that a model being capable of showing nationalist bias in one direction means it's certainly doing so in many other, more subtle ways.

    That's simply the nature of aligning an LLM.

    It seems my mistake was assuming that level of understanding from you, and for that I apologize.

    • Bias and censorship are not identical. The subject of this thread is censorship, not bias.

      Besides, why do you want a model to produce propaganda? Surely you have better things to do.
