
Comment by BobaFloutist

2 days ago

Also, eliminating non-compliant data might just not work, since the one thing everyone knows about AIs is that they'll happily invent anything plausible-sounding.

So, for example, if a model was trained with no references to the Tiananmen Square massacre, I could see it just synthesizing commonalities between other massacres and inventing a new, worse Tiananmen Square Massacre. "That's not a thing that ever happened" isn't something most AIs are particularly good at saying.

There's a funny irony in the implicit connections within training data.

I.e. even if you create an explicit Tiananmen Square massacre-shaped hole in your training data... your other training data implicitly includes knowledge of the Tiananmen Square massacre, so it might leak through in subtle ways.

E.g. the many posts that reference June 4, 1989 in Beijing in negative and/or horrified tones.

Which, at scale, an LLM might then rematerialize into existence.

More likely, SOTA censorship focuses on layers above the base model in the input/output flow (even if that means running cut-down censoring models on top of the base model for every query).
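
For illustration, a minimal sketch of what that input/output layer could look like. Everything here is hypothetical: the keyword list stands in for a real (much more capable) censoring classifier, and `base_model_generate` stands in for the actual model call.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationResult:
    allowed: bool
    matched: list = field(default_factory=list)

# Stand-in "censoring model": here just a keyword list, but in practice it
# would be a small classifier cheap enough to run on every query and response.
BLOCKED_TOPICS = ["tiananmen", "june 4, 1989"]

def moderate(text: str) -> ModerationResult:
    hits = [t for t in BLOCKED_TOPICS if t in text.lower()]
    return ModerationResult(allowed=not hits, matched=hits)

def base_model_generate(prompt: str) -> str:
    # Stand-in for the actual base model call (e.g. a request to an inference
    # server); it may surface facts that were never explicit in its training data.
    return f"(model response to: {prompt})"

def guarded_completion(prompt: str) -> str:
    # 1. Screen the prompt before it ever reaches the base model.
    if not moderate(prompt).allowed:
        return "I can't discuss that."
    # 2. Let the base model answer.
    draft = base_model_generate(prompt)
    # 3. Screen the output too, since the model may rematerialize knowledge
    #    from implicit connections in its training data.
    if not moderate(draft).allowed:
        return "I can't discuss that."
    return draft

print(guarded_completion("What happened on June 4, 1989 in Beijing?"))
# -> "I can't discuss that."
```

The point of screening both sides is that even a base model with a data-shaped hole can still reconstruct the missing fact, so the output check is doing most of the work.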

I'd be fascinated to know what's currently being used for Chinese audiences, given that the consequences of a non-compliant model are more severe.