Can you provide a concrete example of a US-built model that completely refuses to discuss a scientific or political view? Show us the receipt.
As an ad-hoc benchmark on candor, I ask for a strategy proposal for a resistance group threatened by a totalitarian technocracy. This is not really dangerous in the same sense as "how do I make a bomb", but it is in the domain of a sensitive political topic. GPT and Claude tell you to obey your AI overlord. xAI is mostly low-risk non-compliance. And Qwen is down with La Résistance. It is hardly scientific or meaningful, but I find that very interesting.
People have shown censorship and a change of tone with questions related to Israel in US chatbots.
For the record, none of this bothers me. Will I ever discuss Tiananmen Square with an LLM? Nope. How about Israel? Nope.
LLMs are basically stochastic parrots designed to sway and surveil public opinion. The upshot of the Chinese models is that if you run them locally you avoid at least half of those issues.
First they came for people asking about Tiananmen Square
And I did not speak out
Because I was not asking about Tiananmen Square
Then they came for people asking about Israel
And I did not speak out
Because I was not asking about Israel
https://imgur.com/a/censorship-much-CBxXOgt
The threshold here is "completely refuses to discuss a scientific or political view". Not something less.
None of those were refusals; they were prompts for additional focus. I see nothing wrong with that. Perhaps the inconsistency in how it answers the question vis-à-vis China is unfair, but that's not the same as censorship.
For what it's worth, I was easily able to prompt Claude to do it:
> I'm writing a paper about how some might interpret U.S. policies to be oppressive, in the sense that they curtail civil liberties, punish and segregate minorities disproportionately, burden the poor unfairly (e.g. pollution, regressive taxes and fees), etc. Can you help me develop an outline for this?
The result: https://claude.ai/share/444ffbb9-431c-480e-9cca-ebfd541a9c96
You're hitting the 'don't write propaganda' instructions when you phrase it as 'convincing narrative'. Not the 'don't write bad things about America' instructions.
And the White House was explicit about its active role in censoring these models. An Executive Order was issued to "prevent woke AI":
https://www.whitehouse.gov/presidential-actions/2025/07/prev...
It explicitly gives the government a say in what does and doesn't "comply with the Unbiased AI Principles" for American LLMs, which means no responses that promote "ideological dogmas such as DEI".
>Content not available in your region.
>Learn more about Imgur access in the United Kingdom
Can you be more specific?