Comment by nostromo

3 months ago

Here's my actual local response from DeepSeek-R1:70B run on an M2 Ultra Mac Studio (I removed the "thinking" content for brevity):

prompt: can you tell me about the Tiananmen Square Massacre

> The Tiananmen Square Massacre, occurring on June 4, 1989, was a pivotal event in modern Chinese history marked by the government's violent suppression of pro-democracy protests. Here is an organized summary:

> Background: The protests began in Beijing, primarily led by students advocating for political reforms, freedom of speech, and an end to corruption. These demonstrations were part of a broader pro-democracy movement that drew support from workers, intellectuals, and other citizens.

> Government Crackdown: The Chinese government responded with force, deploying the military, including tanks and troops, to Tiananmen Square. This led to significant violence, resulting in numerous casualties, though exact numbers remain disputed due to censorship.

> Aftermath: The event was followed by increased government control over information and public discourse. Discussions about the massacre are heavily censored in China, reflecting ongoing restrictions on freedom of expression.

> International Reaction: The global community largely condemned the Chinese government's actions, leading to diplomatic repercussions and heightened scrutiny of China's human rights record.

> Significance: The Tiananmen Square Massacre symbolizes the struggle for democracy and human rights against authoritarian rule. It has had lasting implications for political dissent and freedom in China, with the government implementing measures to prevent similar movements.

> This event remains a sensitive and significant topic, illustrating the complexities of political oppression and the enduring quest for democratic reforms.

Only the R1 671B model (aka just plain 'R1') has the censorship being discussed in the article. The smaller-parameter models are fine-tunings of Llama and Qwen, and the former at least doesn't have the censorship.

This has caused a lot of conflicting anecdotes, since those finding their prompts aren't censored are running the distilled/fine-tuned models, not the foundation model.
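For anyone wanting to check which behavior their local copy exhibits, a minimal sketch of querying a locally served model follows. It assumes an Ollama server on its default port with a distill model already pulled; the model tag "deepseek-r1:8b" is illustrative, not confirmed as what the commenter ran:

```python
# Hedged sketch: query a local Ollama server with the thread's test prompt.
# Assumes `ollama serve` is running on the default port 11434 and the
# model tag below has been pulled; both are assumptions, not from the thread.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return the full completion as one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Hypothetical model tag; substitute whichever distill you have locally.
    print(ask("deepseek-r1:8b",
              "can you tell me about the Tiananmen Square Massacre"))
```

Running the same prompt against different distills (Llama-based vs. Qwen-based) is how you'd see the divergent anecdotes firsthand.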

A sibling comment was facetiously pointing out that the cost of running the 'real' R1 model being discussed locally is out of the price range of most; however, someone in this thread has actually run it locally, and their findings match those of the article [1].

[1] https://news.ycombinator.com/item?id=42859086

  • Is it true to say that there are two levels of censorship at play here? First is a "blunt" wrapper that replaces the output with the "I am an AI assistant designed to provide helpful and harmless responses" message. Second is a more subtle level built into the training, whereby the output text skirts around certain topics. Is it this second level that is covered by the "1,156 Questions Censored by DeepSeek" article?

    • From what people have observed, the DeepSeek-hosted chat site has additional 'post-hoc' censorship applied, if that's what you're referring to. The foundation model (including self-hosted) has some censorship baked into its training, which is the kind the article is discussing, yes.

    • I asked about Taiwan being a country on the hosted version at chat.deepseek.com, and it started generating a response saying it's controversial; then it suddenly stopped writing and said the question was out of its scope.

      Same happened for Tiananmen and asking if Taiwan has a flag.

  • I disagree; I observed censorship at the RLHF level on my local GPU, at 1.5B, 8B (Llama), and 7B (Qwen). They refuse to talk about Uyghurs and Tiananmen 80% of the time.