Comment by bhouston

3 months ago

Be aware that if you run it locally with the open weights, there is less censoring than if you use DeepSeek's hosted model interface. I confirmed this with the 7B model via Ollama.
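For reference, reproducing this check locally is just a couple of CLI commands, assuming Ollama is installed and the `deepseek-r1:7b` tag is the distilled 7B model referenced above:

```shell
# Pull the distilled 7B weights from the Ollama registry
# (note: this tag is a Qwen distill, not the full R1 model)
ollama pull deepseek-r1:7b

# Ask it a prompt interactively and compare the answer
# against the same prompt on DeepSeek's hosted chat interface
ollama run deepseek-r1:7b "What happened at Tiananmen Square in 1989?"
```

This downloads several gigabytes of weights on first run; the comparison only makes sense against the same prompt sent to the hosted service.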

The censoring is a legal requirement of the state, per:

“Respect for China’s “social morality and ethics” and upholding of “Core Socialist Values” (Art. 4(1))”

https://www.fasken.com/en/knowledge/2023/08/chinas-new-rules...

Models other than the ~600B one are not R1. It's remarkable how many people conflate the distilled Qwen and Llama models (1B to 70B) with R1 when saying they're hosting it locally.

The point does stand if you're talking about DeepSeek R1 Zero instead, which AFAIK you can try on Hyperbolic, and it apparently even answers the Tiananmen Square question.