Comment by femto
3 months ago
Is it true to say that there are two levels of censorship at play here? First is a "blunt" wrapper that replaces the output with the "I am an AI assistant designed to provide helpful and harmless responses" message. Second is a more subtle level built into the training, whereby the output text skirts around certain topics. It is this second level that is being covered by the "1,156 Questions Censored by DeepSeek" article?
The DeepSeek hosted chat site has additional 'post-hoc' censorship applied, from what people have observed, if that's what you're referring to. The foundation model (including self-hosted copies) has some censorship baked into its training, and that is the kind the article is discussing, yes.
Thanks for cutting through the noise. I did some poking around and a discussion from a couple of days ago reached the same conclusion.
https://news.ycombinator.com/item?id=42825573
Is it correct or incorrect that they open-sourced their code? i.e. can anyone with $6M now take the DeepSeek training code, apply it to their dataset of interest, and train a new model that is not censored (i.e. not even intrinsically, within the model itself)? Apologies, I am not an AI engineer nor even a software engineer, if my terminology isn't quite spot on.
1 reply →
I asked whether Taiwan is a country on the hosted version at chat.deepseek.com, and it started generating a response saying the topic is controversial, then it suddenly stopped writing and said the question is out of its scope.
The same happened when I asked about Tiananmen and whether Taiwan has a flag.