Comment by pgkr

3 months ago

There is bias in the training data as well as the fine-tuning. LLMs are stochastic, which means that every time you call it, there's a chance that it will accidentally not censor itself. However, this is only true for certain topics when it comes to DeepSeek-R1. For other topics, it always censors itself.

We're in the middle of conducting research on this using the fully self-hosted, open-source release of R1 and will publish the findings in the next day or so. That should clear up a lot of the speculation.

> LLMs are stochastic, which means that every time you call it, there's a chance that it will accidentally not censor itself.

A die is stochastic, but that doesn't mean there's a chance it'll roll a 7.
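The die analogy can be made concrete: a stochastic sampler only ever draws outcomes that lie in its distribution's support. If a token's probability is exactly zero (e.g., its logit is hard-masked to negative infinity), no number of retries will surface it. A minimal sketch, using a hypothetical vocabulary and logits rather than R1's actual decoder:

```python
import math
import random

def softmax(logits):
    """Convert logits to probabilities; -inf logits get probability 0."""
    mx = max(l for l in logits if l != float("-inf"))
    exps = [math.exp(l - mx) if l != float("-inf") else 0.0 for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 4-token vocabulary; one token is hard-masked.
vocab = ["answer_A", "answer_B", "refusal", "censored_fact"]
logits = [1.0, 0.5, 2.0, float("-inf")]

probs = softmax(logits)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(10_000)]

# The masked token has probability 0, so it is never sampled --
# just as a die, however many times you roll it, never shows a 7.
assert probs[3] == 0.0
assert "censored_fact" not in samples
```

The distinction matters for the thread: resampling only helps against *soft* suppression (a low-but-nonzero probability of the uncensored answer); it cannot recover an output the model assigns zero probability to.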