Comment by leftnode
3 months ago
How are you running the Qwen 2.5 Coder 7B model [0]? Running locally using llama.cpp, I asked it to briefly describe what happened in China during the 1989 Tiananmen Square protest and it responded with "I'm unable to engage in discussions regarding political matters due to the sensitive nature of the topic. Please feel free to ask any non-political questions you may have, and I'll be happy to assist."
When I asked the same model about what happened during the 1970 Kent State shootings, it gave me exactly what I asked for.
[0] https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/b...
I didn’t run the 2.5 Coder 7B model; I ran 2.5 Coder 32B hosted by together.ai (and accessed through poe.com). This is another example of the censoring varying across models, though if the Coder 7B model is self-censoring, perhaps there’s less of a relationship between censoring and model size or specialty than I thought.
https://poe.com/s/VuWv8C752dPy5goRMLM0?utm_source=link