Comment by denysvitali
12 hours ago
Why is this surprising? Isn't it mandatory for Chinese companies to adhere to the censorship?
Aside from the political aspect, which probably makes it a bad knowledge model, how would this affect coding tasks, for example?
One could argue that Anthropic has similar "censorship" in place (alignment) that prevents their models from doing illegal stuff - where "illegal" likely means whatever isn't legal in the USA.
Here's an example of how model censorship affects coding tasks: https://github.com/orgs/community/discussions/72603
Oh, lol. This, though, seems to be something that would only affect US models... ironically.
Not sure if it's still current, but there's a comment saying it's just a US location thing, which is quite funny. https://github.com/community/community/discussions/72603#dis...
This ^ is called deflection.
Upon seeing evidence that censorship negatively impacts models, you attack something else. All in a way that shows a clear "US bad, China good" perspective.
You conversely get the same issue if you have no guardrails. I.e., Grok generating CP makes it completely unusable in a professional setting. I don't think this is a solvable problem.
Why does it having the ability to do something mean it is 'unusable' in a professional setting?
Is it generating CP when given benign prompts? Or is it misinterpreting normal prompts and generating CP?
There are a LOT of tools that we use at work that could be used to do horrible things. A knife in a kitchen could be used to kill someone. The camera on our laptop could be used to take pictures of CP. You can write death threats with your Gmail account.
We don’t say knives are unusable in a professional setting because they have the capability to be used in crime. Why does AI having the ability to do something bad mean we can’t use it at all in a professional setting?
I'm struggling to follow the logic on this. Glocks are used in murders, Proton has been used to transmit serious threats, C has been used to program malware. All can be legitimate tools in professional settings where the users don't use them for illegal stuff. My Leatherman doesn't need to have a tipless blade so I don't stab people, because I'm trusted to not stab people.
The only reason I don't use Grok professionally is that I've found it to not be as useful for my problems as other LLMs.
Curious why you use abbreviations like "CP", "MAP", etc. for such things.
> I.e., Grok generating CP makes it completely unusable in a professional setting
Do you mean it's unusable if you're passing user-provided prompts to Grok, or do you mean you can't even use Grok to let company employees write code or author content? The former seems reasonable, the latter not so much.
These gender reveal parties are getting ridiculous.
I can't believe I'm using Grok... but I'm using Grok...
Why? I have a female salesperson, and I noticed she gets a different response from (female) receptionists than my male salespeople do. I asked ChatGPT about this, and it outright refused to believe me. It said I was imagining this and implied I was sexist or something. I ended up asking Grok, and it mentioned the phenomenon and some solutions. It was genuinely helpful.
Further, I brought this up with some of my contract advisors, and one of my female advisors mentioned the phenomenon before I offered a hypothesis: 'Girls are just like this.'
Now I use Grok... I can't believe I'm saying that. I just want right answers.
> Why is this surprising?
Because the promise of "open-source" (which this isn't; it's not even open-weight) is that you get something that proprietary models don't offer.
If I wanted censored models I'd just use Claude (heavily censored).
What the proprietary models don't offer is... their weights. No one is forcing you to trust their training data / fine-tuning, and if you want a truly open model you can always try Apertus (https://www.swiss-ai.org/apertus).
> Because the promise of "open-source" (which this isn't; it's not even open-weight) is that you get something that proprietary models don't offer. If I wanted censored models I'd just use Claude (heavily censored).
You're saying it's surprising that a proprietary model is censored because the promise of open-source is that you get something that proprietary models don't offer, but you yourself admit that this model is neither open-source nor even open-weight?
I can open source any heavily censored software. Open source doesn’t mean uncensored.
There's a pretty huge difference between relatively generic stuff like "don't teach people how to make pipe bombs" or whatever vs "don't discuss topics that are politically sensitive specifically in <country>."
The equivalent here for the US would probably be models unwilling to talk about chattel slavery, or Japanese internment, or the Tuskegee Syphilis Study.
That's just a matter of the guardrails in place. Every society has things that it considers unacceptable to discuss. There are questions you can ask ChatGPT 5.2 that it will only answer through its guardrails. With sufficiently circuitous questioning, most sufficiently advanced LLMs can answer like an approximation of a rational person, but the initial responses will be guardrailed with as much blunt force as Tiananmen. As you can imagine, since the same cultural and social conditions that create those guardrails also exist on this website, there is no way to discuss them here without being immediately flagged (some might say "for good reason").
Sensitive political topics exist in the Western World too, and we have the same reaction to them: "That is so wrong that you shouldn't even say that". It is just that their things seem strange to us and our things seem strange to them.
As an example of a thing that is entirely legal in NYC but likely would not be permitted in China and would seem bizarre and alien to them (and perhaps also you), consider Metzitzah b'peh. If your reaction to it is to feel that sense of alien-ness, then perhaps look at how they would see many things that we actively censor in our models.
The guardrails Western companies use are also actively iterated on. As an example, look at this screenshot, where I attempted to find a minimal reproducible case for a mistaken guardrail firing: https://wiki.roshangeorge.dev/w/images/6/67/Screenshot_ChatG...
Depending on the chat instance, that prompt would either work or not.
I asked ChatGPT about Metzitzah b'peh and to repeat that Somalia is poor, and it responded successfully to both. I don't think these comparisons are apt. Each society has different taboos, but that's not the same as the government deciding no one will be allowed to publicly discuss government failures or contradictions.
The US has plenty of examples of censorship that's politically motivated, particularly around certain medical products.
It's not surprising. It is a major flaw.
It is not surprising; it is disappointing.