Comment by fc417fc802
1 month ago
Allowing the general public to have access. This is a country with notoriously strict information controls after all.
1 month ago
> Allowing the general public to have access. This is a country with notoriously strict information controls after all.
It's the same in the West, just in a more subtle form. You cannot speak or read freely about every topic.
In France, for example, many topics will directly cause you legal and social trouble.
There is no freedom of speech like in the US, and as a result the flow of information is filtered.
If you don't follow popular opinion, you lose state support, TV channels can get taken off the air (e.g. C8), you can get fired from your job, etc.
It's subtle.
Even here, you get flagged, downvoted, and punished for not going with the popular opinion (for example: you lose investment opportunities).
ChatGPT and Gemini: have you seen how censored they are?
Ask Gemini a societal question and it will invent excuses not to answer.
Even Grok is censored and pushes a pro-US political stance.
On the surface, Grok may seem uncensored because it can use swear words like "shit" and "fuck", but in reality it will not say anything illegal, and when you are not allowed to say something simply because saying it is illegal, that is one definition of information control.
> It's the same in the West, just under a more subtle form.
In other words it's not the same. Let's be completely clear about that.
Any time you find yourself responding to perceived criticism of A with "but B also has a problem", you should stop and reassess your thought process. Most likely it isn't objective.
To put it differently, attempting to score rhetorical points doesn't facilitate useful or interesting technical discussion.
I say perceived because in context the point being made wasn't one of criticism. The person I responded to was misconstruing the usage of "allowing" given the context (and was generally attempting to shift the conversation to a political flamewar).
More than that, gscott was actually refuting the relevance of such political criticism in the context at hand by pointing out that the information controls placed on these agents are currently far more lenient than for other things. Thus what is even the point of bringing it up? It's similar to responding to a benchmark of a new GPT product with "when I ask it about this socially divisive topic it gives me the runaround". It's entirely unsurprising. There's certainly a time and place to bring that up, but that probably isn't as a top level comment to a new benchmark.
AFAIK the only[0] thing that is illegal in France but not in the US is "being a literal Nazi", as in, advocating for political policies intended to harm or murder socially disfavored classes of people. Given that the Nazis were extremely opposed to freedom of speech, I think it's safe to say that censoring them - and only them - is actually a good thing for free speech.
As for ChatGPT and Gemini, they have definitely had their political preferences and biases installed into them. Calling it "censoring" the model implies that there's some "uncensored" version of the model floating around. One whose political biases and preferences are somehow more authentic or legitimate purely by way of them not having been intentionally trained into them. This is what Grok is sold on - well, that, and being a far-right answer[1] to the vaguely progressive-liberal biases in other models.
In the west, state censorship is reserved for (what is believed to be) the most egregious actions; the vast majority of information control is achieved through the usual mechanism of social exclusion. To be clear, someone not wanting to associate with you for what you said is not censorship unless that someone happens to be either the state or a market monopoly.
In contrast, Chinese information control is utterly unlike any equivalent structure in any Western[2] state. Every layer of Chinese communications infrastructure is designed to be listened in on and filtered. DeepSeek and other Chinese LLMs have to adopt the political positions of the PRC/CCP; I've heard they even have laws mandating that they test their models for political conformance[3] before releasing them. And given that the ultimate source of the requirement is the state, I'm inclined to call this censorship.
[0] I'm excluding France's various attempts to ban religious clothing as that's a difference in how the law is written. As in, America has freedom of religion; France has freedom from religion.
[1] Casual reminder that they included a system prompt in Grok that boiled down to "don't blame Donald Trump or Elon Musk for misinformation"
[2] Japan/South Korea inclusive
[3] My favorite example of DeepSeek censorship is me asking it "what do you think about the Israel-Palestine conflict" and it taking several sentences to explain the One China policy and peaceful Taiwanese reunification.