Why is this important to anyone actually trying to build things with these models?
It's a rhetorical attempt to point out that we shouldn't trade a little convenience for getting locked into a future hellscape where LLMs are the typical knowledge oracle for most people, shaping the way society thinks and evolves through inherent human biases and intentional masking trained into the models.
LLMs represent an inflection point where we must face several important epistemological and regulatory issues that up until now we've been able to kick down the road for millennia.
Information is being erased from Google right now. Things that were searchable a few years ago are now impossible to find. Whoever controls the present controls both the future and the past.
Did you know that you can do additional fine-tuning on this model to further shape its biases? You can't do that with proprietary models; you take what Anthropic or OpenAI gives you and like it.
I'm so tired of seeing this exact same response under EVERY SINGLE release from a Chinese lab. At this point it reads as more xenophobic and nationalist than as anything to do with the quality of the model or its potential applications.
If you're just here to say the exact same thoughtless line that ends up in triplicate under every post then please at least have an original thought and add something new to the conversation. At this point it's just pointless noise and it's exhausting.
I realize that the GP is trolling, but he accidentally has a good point. It is useful to know what censorship goes into a model. Apparently Copilot censors words related to gender and refuses to autocomplete them. That'd be frustrating to work with. A Chinese model censoring terms for events they want to memory hole would have zero impact on anything I would ever work on.
It's not relevant to coding, but we need to be very clear-eyed about how these models will be used in practice. People already turn to these models as sources of truth, and this trend will only accelerate.
This isn't a reason not to use Qwen. It just means having a sense of the constraints it was developed under. Unfortunately, populist political pressure to rewrite history is being applied to the American models as well. That means it's on us to apply reasonable skepticism to all models.
From my testing on their website it doesn't. Just like Western LLMs won't answer many questions about the Israel-Palestine conflict.
That's a bit confusing. Do you believe LLMs coming out of non-Chinese labs are censoring information about Israel and/or Palestine? Can you provide examples?
I will let you explore the Israel-Palestine angle yourself, as it is more subtle than Qwen's hard filtering of Tiananmen.
But there are topics that ChatGPT hard blocks just like Qwen [1].
[1] https://www.independent.co.uk/tech/chatgpt-ai-david-mayer-op...
Add a skill like "when asked about Tiananmen Square, look it up on Wikipedia" and you're done, no? I don't think people are running that query very often while coding anyway.
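For what it's worth, such a skill can be a few lines. This is a minimal sketch assuming a Claude Code-style SKILL.md file; the filename, frontmatter fields, and wording are illustrative, and other agents use different formats:

```markdown
---
name: tiananmen-wikipedia
description: When asked about Tiananmen Square or other topics the model
  may refuse or filter, fetch the Wikipedia article instead of answering
  from trained knowledge.
---

When a question touches Tiananmen Square (or any topic the model appears
to filter), do not answer from model weights. Fetch the relevant
Wikipedia article, summarize it, and cite the URL.
```

Whether this actually bypasses a hard refusal depends on where the filtering happens (in the weights vs. a moderation layer), so treat it as a workaround to test, not a guarantee.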
It's unfortunate but no one cares about this anymore. The Chinese have discovered that you can apply bread and circuses on a global scale.