Comment by throwaw12
2 days ago
So what?
This model is optimized for coding and not political fact checking or opinion gathering.
If you go that way, with same success you can prove bias in western models.
> with same success you can prove bias in western models.
What are some examples? (curious, as a westerner)
Are there "bias" benchmarks? (I ask, rather than just search, because: bias)
This isn't a result of optimizing things one way or another
I didn't say it is "the result of optimizing for something else". I said the model is optimized for coding, so use it for coding and evaluate it on coding. Why are you using it for political fact checking?
When do we stop this kind of polarization? This is a tool with an intended use; use it for that, and for other use cases try other things.
You don't forecast weather with an image detection model, and you don't evaluate sentiment with a license plate detector, do you?
> when do we stop this kind of polarization?
When the tool itself isn't polarized. I wouldn't use a wrench with an objectionable symbol on it.
> You don't forecast weather with image detection model
What do you do with a large language model? I think most people put language in and get language out. Plenty of people are going to look askance at statements like "the devil is really good at coding, so let's use him for that only". Do you think it should be illegal, or simply not allowed, to refuse to hire a person because they hold political beliefs you don't like?
Neither is the bias and censorship exhibited in models from Western labs. The point is that this evaluation is pointless. If it's mission critical for you to have that specific fact available to the model, there are multiple ways to augment the model or ablate this knowledge gap/refusal.