Comment by megous
1 year ago
It's pretty clear to me what the commenter means even if they don't use the words you like/expect.
The model is built by machine from a massive set of data. Humans at Google may not like the output of a particular model due to their own sensibilities, so they try to "tune it" and "filter both input/output" to limit what others can do with the model to fit Google's sensibilities.
Google stated as much in their announcement recently. Their whole announcement was filled with words like "responsibility", "safety", etc., alluding to a lot of censorship going on.
Censorship of what? You object to Google applying its own bias (toward avoiding offensive outcomes) but you're fine with the biases inherent to the dataset.
There is nothing the slightest bit objective about anything that goes into an LLM.
Any product from any corporation is going to be built with its own interests in mind. That you see this through a political lens ("censorship") only reveals your own bias.
I have not said anything about objectivity.
Eg. "political sensibility" filter at the output of the model only reveals bias on the Google side. (They're not hiding it really) I don't have any bias in what I'm saying. It's just facts and nothing more - simply stating there's a filter and it reflects Google's sensibilities.
About as controversial as stating that Facebook doesn't like nipples, or whatever.