Comment by 2ndorderthought
1 day ago
People have shown censorship and changes of tone on questions related to Israel in US chatbots.
For the record, none of this bothers me. Will I ever discuss Tiananmen Square with an LLM? Nope. How about Israel? Nope.
LLMs are basically stochastic parrots designed to sway and surveil public opinion. The upside of the Chinese models is that if you run them locally, you avoid at least half of those issues.
First they came for people asking about Tiananmen Square
And I did not speak out
Because I was not asking about Tiananmen Square
Then they came for people asking about Israel
And I did not speak out
Because I was not asking about Israel
This made me chuckle.
I didn't mean to dismiss ethical accountability for LLM training corpora. It is a shame.
I do mean to say that we have no control over it; there's almost nothing we as average citizens can do to improve the ethical or safety concerns of LLMs or related technologies. Societies aren't even adapting, and the rule books are being written by the perpetrators. Might as well get out of it what we can while we can.
Wonder if stuff like this would affect it?
https://github.com/p-e-w/heretic
Guessing it probably would?
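For context, heretic automates "abliteration"-style decensoring: finding a direction in the model's activation space associated with refusals and removing it. A toy numpy sketch of that core idea (directional ablation), using made-up activation data rather than a real model:

```python
import numpy as np

# Toy sketch of directional ablation ("abliteration"), the technique
# tools like heretic automate. Assumption: we already collected
# hidden-state activations for prompts the model refuses vs. prompts
# it answers normally (here, random stand-in data).
rng = np.random.default_rng(0)
d = 8  # hidden size (toy)

# Fake "refusal" activations with an extra offset along one axis.
refusal_acts = rng.normal(size=(32, d)) + np.array([3.0] + [0.0] * (d - 1))
normal_acts = rng.normal(size=(32, d))

# The "refusal direction" is the normalized difference of means.
direction = refusal_acts.mean(axis=0) - normal_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(h: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of activations h along unit direction v."""
    return h - np.outer(h @ v, v)

ablated = ablate(refusal_acts, direction)
# After ablation the activations have (numerically) zero component
# along the refusal direction.
print(np.abs(ablated @ direction).max())
```

A real tool has to do this across layers and bake the edit into the weights, but the projection step above is the heart of it, which is why model-level answer-refusal would likely be affected while dataset-level censorship (topics simply absent from training data) would not.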