Comment by qntmfred
21 days ago
> Humans have been trained, for better or worse, that computers are effectively infallible
A TI-82, sure. But I doubt many public opinion surveys would reveal that most people think computers generally, or LLMs specifically, are infallible. If you are aware of data suggesting otherwise, I'd be interested to see it.
I remain much more concerned about the damage done by humans unquestioningly believing the things other humans say.
You should really try some of the LLMs made by companies in China. Then ask about a culturally sensitive topic the Chinese government would prefer you didn't discuss. Offline, of course.
Humans are still responsible for the content that ultimately feeds into and drives a model. If anything, LLMs are another opportunity to amplify a particular viewpoint, which falls squarely under the concern you raise.