Comment by qntmfred
22 days ago
> You assume hundreds of millions of users could identify serious mistakes when they see them. But humans have demonstrated repeatedly that they can't.
The same is true for humans whether they're interacting with LLMs or other humans, so I'm inclined to take statements like
> I don't think it can ever be understated how dangerous this is.
as hysteria
> > I don't think it can ever be understated how dangerous this is.
> as hysteria
It's not hysteria. Humans have been trained, for better or worse, that computers are effectively infallible. Computed answers are going from near-100% correct to not-even-80% correct in an extremely short time.
It's not hysteria to say this is a dangerous combination.
> Humans have been trained, for better or worse, that computers are effectively infallible
A TI-82, sure. I doubt many public opinion surveys would reveal that most people think computers generally, or LLMs specifically, are infallible. If you are aware of data suggesting otherwise, I'd be interested to see it.
I remain much more concerned about the damage done by humans unquestioningly believing the things other humans say.
You should really try some of the LLMs made by companies in China. Then ask about a culturally sensitive topic the Chinese government would prefer you didn't discuss. Offline, of course.
Humans are still responsible for the content that ultimately feeds and drives a model. If anything, LLMs are another opportunity to amplify a particular viewpoint, which falls squarely under the concerns you describe.
While some humans are occasionally confidently wrong, if it becomes a pattern with someone we move them to a different role in the organization, stop trusting them, stop asking their opinion on that subject, or remove them from the org entirely.
Far more often than being confidently wrong, the human will say: I'm not positive on the answer, let me double-check and get back to you.
> I'm not positive on the answer, let me double-check and get back to you.
Absolutely fair. I expect that LLMs will continue to get better at doing the same.