Comment by inetknght
21 days ago
> ChatGPT alone has hundreds of millions of active users that are clearly getting value from it, despite any mistakes it may make.
You assume hundreds of millions of users could identify serious mistakes when they see them.
But humans have demonstrated repeatedly that they can't.
I don't think it can ever be overstated how dangerous this is.
> I think Apple probably needs to shift to using cloud services more
You ignore lessons from the recent spat between Apple and the UK.
> You assume hundreds of millions of users could identify serious mistakes when they see them. But humans have demonstrated repeatedly that they can't.
The same is true for humans whether they're interacting with LLMs or with other humans, so I'm inclined to take statements like
> I don't think it can ever be overstated how dangerous this is.
as hysteria.
> > I don't think it can ever be overstated how dangerous this is.
> as hysteria
It's not hysteria. Humans have been trained, for better or worse, to treat computers as effectively infallible. Computed answers are going from near-100% correct to not-even-80% correct in an extremely short time.
It's not hysteria to say this is a dangerous combination.
> Humans have been trained, for better or worse, to treat computers as effectively infallible
A TI-82, sure. I doubt many public opinion surveys would reveal that most people think computers generally, or LLMs specifically, are infallible. If you are aware of data suggesting otherwise, I'd be interested to see it.
I remain much more concerned about the damage done by humans unquestioningly believing the things other humans say.
While some humans are occasionally confidently wrong, if that seems to be a pattern with someone, we move them to a different role in the organization, stop trusting them, stop asking their opinion on that subject, or remove them from the org entirely.
Far more often than being confidently wrong, a human will say: I'm not positive on the answer, let me double-check and get back to you.
> I'm not positive on the answer, let me double-check and get back to you.
Absolutely fair. I expect that LLMs will continue to get better at doing the same.