Comment by dylan604
2 years ago
> I'm unsure where this expectation of 100% absolute correctness comes from.
It's a computer. That's why. Change the concept slightly: would you use a calculator if you had to wonder whether the answer was correct or maybe it just made it up? Most people feel the same way about any computer-based anything. I personally feel these inaccuracies/hallucinations/whatevs only put them one rung up from practical jokes. Like I honestly feel the devs are fucking with us.
Speech to text is often wrong too. So is autocorrect. And object detection. Computers don't have to be 100% correct in order to be useful, as long as we don't put too much faith in them.
Call me old fashioned, but I would absolutely like to see autocorrect turned off in many contexts. I much prefer to read messages with 30% more transparent errors rather than any increase in opaque errors. I can tell what someone meant if I see "elephent in the room", but not "element in the room" (not an actual example, autocorrect would likely get that one right).
Your caveat is not the norm though; everyone is putting a lot of faith in them, and that's part of the problem. I've talked with people who aren't developers but are otherwise smart individuals, and they have absolutely not considered that the info might not be correct. The readers here are a bit too close to the subject, and sometimes I think it is easy to forget that the vast majority of the population do not truly understand what is happening.
Nah, I don’t think anything has the potential to build critical thinking like LLMs en masse. I only worry that they will get better. It’s when they are 99.9% correct we should worry.
People put too much faith in conspiracy theories they find on YT, TikTok, FB, Twitter, etc. What you're claiming is already not the norm. People already put too much faith into all kinds of things.
Okay, but search is done on a computer, and like the person you’re replying to said, we accept close enough.
I don’t necessarily disagree with your interpretation, but there’s a revealed preference thing going on.
The number of non-tech ppl I’ve heard directly reference ChatGPT now is absolutely shocking.
> The number of non-tech ppl I've heard directly reference ChatGPT now is absolutely shocking.
The problem is that a lot of those people will take ChatGPT output at face value. They are wholly unaware of its inaccuracies or that it hallucinates. I've seen it too many times in the relatively short amount of time that ChatGPT has been around.
So what? People do this with Facebook news too. That's a people problem, not an LLM problem.
So you're saying we need a Ministry of Truth to protect people from themselves? This is the same argument used to suppress "harmful" speech on any medium.
Why should all computing be deterministic?
Let me show you what this "genius"/"wrong-thinking" person has to say about AL (artificial life) and deterministic computing.
https://www.cs.unm.edu/~ackley/
https://www.youtube.com/user/DaveAckley
To sum up a bunch of their content: you can make intractable problems solvable/crunchable if you allow just a little error into the result (error that shrinks the longer the calculation runs). And this is acceptable for a number of use cases where initial accuracy is less important than instant feedback.
It is radically different from the von Neumann model of a computer, where a deterministic 'totalitarian finger pointer' pointing at one register (and only one register at a time) is an inherently limiting factor. In this model, each computational resource (a unit of RAM plus a processing unit) fights for and coordinates reality with its neighbors, without any central coordination.
Really interesting stuff, still in its infancy...
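To make the flavor concrete, here's a tiny toy sketch of my own in Python (plain gossip averaging, not Ackley's actual architecture; the grid size and tick counts are arbitrary): a grid of cells estimates the global average of its initial values purely by swapping with random neighbors. There's no central coordinator, the answer is "wrong" from the first tick, and the error shrinks the longer it runs.

```python
# Toy illustration only: decentralized, approximate computation via local
# neighbor exchanges. Every cell's value drifts toward the global mean even
# though no cell ever sees the whole grid.
import random

SIZE = 16  # arbitrary grid size for the demo

# Each cell starts with its own local value; the "right answer" is the mean.
grid = [[random.uniform(0, 100) for _ in range(SIZE)] for _ in range(SIZE)]
true_mean = sum(sum(row) for row in grid) / (SIZE * SIZE)

def step(grid):
    """One asynchronous round: each update is a purely local pairwise exchange."""
    for _ in range(SIZE * SIZE):
        # Pick a random cell and a random orthogonal neighbor (toroidal wrap).
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
        # The pair averages its two values -- no global view, no coordinator.
        avg = (grid[x][y] + grid[nx][ny]) / 2
        grid[x][y] = grid[nx][ny] = avg

for tick in range(201):
    step(grid)
    if tick % 50 == 0:
        worst = max(abs(cell - true_mean) for row in grid for cell in row)
        print(f"tick {tick:3d}: worst-case error {worst:.4f}")
```

Obviously that's just gossip/consensus averaging, not anything Ackley built, but it shows the trade: you give up the deterministic single point of control, every cell only ever talks to its neighbors, and "more time" buys "less error" instead of the answer flipping from undefined to exact.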
"Computer says no" is not a meme for no reason.