Comment by bitvoid
2 years ago
> The number of non-tech ppl I've heard directly reference ChatGPT now is absolutely shocking.
The problem is that a lot of those people take ChatGPT output at face value. They are wholly unaware of its inaccuracies or that it hallucinates. I've seen it happen too many times in the relatively short time that ChatGPT has been around.
So what? People do this with Facebook news too. That's a people problem, not an LLM problem.
People on social media are absolutely 100% posting things deliberately to fuck with people. They are actively seeking to confuse people and cause chaos, divisiveness, and other ill-intended outcomes. Unless you're saying that LLM developers are actively doing the same thing, I don't think comparing what people find on the socials to what they get back as a response from a chatbot is a logical comparison at all.
There are far more people who post obviously wrong, confusing and dangerous things online with total conviction. There are people who seriously believe Earth is flat, for example.
How is that any different from what these AI chatbots are doing? They make stuff up that they predict will be rewarded highly by humans who look at it. This is exactly what leads to truisms like "rubber duckies are made of a material that floats over water" - which looks like it should be correct, even though it's wrong. It really is no different from Facebook memes that are devised to get a rise out of people and be widely shared.
3 replies →
If we rewind a little bit to the mid-to-late 2010s, filter bubbles, recommendation systems, and unreliable news spreading on social media were a big problem. It was a simpler time, but we never really solved the problem. Point is, I don't see the existence of other problems as an excuse for LLM hallucination, and writing it off as a "people problem" really undersells how hard it is to solve people problems.
Literally everything is a "people problem"
You can kill people with a fork; it doesn't mean you should legally be allowed to own a nuclear bomb "because it's just the same". The problem always comes from scale and accessibility.
So you're saying we need a Ministry of Truth to protect people from themselves? This is the same argument used to suppress "harmful" speech on any medium.
I've gotten to the point where I want "advertisement" stamped on anything that is, and I'm getting to the point where I want "fiction" stamped on anything that is. I have no problem with fiction existing. It can be quite fun. People trying to pass fiction off as fact is a problem, though. Trying to force a "fact" stamp would be problematic too, so I'd rather label everything else.
How to enforce it is the real sticky wicket, though, so it's only something best discussed at places like this or while sitting around chatting while consuming
And who gets to control the "fiction" stamp? Especially for hot button topics like covid (back in 2020)? Should asking an LLM about lab leak theory be auto-stamped with "fiction" since it's not proven? But then what if it's proven later?