Comment by gortok
7 days ago
I think folks have taken the wrong lesson from this.
It’s not that they added a new feature because there was demand.
They added a new feature because the technology hallucinated one that didn’t exist.
The savior of tech, generative AI, was telling folks a feature existed when it didn’t.
That’s the real headline, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign.
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again
At the moment, that would be a world without generative AI available to the public. Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists, which are both completely irrational, since many people are finding practical value in its current imperfect state.
LLMs in their current state are useful for what they're useful for, warnings about hallucinations are present on every official public interface, and their limitations are quickly understood with any real use.
Nearly everyone in AI research is working on this problem, directly or indirectly.
> which are both completely irrational
Really!?
[0] https://i.imgur.com/ly5yk9h.png
Your screenshot is conveniently omitting the disclaimer below: "AI responses may include mistakes. Learn more[1]"
[1]: https://support.google.com/websearch/answer/14901683
No one is “requiring perfection”, but hallucination is a major issue and is in the opposite direction of the “goal” of AGI.
If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
> No one is “requiring perfection”
> If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
Those sentences aren't compatible.
> but hallucination is a major issue
Again, every official public AI interface has warnings/disclaimers for this issue. It's well known. It's not some secret. Every AI researcher is directly or indirectly working on this.
> is in the opposite direction of the “goal” of AGI
This isn't a logical statement, so it's difficult to respond to. Hallucination isn't a direction anyone is heading towards; it's something being actively steered away from, with intent and $$$.
> Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists
What?? What does AGI have to do with this? (If this was some kind of hyperbolic joke, sorry, I didn't get it.)
But, more importantly, the GP only said that in a sane world, the ChatGPT creators should be the ones trying to fix this mistake on ChatGPT. After all, it's obviously a mistake on ChatGPT's part, right?
That was the main point of the GP post. It was not about "requiring perfection" or something like that. So please let's not attack a straw man.
> What does AGI have to do with this?
Their requirement is no hallucinations [1], also stated as "be sure it didn't happen again" in the original comment. If you define a hallucination as something that wasn't in the training data, directly or indirectly (indirectly being something like an "obvious" abstract concept), then you've placed a profound constraint on the system, requiring determinism. Given the non-deterministic statistics these models run on, that requirement fundamentally rules out using LLMs as they exist today. They're not "truth" machines; use a database instead.
Saying "I don't know", with determinism is only slightly different than saying "I know" with determinism, since it requires being fully aware of what you do know, not at a fact level, but at a conceptual/abstract level. Once you have a system that fully reasons about concepts, is self aware of its own knowledge, and can find the fundamental "truth" to answer a question with determinism, you have something indistinguishable from AGI.
Of course, there's a terrible hell that lives between those two, in the form of: "Error: Question outside of known questions." I think a better alternative to this hell would be a breakthrough that allowed "confidence" to be quantified. So, accept that hallucinations will exist, but present uncertainty to the user.
[1] https://news.ycombinator.com/item?id=44496098
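To make that last idea concrete, here's a minimal sketch of presenting uncertainty to the user, assuming a hypothetical interface that exposes per-token log-probabilities (the values below are made up for illustration); averaging them is a crude proxy for confidence, not a calibrated measure:

    # Hypothetical sketch: map per-token log-probabilities from a generated
    # answer to a coarse confidence label shown alongside the response.
    # Assumption: the serving layer exposes these log-probabilities at all.
    import math

    def confidence_label(token_logprobs: list[float]) -> str:
        """Convert mean token probability into a rough confidence bucket."""
        if not token_logprobs:
            return "unknown"
        mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
        if mean_prob > 0.9:
            return "high confidence"
        if mean_prob > 0.6:
            return "medium confidence"
        return "low confidence: verify this answer elsewhere"

    # Made-up log-probabilities for the tokens of one answer.
    print(confidence_label([-0.05, -0.2, -0.8, -1.6]))  # -> low confidence

Even something this crude would let the interface flag the answers the model itself was least sure about, rather than presenting everything with equal authority.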
There was demand for a solution to the problem. ChatGPT created demand for this particular solution.
Sometimes you just have to deal with the world as it is, not how you think it should be.
Is it your argument that the folks that make generative AI applications have nothing to improve from this example?
If I were someone who ran ChatGPT, then yeah, I'd have something to improve. But most of us aren't and can't do anything about it, so we might as well turn lemons into lemonade instead of getting hung up on the obvious and unchangeable fact that our future is filled with AI garbage.
Sure, but this isn't new. LLMs have been doing this for ages. That's why people aren't talking about it as much.
You sound like all the naysayers when Wikipedia was new. Did you know anybody can go onto Wikipedia and edit a page to add a lie‽ How can you possibly trust what you read on there‽ Do you think Wikipedia should issue groveling apologies every time it happens?
Meanwhile, sensible people have concluded that, even though it isn’t perfect, Wikipedia is still very, very useful – despite the possibility of being misled occasionally.
> despite the possibility of being misled occasionally.
There is a chasm of difference between being misled occasionally (Wikipedia) and frequently (LLMs). I don’t think you understand how much effort goes on behind the scenes at Wikipedia. No, not everyone can edit every Wikipedia page willy-nilly. Pages for major political figures often can only be edited with an account. IPs like those of iCloud Private Relay are banned and can’t anonymously edit the most basic of pages.
Furthermore, Wikipedia was always honest about what it is from the start. They managed expectations, underpromised and overdelivered. The bozos releasing LLMs talk about them as if they had created the embryo of a god and as if giving money to their religion will solve all your problems.
20 years ago, though, I think our teachers had the right idea when they said Wikipedia wasn't a reliable source and couldn't be counted as one. It's much better these days, but I checked an old revision (the article on 9/11) the other day: barely anything was sourced, parts were written in the first person, and there was a lot of emotive language.
> I don’t think you understand how much effort goes on behind the scenes at Wikipedia.
I understand Wikipedia puts effort in, but it's irrelevant. As a user, you can never be sure that what you are reading on Wikipedia is the truth. There are good reasons to assume that certain topics are safer and certain topics less safe, but there are no guarantees. The same is true of AI.
> Wikipedia was always honest about what it is from the start.
Every mainstream AI chatbot includes wording like “ChatGPT can make mistakes. Check important info.”
OK, so how do I edit ChatGPT so it stops lying then?
You have ignored my point.
A technology can be extremely useful despite not being perfect. Failure cases can be taken into consideration rationally without turning it into a moral panic.
You have no ability to edit Wikipedia to stop it from lying. Somebody can come along and re-add the lie a millisecond later.