Comment by aaronbaugher
2 days ago
A friend of mine asked an AI for a summary of a pending Supreme Court case. It came back with the decision, majority arguments, dissent, the whole deal. Only problem was that the case hadn't been decided yet. It had made up the whole thing, and admitted as much when called on it.
A human law clerk could make a mistake, like "Oh, I thought you said 'US v. Wilson,' not 'US v. Watson.'" But a human wouldn't just make up a case out of whole cloth, complete with pages of details.
So it seems to me that AI mistakes will be unlike the human mistakes that we're accustomed to and good at spotting from eons of practice. That may make them harder to catch.
I think it's more that the clerk would say, "There never was a US v. Wilson" (well, there probably was, given how common that name is, but work with me). The AI doesn't have a concept of "maybe I misunderstood the question." It would likely give you a good summary if the case happened, but if it didn't, it makes one up.
Yes. That is precisely the problem with using LLMs. They wantonly make up text that has no basis in reality. That is the one and only problem I have with them.
It would be kind of funny if we built a space probe with an LLM and shot it out into space. Many years later, intelligent life from far away discovers it, and it somehow leads to our demise due to badly hallucinated answers.