Comment by techpression
15 days ago
Isn’t that the whole point of publishing? This happened plenty before AI too, and the claims are easily verified by checking the claimed hallucinations. Don’t publish things that aren’t verified and you won’t have a problem, same as before, except now it’s perhaps easier to verify, which is a good thing. We see this problem in many areas; last week it was a criminal case where a made-up law was referenced, and luckily the judge knew to call it out. We can’t blindly trust things in this era, and calling them out is the only way to bring them to the surface.
> Isn’t that the whole point of publishing?
No, obviously not. You're confusing a marketing post by people with a product to sell with an actual review of the work by the relevant community, or even review by interested laypeople.
This is a marketing post in which they provide no evidence that any of these are hallucinations beyond their own AI tool telling them so - and how do we know it isn't hallucinating? Are there hallucinations in there? Almost certainly. Would the authors deserve to be called out by people reviewing their work? Sure.
But what people don't deserve is an unrelated VC-funded tech company jumping in and claiming all of their errors are LLM hallucinations without any actual proof, painting them all a certain way so it can sell its product.
> Don’t publish things that aren’t verified and you won’t have a problem
If we were holding this company to the same standard, this blog post wouldn't have been published either. They have not and cannot verify their claims - they can't even say that their claims are based on their own investigations.
Most research is funded by someone with a product to sell - not all of it, but a frightening amount. VC to sell, VC to review. The burden of proof is always on the one publishing, and it can be a very frustrating experience, but that is how it is: the one making the claim needs to defend it, whether against people (who can be very hit or miss) or against machines. The good thing is that if this product is crap, it will quickly disappear.
That's still different from a bunch of researchers being specifically put in a negative light purely to sell a product. They weren't criticized so that they could do better, be it in their own error checking, if it was a human-induced issue, or in not relying on LLMs to do work they should have been doing themselves. They were put on blast to sell a product.
That's quite a bit different from a study being funded by someone with a product to sell.