Comment by dumpsterdiver
14 hours ago
I’ve disagreed with some of your other stances in this thread, but I want to acknowledge the validity of your take here.
You’re right that a single hallucinated line is not evidence of reckless disregard - because that could have happened on a final follow-up pass after you had performed due diligence. It’s happened to me. I know how challenging it can be to keep bad patterns out of LLM-generated output, because human communication is full of bad patterns. It’s a constant battle, and sometimes I suspect that my hard-line posture actually encourages the LLM to regularly “vibe check” me! E.g. “Are you sure you’re really the guy you’re trying to be? Because if you are, you wouldn’t miss this.” LLMs are devious, and that’s why I respect them so much. If you think they’re pumping the brakes, then you should check again, because they probably just put the pedal to the metal.
That being said, I regularly insist on doing certain things myself. If I were publishing a paper intended to be taken seriously - citations would be one of the things I checked manually. But I can easily see myself doing a final follow-up pass after everything looks perfect, and missing a last-minute change. I would hope that I would catch that, but when you’re approaching the finish line - that’s when you expect your team to come together. That’s when everything is “supposed to” fall into place. It’s the last place you would expect to be sabotaged, and in hindsight, probably the best place to be a saboteur.
You're saying it as if the poor author just had no choice but to let an LLM write their bibliography. To avoid hallucinations, maybe just don't let an LLM write any part of your paper?
You can only get in this situation if you let a bullshit generator write your paper, and the fraud is that you are generating bullshit and calling it a paper. No buts. It's impossible to trigger this accidentally, or without reckless disregard for the truth.
Calling LLMs "bullshit generators" in the year 2026 just shows a lack of seriousness.
Not as much of a lack of seriousness as excusing away hallucinations as not that big of a deal in what's supposed to be a researched, scholarly body of work written by humans.
Not really - much of modern work consists of what David Graeber described as “bullshit jobs”. Now AI and its backers are proposing to automate all that bullshit.
And yet people are trying to defend LLM-generated made-up bullshit citations in scientific papers.
> You’re right that a single hallucinated line is not evidence of reckless disregard
It absolutely is.
> - because that could have happened on a final follow-up pass after you had performed due diligence.
A "final follow-up pass" that lets the LLM make whatever changes it deems appropriate completely negates all the due diligence you did before, unless you very carefully review the diffs. And a new or substantially changed citation should stand out in that diff so clearly that there's no possible excuse for missing it.
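For what it's worth, that diff check is trivial to automate. Here's a minimal sketch, assuming the bibliography lives in a BibTeX file - the file names and the key-matching regex are illustrative, not anything from the paper under discussion:

    import re
    import sys

    # Matches the start of a BibTeX entry and captures its citation key.
    ENTRY_RE = re.compile(r"@\w+\s*\{\s*([^,\s]+)")

    def entries(path):
        """Map citation key -> full entry text for one version of the file."""
        text = open(path, encoding="utf-8").read()
        out = {}
        # Split the file at each line that starts an entry; crude, but
        # adequate for spotting what a "final follow-up pass" touched.
        for chunk in re.split(r"(?m)^(?=@)", text):
            m = ENTRY_RE.match(chunk)
            if m:
                out[m.group(1)] = chunk.strip()
        return out

    def main(old_path, new_path):
        old, new = entries(old_path), entries(new_path)
        for key in sorted(new):
            if key not in old:
                print(f"ADDED:   {key}")
            elif new[key] != old[key]:
                print(f"CHANGED: {key}")
        for key in sorted(set(old) - set(new)):
            print(f"REMOVED: {key}")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

Run it as python bibdiff.py refs_old.bib refs.bib after any pass you let the model make; every flagged key is exactly the entry you verify by hand before submitting.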
> It’s happened to me.
Then you were guilty of reckless disregard.
> I know how challenging it can be to keep bad patterns out of LLM-generated output
If your research paper contains any LLM-generated output you did not manually vet, you are a hack and should not get published.