Comment by small_scombrus
10 hours ago
I think they're saying that frontier LLMs may be usable to spot citations that are correct by shape (a real citation) but incorrect by usage (unrelated to the text)
I kind of hate the idea, but you probably could run a lazy LLM check over every paper and every citation and have it flag possibly wrong (second sense) citations for human review
But you'd need a LOT of tokens and a LOT of human-hours
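To make the idea concrete, a minimal sketch of that flagging pipeline. All the names here (`relevance_score`, `flag_for_review`, the threshold) are hypothetical, and a crude token-overlap heuristic stands in for the actual frontier-LLM call so the example is self-contained:

```python
# Hypothetical sketch: flag citations whose cited text looks unrelated
# to the claim they support. A real system would call an LLM here; the
# token-overlap score below is just a self-contained stand-in.

def relevance_score(claim: str, cited_abstract: str) -> float:
    """Fraction of the claim's words that also appear in the cited abstract."""
    claim_words = set(claim.lower().split())
    abstract_words = set(cited_abstract.lower().split())
    if not claim_words:
        return 0.0
    return len(claim_words & abstract_words) / len(claim_words)

def flag_for_review(citations, threshold=0.2):
    """Return citations scoring below the threshold, for human review."""
    return [c for c in citations
            if relevance_score(c["claim"], c["abstract"]) < threshold]

citations = [
    # Plausibly related citation: shares key terms with the claim.
    {"claim": "transformers scale with data",
     "abstract": "we study scaling laws for transformers trained on large data"},
    # "Correct by shape, incorrect by usage": a real-looking but unrelated cite.
    {"claim": "transformers scale with data",
     "abstract": "a field survey of alpine beetle migration patterns"},
]
flagged = flag_for_review(citations)  # only the unrelated citation is flagged
```

Even this toy version shows where the cost goes: one score per (claim, citation) pair across every paper, then human time on everything flagged.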
> have it flag possible wrong (second sense) citations for human review
And then what, we're done? How have we avoided the need for the same exhaustive human review? It only saves human review time if you trust the LLM not to miss things.
If the goal is to review every citation fully with 100% accuracy, then, sure, exhaustive human review is needed. But I suspect human review of a random sample would add value, catching some fraud, missing others, but having zero false positives (or as close to zero as human review can get).
An LLM could replace the random sampling. It doesn't need to be particularly good for the approach to provide value. I would worry about LLM bias though.
Another thing to consider is that readers can detect fake citations after publication and report them to arXiv, after which the author gets banned.