Comment by helterskelter
14 hours ago
You could at least filter out hallucinated references which simply don't exist pretty trivially, I'd imagine.
It's more than that: if there are mistakes, you can also be flagged.
Read the whole tweet:
If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
If you read the whole series of tweets, it's obvious that is not their intention: there needs to be "incontrovertible evidence that the authors did not check the results of LLM generation" for the penalty to apply.
It's not hard to divine their intentions: you are entirely responsible for what you submit, and if it's clearly slop(py) you get a ban. In a reply they state that they are seeking to apply this rule fairly and accurately, and that they are mindful of unintended effects.