Comment by davidguetta
1 day ago
Yeah, even with the entire "Jane Doe / Jame Smith" thing, my first thought was that it could have been a LaTeX default value.
There was dumb stuff like this before the GPT era; it's far from convincing.
> Between 2020 and 2025, submissions to NeurIPS increased more than 220% from 9,467 to 21,575. In response, organizers have had to recruit ever greater numbers of reviewers, resulting in issues of oversight, expertise alignment, negligence, and even fraud.
I don’t think the point being made is “errors didn’t happen pre-GPT”; rather, the tasks of detecting errors have become increasingly difficult because of the associated effects of GPT.
> rather the tasks of detecting errors have become increasingly difficult because of the associated effects of GPT.
Did the increase in submissions to NeurIPS from 2020 to 2025 happen because ChatGPT came out in November of 2022? Or was AI getting hotter and hotter during this period, thereby naturally increasing submissions to ... an AI conference?
I was an area chair on the NeurIPS program committee in 1997. I just looked and it seems that we had 1280 submissions. At that time, we were ultimately capped by the book size that MIT Press was willing to put out - 150 8-page articles. Back in 1997 we were all pretty sure we were on to something big.
I'm sure people made mistakes on their bibliographies at that time as well!
And did we all really dig up and read Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953)?
Edited to add: Someone made a chart! Here: https://papercopilot.com/statistics/neurips-statistics/
You can see the big bump after the book-length restriction was lifted, and the exponential rise starting ~2016.
I guess the way one would verify that this is a more general trend in academia would be to run this on accepted papers at a non-AI conference?
There are people who just want to punish academics for the sake of punishing academics. Look at all the people downthread salivating over blacklisting people who make errors like this, or even charging them with felony fraud. It's the perfect brew of anti-AI and anti-academia sentiment.
Also, in my field (economics), by far the biggest source of finding old papers invalid (or less valid; most papers state multiple results) is good old-fashioned coding bugs. I'd like to see the software engineers on this site say with a straight face that writing bugs should lead to jail time.
And research codebases (in AI and otherwise) are usually of extremely bad quality: a pile of poorly written scripts, with no indication of which order to run them in, how inputs and outputs should flow between them, or which specific files the scripts were run on to calculate the statistics presented in the paper.
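For what it's worth, even a tiny driver script that records the run order and the file hand-offs fixes most of that. A rough sketch in Python (every script and file name below is invented for illustration, not taken from any real codebase):

    # run_all.py -- hypothetical single entry point; documents which
    # script runs when and which files feed which step.
    import subprocess
    import sys

    STEPS = [
        # (script, what it reads, what it writes)
        ("01_clean_data.py",  "data/raw.csv",           "data/clean.csv"),
        ("02_fit_model.py",   "data/clean.csv",         "results/estimates.json"),
        ("03_make_tables.py", "results/estimates.json", "paper/table1.tex"),
    ]

    for script, reads, writes in STEPS:
        print(f"{script}: {reads} -> {writes}")
        # check=True stops the pipeline on a failed step instead of
        # silently carrying stale numbers into the paper.
        subprocess.run([sys.executable, script], check=True)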
> I'd like to see the software engineers on this site say with a straight face that writing bugs should lead to jail time.
My hand is up.
I do not believe in gaol, but I do agree with the sentiment.
Let he who is without sin cast the first stone…
Still a citation to a work you clearly have not read...