
Comment by crystal_revenge

2 days ago

> A lot of people also don't know that many of the well known papers are just variations on small time papers with a fuck ton more compute thrown at the problem.

I worked for a small, research-heavy AI startup for a while, and it was heartbreaking how many people I met in that space who had worked hard and passionately on research, only to be beaten to the punch by a famous lab that could rush a paper out quicker and at a larger scale.

There were also more than a few instances of high-probability plagiarism. A paper my team had published years earlier was basically rewritten without citation by a major lab. After some complaining they added a footnote. But it doesn't really matter, because no big lab is going to have to defend itself publicly against some small startup, and their job at the big labs is to churn out papers.

  > only to have been beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.

This added at least a year to my PhD... Reviewers kept rejecting my work with comments like "add more datasets." That's nice and all, but on the few datasets I did use, I beat top labs with a tenth of the compute. I'd have loved to add more datasets, but even at a tenth of the compute I had already blown my entire compute budget. Apparently state-of-the-art results, a smaller model, higher throughput, and third-party validation weren't enough (at least not if you use an unpopular model architecture).

I always felt like my work was being evaluated as an engineering product, not as research.

  > a few instances of high-probability plagiarism

I was once reviewing a paper where I genuinely couldn't tell whether the researchers knew they had ripped me off. They compared against my method, citing it and showing figures using it, but then dropped its performance metrics from the table. So I asked. When I got the numbers back, there was no difference... I dug in and worked out that their method was 99% mine, with added complexity (computational overhead). I was pretty upset.

I was also upset because the paper was otherwise good. The results were nice, and they even tested our method in a domain we hadn't. Had they just been upfront, I would have gladly accepted the work, though I'm pretty confident the other reviewers would have rejected it for "lack of novelty."

It's a really weird system that we've constructed. We're our own worst enemies.

  > their job at the big labs is to churn out papers.

I'd modify this slightly: their job is to get citations. Churning out papers really helps with that, but so does all the tweeting and evangelizing of their work. It's an unfortunate truth that as researchers we have to sell our work, and not just on its scientific merit; people have to read it, after all. But we should also note that it's easier for some groups to get noticed than others. Prestige doesn't make a paper good, but it sure acts as a multiplier on all the metrics we use to decide whether it is.