Comment by D-Machine
11 hours ago
> A peer reviewer reads a paper and make comments on it. That's it! They don't check primary data, they don't investigate methods, they don't interrogate scientists, they don't re-run experiments just to double check. They assist a journal's editors in editing--that's it.
Um, what? I have done all of these things in reviews, and I know other academics who have done them as well. More confusingly, though, if you are saying most reviewers don't do these things (which I agree with), wouldn't that only strengthen my point?
I'll let readers decide whether it is my comments that exacerbate the problem, or whether, perhaps, it is apologism for journal-based peer review that is causing the bigger issues today.
> Would be interesting if you would be willing to share a paper you reviewed and detail your review process of it. I don't see how one could check primary data or interrogate scientists in a blind review process, for example.
This is IMO just bad-faith sealioning. You can look at the whole replication crisis in psychology and social science (esp. the work of people like Nick Brown and the GRIM test, or Uri Simonsohn), or at sites like Retraction Watch, and see clear evidence of everything I am saying. There are endless papers in ML research documenting issues with test datasets, data duplication, and the like. In plenty of cases all data and code are made open, so it is trivial to check data issues and methods.
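To give a sense of how mechanical some of these checks are: the GRIM test only needs a reported mean and sample size. With n integer-valued responses, the mean must equal some integer total divided by n, so you can enumerate the nearby totals and see whether any of them rounds to the reported mean. A minimal sketch of that idea (my own illustration, not Brown and Heathers' actual code):

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM check: can `mean` (reported to `decimals` places) arise
    from n integer-valued data points?

    Any valid mean is total / n for some integer total, so we test
    the integer totals closest to mean * n.
    """
    target = round(mean, decimals)
    approx = int(mean * n)
    for total in range(approx - 1, approx + 2):
        if round(total / n, decimals) == target:
            return True
    return False

# With n = 28 integer responses, a mean of 5.19 is impossible:
# 145/28 rounds to 5.18 and 146/28 rounds to 5.21.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True
```

Nothing here requires unblinding or lab access; a reviewer can run this kind of check on the numbers printed in the paper itself.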
Also, review is a back-and-forth process with rounds: you almost always interrogate the scientists whose paper you are reviewing; that is almost the definition of peer review. I don't think you have any idea what you are talking about at all.
EDIT: Heck, just hop on over to https://openreview.net/ and take a look at the whole review process for some random paper (e.g. https://openreview.net/forum?id=cp5PvcI6w8_)