Comment by smallmancontrov
2 days ago
When we criticize without proposing a fix or alternative, we promote the implicit alternative of tearing something down without fixing it. This is often much worse than letting the imperfect thing stand. So here's a proposal: do what we do in software.
No, really: we have the same problem in software. Software developers under high pressure to move tickets will often resort to the minor fraud of converting unfinished features into bugs by marking them complete when they are not in fact complete. This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same. Often it's more efficient in both cases to just ignore the problem, which will generally self-correct with time. If not, we have to think about intervention -- but in software this story has played out a thousand times in a thousand organizations, so we know what intervention looks like.
Acceptance testing. That's the solution. Nobody likes it. Companies don't like to pay for the extra workers and developers don't like the added bureaucracy. But it works. Maybe it's time for some fraction of grant money to go to replication, and for replication to play a bigger role in gating the prestige indicators.
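To make the gate concrete, here's a minimal sketch in pytest style. The feature (export_csv) and its checks are made up for illustration; the point is that the ticket only counts as done when a check written against the ticket's acceptance criteria passes, ideally run by someone other than the author.

    import csv
    import io

    def export_csv(rows):
        # Stand-in for the feature under test; in practice this lives in the codebase.
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=["id", "total"])
        writer.writeheader()
        writer.writerows(rows)
        return out.getvalue()

    def test_export_matches_ticket_acceptance_criteria():
        rows = [{"id": 1, "total": 9.5}, {"id": 2, "total": 3.0}]
        parsed = list(csv.DictReader(io.StringIO(export_csv(rows))))
        assert [r["id"] for r in parsed] == ["1", "2"]   # nothing silently dropped
        assert set(parsed[0].keys()) == {"id", "total"}  # promised columns exist

Replication would be the same kind of gate applied to a published claim.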
> This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same.
I completely disagree.
For one, academic standards of publishing are not at all the same as the standards for in-house software development. In academia, a published result is typically regarded as a finished product, even if the result is not exhaustive. You cannot push a fix to the paper later; an entirely new paper has to be written and accepted. And this is for good reason: the paper represents a time-stamp of progress in the field that others can build off of. In the sciences, projects can range from 6 months to years, so a literature polluted with half-baked results is a big impediment to planning and resource allocation.
A better comparison for academic publishing would be a major collaborative open source project like the Linux kernel. Any change has to be thoroughly justified and vetted before it is merged, because mistakes cause other people problems and wasted time and effort. Do whatever you like with your own hobbyist project, but if you plan for it to be adopted and integrated into the wider software ecosystem, your code quality needs to be higher and your interfaces need to be specced out. That's the analogy for academic publishing.
The problems in modern academic publishing are almost entirely caused by the perverse incentives of measuring academic status by publication record (number of publications and impact factor). Lowering publishing standards so academics can play this game better is solving the wrong problem. Standards should be even higher.
Yeah, the alternative to a double-blind review that isn't actually double-blind is a review that actually is double-blind.
The alternative to not enforcing existing rules against plagiarism is to enforce them.
The alternative to ignoring integrity issues, i.e. "minor fraud," in the workplace is to apply ordinary workplace discipline to them.