Comment by ramblenode
2 days ago
> This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same.
I completely disagree.
For one, academic standards of publishing are not at all the same as the standards for in-house software development. In academia, a published result is typically regarded as a finished product, even if it is not exhaustive. You cannot push a fix to the paper later; an entirely new paper has to be written and accepted. And this is for good reason: the paper is a time-stamped record of progress in the field that others can build on. In the sciences, projects can run from six months to several years, so a literature polluted with half-baked results is a major impediment to planning and resource allocation.
A better comparison for academic publishing would be a major collaborative open source project like the Linux kernel. Any change has to be thoroughly justified and vetted before it is merged, because mistakes cost other people time and effort. Do whatever you like with your own hobbyist project, but if you plan for it to be adopted and integrated into the wider software ecosystem, your code quality needs to be higher and your interfaces need to be specced out. That's the analogy for academic publishing.
The problems in modern academic publishing are almost entirely caused by the perverse incentive of measuring academic status by publication record (number of publications and impact factor). Lowering publishing standards so academics can play this game more efficiently is solving the wrong problem. Standards should be even higher.