Comment by pmyteh
4 years ago
I'm currently reviewing for JOSS, and have done so before. In many ways it's a very strange journal: the paper is nearly an afterthought, and the review is focused on the code. But I like it. As you say, the editors take their role seriously. And it makes two valuable contributions.
Firstly, it encourages and structures code review in academia. My own code is almost entirely solo (and messy), so a venue for structured review, and an incentive to make public code more robust, is welcome. Secondly, the culture in some disciplines is that only papers are citable, not code - and JOSS is an end-run around this. I hope this second situation is changing, but we're not there yet, so for the moment JOSS has a valuable role simply in being a 'journal' that assigns DOIs to code packages.
[Scholarly] Code review tools; criteria and implementations?
Does JOSS specify e.g. ReviewBoard, GitHub Pull Request reviews, or Gerrit for code reviews?
The reviews for JOSS happen on GitHub[0], but the journal isn't prescriptive about how you develop your package as long as the code is public. The criteria for a JOSS review are very clear[1].
I don't want to oversell the depth of code review that's possible; not all of the reviewers will be fully expert in whatever tiny cutting-edge area the package serves (which makes correctness checks beyond the test suite difficult), and most of us are academics-who-code rather than research software engineers. But the fact that it's happening at all is a great step forward.
[0]: https://github.com/openjournals/joss-reviews/issues
[1]: https://joss.readthedocs.io/en/latest/review_criteria.html
Thanks for the citations. Looks like Wikipedia has "software review" and "software peer review":
https://en.wikipedia.org/wiki/Software_review
https://en.wikipedia.org/wiki/Software_peer_review
I'd add "Antipatterns" > "Software" https://en.wikipedia.org/wiki/Anti-pattern#Software_design
and "Code smells" > "Common code smells" https://en.wikipedia.org/wiki/Code_smell#Common_code_smells (a small example is sketched after this list)
and "Design smells" for advanced reviewers: https://en.wikipedia.org/wiki/Design_smell
and CWE (Common Weakness Enumeration) identifiers, and thus URLs, for issues from the CWE Top 25 and beyond: https://cwe.mitre.org/top25/
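To make the "common code smells" item above concrete, here is a minimal, hypothetical Python sketch of one smell a reviewer might flag - magic numbers and duplicated logic - alongside a typical refactoring. The function and constant names are invented for illustration, not taken from any JOSS package.

    # Hypothetical "before": magic numbers and duplicated conversion logic,
    # the kind of smell a reviewer might point out.
    def summarize_smelly(readings_c):
        fahrenheit = [r * 9 / 5 + 32 for r in readings_c]   # unnamed constants
        hottest = max(readings_c) * 9 / 5 + 32              # same formula repeated
        return fahrenheit, hottest

    # Hypothetical "after": name the constants once and reuse a single helper.
    FAHRENHEIT_SCALE = 9 / 5
    FAHRENHEIT_OFFSET = 32.0

    def celsius_to_fahrenheit(celsius):
        """Convert one Celsius reading to Fahrenheit."""
        return celsius * FAHRENHEIT_SCALE + FAHRENHEIT_OFFSET

    def summarize(readings_c):
        fahrenheit = [celsius_to_fahrenheit(r) for r in readings_c]
        hottest = celsius_to_fahrenheit(max(readings_c))
        return fahrenheit, hottest

Catching this kind of thing needs no deep domain expertise, which is exactly the level of review an academic-who-codes reviewer can still do well.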
FWIW, many or most scientists are not even trying to be software engineers: they write slow code without reusing already-tested components, and they expect someone else to review pull requests once their PDF is considered impactful. They know enough coding to push the bar for their domain a bit higher each time.
Are there points for at least planning, in writing, for the complete lifecycle and governance of open source software for science, i.e. the ongoing 'thesis defense' of the code: after we publish, what becomes of it?