
Comment by lores

6 days ago

I've worked in academic publishing for a long while, and I can tell you from experience that:

- "quantity of publications" is a problem: it directly leads to bad science, so in aggregate it is a measure of anti-quality

- "quality of the journals published in" is all in the mind; prestigious journals with high impact factors have repeatedly been found not to publish the best research. The rigour of the editing process matters more, but few researchers know that. More importantly, funders heavily incentivise researchers to chase high impact factors, completely muddying the waters of who counts as a good researcher by that metric.

- number of citations would be a better measure, but it is unfortunately directly linked to impact factor, both in practice and in perception.

- awards won, books published - too niche and random to matter much.

- "every academic could tell you the top 5 journals in their field" haha, no, you'd be as surprised as I was when doing that research.

Academic publishers have been considering the measuring problem for decades, and no one has found a solution yet.

There is no good measure of the quality of a paper until many years after publication. It's easy to identify some true positives (high impact, no retraction), but by definition quasi-impossible to identify false negatives (unfairly ignored papers). Most importantly, this emphasis on prestige research is terribly harmful to Science. Science needs researchers who are happy to replicate studies, people who publish disappointing results, and people who study otherwise unglamorous topics; without them, Science fails.

TLDR: measuring how 'good' a researcher is by their prestige is extremely destructive to Science. You can't do that.

I'm not saying it's only prestige, but to a first approximation, a researcher who has an article published in Nature is highly likely to be better than one who has only published in a no-name garbage journal that publishes whatever it is sent. Of course, nothing is certain, but we're talking about probabilities here.

  • And, as I'm saying, prestige, or probable future prestige, isn't a good proxy for a researcher's value or future value, even if it could be fairly guessed, which it can't. Nature is exhibits A, B and C: it's the most prestigious journal, but not the most rigorous in any field, and its very existence damages Science. It overvalues the research it publishes, reduces the impact of better journals and the research they publish, and wastes the time, quality of life, and quality of research of scientists who feel they must do whatever it takes to publish in it, or who are pressured by funders and/or academic institutions to do so.

  • But you are talking certainty when you claim DEI hires mean it's possible the lesser person is hired. If you have no objective system to measure merit, then it's not possible to ever know this.

    • How does anyone know anything? Why even vet candidates at all? Let’s just assign professorships completely randomly then. We’ll have high school dropouts who can’t explain the quadratic formula teach differential equations at Harvard.

      I’m sure they would do just as good of a job. Because nobody could ever possibly objectively tell whether someone with a PhD in math is going to be better at teaching and researching math than a high school dropout, right?