Comment by BrenBarn

3 days ago

Elsevier is certainly evil, but I would say the root issue is the practices of the institutions where these "authors" are employed. This kind of thing is academic misconduct and should result in them losing their jobs.

This goes deeper than the institutions, actually. The KPI for many (non-industrial) researchers is the number of publications and citations. That's what careers and funding depend on.

Goodhart's law states "When a measure becomes a target, it ceases to be a good measure", and that's what we see here. There is a strong incentive to publish more instead of better. Ideas are spread across multiple papers, people push to be listed as authors, citations are fought over, and some become dishonest and resort to citation cartels, "hidden" citations in papers (printed in tiny white-on-white text, so they are indexed by citation crawlers but invisible to reviewers), and so forth.

This also destroys the peer review system upon which many venues depend. Peer review was never meant to catch cheaters. The huge number of low-to-medium quality papers in some fields (ML, CV) overworks reviewers, leading to things like CVPR forcing authors to serve as reviewers or face desk rejection. AI-generated papers and AI reviews of dubious quality add even more strain.

Ultimately the only true fix for this is to remove the incentives. Funding and careers should no longer depend on the sheer number of papers and citations. The issue is that we have not really found anything better yet.

  • As for an alternative, how about using the social fabric of researchers and institutes instead? A few centuries of science ran on it before somebody had the great idea to introduce "objective" metrics which made things worse. Reintroducing that today would probably cause a larger spread in the quality of research, which is good: research is kind of a "hit-driven industry" - higher highs are the most important thing. The best researchers will do the best research, probably better without carrot and stick than with.

    • > As for an alternative, how about using the social fabric of researchers and institutes instead? A few centuries of science ran on it before somebody had the great idea to introduce "objective" metrics which made things worse.

Oh boy, you seem to be missing the forest for the trees. When science was a hobby of the rich, there was no need to measure output. Only when "scientist" became a career and scientists started demanding government funding (which only really crystallized in the 20th century) did we start needing a way to measure output.

You could try doing away with an objective measure of academic output and replace it with the "social fabric of researchers and institutes" (whatever the fuck that means) instead, but then all you'd have is a good ol' boys club funded by taxpayer money.


• What guarantee is there that folks won't abuse this system in the same way they abuse the citation system? The recommendation letter system is often abused for the pettiest of reasons...


    • This will be a hard argument to make.

      The decision makers who are the target audience for these metrics value "objective" data. They value the appearance of being quantitative, but lack the intellectual tools to distinguish between quantitative science and pseudoscience with numbers bolted on.

      That's modern bureaucracy in a nutshell.

• A few centuries of science done by white males. While I agree that the system of "objective metrics" has a lot of problems, just removing it would bring us back to the old days when almost all science was done by a few privileged white men.


  • What you describe is still a problem with the institutions, because it is ultimately the institutions that provide the incentives (in the form of jobs). You're right that they're using bad metrics, but it is the institutions who are making those bad decisions based on the bad metrics.

    There are lots of better things, like people making hiring and firing decisions based on their evaluation of the content of papers they have actually read, instead of just a number. If someone is publishing so many papers that a hiring committee can't even read a meaningful fraction of them, that should be a red flag in itself, rather than a green one.

• It's true that hiring and tenure decisions are under the institution's control. But a lot of funding comes from external sources, and most public funding uses some sort of publication-based metric. There are exceptions, but that's the game. The CVs of your PhD students are often judged by their publication lists and the corresponding citations: research institutes where they might go, other universities, large companies, etc. will all look at this. It's difficult to change this system as an isolated player, and coordinated efforts have so far failed on the "what else" question.


• To dig even deeper into the problem: you have to get a large number of institutions to agree to stop this at once; none will voluntarily risk their (generally) working pipeline and system first. It disrupts a lot of different things and takes them out of the currently established model that everyone still uses to measure success. It reminds me of how most people who say "well, not everyone should go to college!" are obviously omitting "…except for my kids, of course." It borders on an expressive response; it's not something anyone wants to actually take action on.

There's not a whole lot to gain for the individual or even the institution unless they hit an absolute home run on the first try that also shows positive results very quickly. More than likely, the decision will be questioned at every turn.

  • The incentive to disprove bad science ought to be greater.

    • And incorrect assumptions. As I understand it, "I did a study on this and it turns out there's no connection" generally results in the study not being published (if the study was testing for the validity of the connection)... which is sad, because that's still useful information to have.


    • Exactly. There should be much greater incentives to (in)validate prior publications. That is what science is about.

Evil Seer would be a good anagram, if only Elsevier did any of the actual [re]viewing themselves.