Comment by bloppe
3 hours ago
I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
Those journals are really good for getting practice writing and submitting research papers, but they are sometimes already seen as less impactful because of the quality of the accepted papers. At least where I am, I don't think the advent of AI writing is going to change how they are seen.
Welcome to the new world of fake stuff, I guess.
If the penalty for a crime is a fine, then that law exists only for the lower class
In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions claiming smoking doesn't cause lung cancer, or social media companies from spamming submissions claiming their products aren't detrimental to mental health.
That would be tricky. I often submitted to multiple high-impact journals, going down the list until one accepted the paper. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be a payment for the effort of screening the paper, but then I would expect the reviewers to be paid for their time.
I mean your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason) just as long as it is accepted and published somewhere (again, within reason).
Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.
No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige or that fit your paper better/worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.
Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.
All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.
It's standard practice, nothing suspect about their approach - and you won't go lower and lower and lower still because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic has "expired" (= is overtaken by reality/competition).
This is effectively standard across the board.
Pay to publish journals already exist.
This is sorta the opposite of pay to publish. It's pay to be rejected.
I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.
Maybe something like a "hierarchy (or DAG?) of trusted peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to them. When a paper turns out to be "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:
- the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
- trusted/established institutions have an incentive to keep their global reputation score high and either apply a very high level of scrutiny to the review, or delegate to very reputable peers
- "bad actors" are immediately punished and universally recognized as such
- "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality over quantity
- "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work
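To make the propagation idea concrete, here's a minimal sketch of one possible model (all names and the decay factor are assumptions for illustration, not part of the proposal): each paper carries a chain of certifiers, and when the paper later fails review, a penalty hits the closest certifier in full and propagates upstream with attenuation.

```python
DECAY = 0.5  # assumed attenuation of the penalty per hop up the review chain


def penalize_chain(reputation, chain, penalty):
    """Apply `penalty` to each certifier in `chain`, decaying upstream.

    reputation: dict mapping certifier name -> current score
    chain: list of certifiers, closest to the paper first
    """
    p = penalty
    for certifier in chain:
        reputation[certifier] -= p
        p *= DECAY  # upstream certifiers are hit progressively less
    return reputation


# A paper certified by a lab, vouched for by its department, then its
# university, is later found to be "undesirable":
reputation = {"lab_a": 10.0, "dept_b": 10.0, "university_c": 10.0}
penalize_chain(reputation, ["lab_a", "dept_b", "university_c"], penalty=2.0)
# lab_a loses 2.0, dept_b loses 1.0, university_c loses 0.5
```

A real system would need to tune the decay (or replace it with something stake-weighted) so that delegating review still carries meaningful risk for the institution at the top of the chain.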
There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.
Incidentally, I think this may be a rare case where a blockchain makes some sense?
You have some good ideas there, it's all about incentives and about public reputation.
But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule: they submitted the same paper to two conferences, and both copies naturally landed in my reviewer inbox, since the topic is one I'm an expert in.
But all the other employees should not be penalized for the violations of three researchers.
This idea looks very similar to journals! Each journal has a reputation: if it publishes too much crap, the crap doesn't get cited and its impact factor decreases. Journals also have an informal reputation, because the impact factor has its own problems.
Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers, and make the final decision, like the current editors. ...
How would this work for independent researchers?
(no snark)
Pay to review is common in Econ and Finance.
Variation I thought of on pay-to-review:
Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it to the point where it was. If they wanted, they could be listed as a coauthor, and if they didn't want that, at least you'd acknowledge their assistance in the paper.
Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.
Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...
Better yet, make a "polymarket" for papers where people can bet on which papers will make it, and rely on "expertise arbitrage" to punish spam.
That doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce and the effort to review.