One, it doesn't damage your reputation as much as one would think.
But two, and more importantly, no one is checking.
Tree falls in the forest, no one hears, yada yada.
Here's a work from last year which was plagiarized. The rare thing about this work is it was submitted to ICLR, which opened reviews for both rejected and accepted works.
You'll notice you can click on author names and get links to their various scholar pages, notably DBLP, which makes it easy to see how frequently authors publish with other specific authors.
Some of those authors have very high citation counts... in the thousands, with 3 having over 5k each (one with over 18k).
https://openreview.net/forum?id=cIKQp84vqN
<< no one is checking.
I think this is the big part of it. There is no incentive to check, even when the study can be reproduced.
The vast majority of papers are so insignificant that nobody bothers to try to use them, and thereby nobody replicates them.
But the thing is… nobody is doing the replication to falsify it. And if they did, it wouldn't be published, because it's a null result.
Not in most fields, unless misconduct is evident. (And what constitutes "misconduct" is cultural: if you have enough influence in a community, you can exert that influence on exactly where that definitional border lies.) Being wrong is not, and should not be, a career-ending move.
If we are aiming for quality, then being wrong absolutely should be. I would argue that is how it works in real life anyway. What we quibble over is what the appropriate cutoff is.
There's a big gulf between being wrong because you or a collaborator missed an uncontrolled confounding factor and falsifying or altering results. Science accepts that people sometimes make mistakes in their work because a) even careful researchers can be expected to miss something eventually and b) a lot of work is done by people in training, in labs you're not directly in control of (collaborators). Researchers already aim for quality, and if you're consistently shown to be sloppy or incorrect when people try to use your work in their own, your reputation suffers for it.
The final bit is something I think most people miss when they think about replication. A lot of papers don't get replicated directly, but their measurements do, when other researchers try to use that data to perform their own experiments. At least, that holds in the more physical sciences; it gets tougher the more human-centric the research is. You can't fake or be wrong for long when you're writing papers about the properties of compounds and molecules. Someone is going to try to base some new idea on your data and find out you're wrong when their experiment doesn't work (or spend months trying to figure out what's wrong and finally double-check the original data).
Not really, since nobody (for some value of "nobody") ends up actually falsifying it, and if they do, it's years down the line.