
Comment by renewiltord

15 hours ago

A family member tried to do work relying on previous results from a biotech lab. Couldn't do it. Tried to reproduce. Doesn't work. Checked the work carefully. Faked. Switched labs and research subject. A risky career move, but now they have a career. The old lab is in a mental black box, never to be touched again.

Talked about it years ago https://news.ycombinator.com/item?id=26125867

Others said they'd never seen it. So maybe it's rare. But no one will tell you even if they encounter it. Guaranteed career blackball.

I haven't identified an outright fake one but in my experience (mainly in sensor development) most papers are at the very least optimistic or are glossing over some major limitations in the approach. They should be treated as a source of ideas to try instead of counted on.

I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before publication, the barrier can be quite high (and I must admit that since it wasn't my primary focus and my name was not on it, I did not push as hard as I could have).

  • When I was a postdoc, I wrote up a paper based on my advisor's theories. The paper wasn't very good: all the results were bad. Overnight, my advisor rewrote all the results, partly juicing them and partly obscuring the problems, all while glossing over the limitations. She then submitted it to a (very low-prestige) journal.

    I read the submitted version and told her it wasn't OK. She withdrew the paper and I left her lab shortly after. I simply could not stand the tendency to juice up papers, and I didn't want to have my reputation tainted by a paper that was false (I'm OK with my reputation being tainted by a paper that was just not very good).

    What really bothers me is when authors intentionally leave out details of their method. There was a hot paper (this was ~20 years ago) about a computational biology technique ("evolutionary trace"), and when we did the journal club, we tried to reproduce their results, which started with writing an implementation from their description. About halfway through, we realized that the paper left out several key steps. We were able to infer roughly what they did, but as far as we could tell, it was an intentional omission made to keep the competition from catching up quickly.

For original research, a researcher is supposed to replicate studies that form the building blocks of their research. For example, if a drug is reported to increase expression of some mRNA in a cell, and your research derives from that, you will start by replicating that step, but it will just be a note in your introduction and not published as a finding on its own.

When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's technique. If they can't get it after many tries, they just move on, and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.

One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.

I can't speak to whether people get blackballed. There are a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation, and you probably don't want to get a reputation for being an accuser, FWIW.

I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.

The replication crisis is largely particular to psychology, but I wonder about the scope of the don't rock the boat issue.

  • It's not particular to psychology, the modern discussion of it just happened to start there. It affects all fields and is more like a validity crisis than a replication crisis.

    https://blog.plan99.net/replication-studies-cant-fix-science...

    • He's not saying it's psychology the field. He's saying the replication crisis may persist because junior scientists (the ones most often doing replication) are afraid of retribution: a psychological reason why fraud persists.

      I think perhaps the blackball is guaranteed. No one likes a snitch. "We're all just here to do work and get paid. He's just doing what they make us do." Being a scientist is just a job. Most people are just "I put thing in tube. Make money by telling government about tube thing. No need to be religious about Science."
