Comment by rcxdude
18 hours ago
I haven't identified an outright fake one, but in my experience (mainly in sensor development) most papers are at the very least optimistic, or gloss over some major limitation of the approach. They should be treated as a source of ideas to try rather than counted on.
I've also seen the resistance that results from trying to investigate, or even correct, an issue in a key result of a paper. Even before publication the barrier can be quite high (and I must admit that since it wasn't my primary focus and my name wasn't on the paper, I didn't push as hard as I could have).
When I was a postdoc, I wrote up a paper based on theories from my advisor. The paper wasn't very good: all the results were bad. Overnight, my advisor rewrote the results section, partly juicing the results and partly obscuring the problems, all while glossing over the limitations. She then submitted it to a (very low prestige) journal.
I read the submitted version and told her it wasn't OK. She withdrew the paper, and I left her lab shortly after. I simply couldn't stand the tendency to juice up papers, and I didn't want my reputation tainted by a paper that was false (I'm OK with my reputation being tainted by a paper that was merely not very good).
What really bothers me is when authors intentionally leave out details of their method. There was a hot paper (this was ~20 years ago) about a computational biology technique ("evolutionary trace"), and when we did the journal club, we tried to reproduce their results, which started with writing an implementation from their description. About halfway through, we realized that the paper left out several key steps. We were able to infer roughly what they did, but as far as we could tell, it was an intentional omission meant to keep the competition from catching up quickly.