
Comment by mike_hearn

3 days ago

> If you believe in climate change and encounter a situation where a group of scientists were proven to have falsified data in a paper on climate change, it really isn't enough information to change your belief in climate change, because the evidence of climate change is much larger than any single paper.

Although your wider point is sound, that specific example should undermine your belief quite significantly if you're a rational person.

1. It's a group of scientists and their work was reviewed, so they are probably all dishonest.

2. They did it because they expected it to work.

3. If they expected it to work it's likely that they did it before and got away with it, or saw others getting away with it, or both.

4. If there's a culture of people falsifying data and getting away with it, that means there's very likely to be more than one paper with falsified data. Possibly many such papers. After all, the authors have probably authored papers previously and those are all now in doubt too, even if fraud can't be trivially proven in every case.

5. Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

6. Therefore, it's likely the evidence base is not as robust as previously believed.

7. Therefore, your belief in the likelihood of their claims being true should be lowered.

In reality how much you should update your belief will depend on things like how the fraud was discovered, whether there were any penalties, and whether the scientists showed contrition. If the fraud was discovered by people outside of the field, nothing happened to the miscreants and the scientists didn't care that they got caught, the amount you should update your belief should be much larger than if they were swiftly detected by robust systems, punished severely and showed genuine regret afterwards.
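
To make the size of the update concrete, here is a toy Bayes-rule sketch. The hypothesis, the evidence event, and every number in it are invented purely for illustration; the only point is that the same observation warrants a small update if it's nearly as likely under a healthy field as under a compromised one, and a large update if it's far more likely under the latter.

    # Toy Bayes-rule illustration -- hypothesis and all numbers are invented.
    # H = "the field's evidence base is robust"
    # E = "one fraudulent paper was discovered (in this particular way)"

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """P(H | E) via Bayes' rule."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    prior = 0.95  # start out fairly confident the evidence base is robust

    # Fraud caught quickly by the field's own checks: roughly as likely under a
    # robust field as under a weak one, so the belief barely moves.
    print(posterior(prior, p_e_given_h=0.05, p_e_given_not_h=0.10))  # ~0.90

    # Fraud found by outsiders, no penalties, no contrition: much more likely
    # in a field with weak self-correction, so the belief drops a lot.
    print(posterior(prior, p_e_given_h=0.05, p_e_given_not_h=0.50))  # ~0.66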

You're making a chain of assumptions and deductions that are not necessarily true given the initial statement of the scenario. Just because you think those things logically follow doesn't mean that they do.

You also make throwaway assertions like "That's why so many claims are only found to not replicate years or decades after they were published." What is "so many claims?" The majority? 10%? 0.5%?

I totally agree with you that the nuances of the situation are very important to consider, and the things you mention are possibilities, but you are too eager to reject the original point when you say "that specific example should undermine your belief quite significantly if you're a rational person." You made a lot of assumptions in those statements, and I think a rational person with humility would not make them so quickly.

  • > What is "so many claims?" The majority? 10%? 0.5%?

    Wikipedia has a good intro to the topic. Some quick stats: "only 36% of the replications [in psychology] yielded significant findings", "Overall, 50% of the 28 findings failed to replicate despite massive sample sizes", "only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies", "A survey of cancer researchers found that half of them had been unable to reproduce a published result".

    The example is hypothetical and each step is probabilistic, so we can't say anything is necessarily true. But which parts of the reasoning do you think are wrong?

    • Oh, so you're talking about replication in a very specific field, one completely different from the climate-change example you're using elsewhere.

      Your first step is "It's a group of scientists and their work was reviewed, so they are probably all dishonest."

      Even that is an unreasonable step. It is very possible for a single person to deceive their peers.

      Deductive reasoning like this works so much better for Sherlock Holmes, in fiction. In reality, deductive reasoning tends to reinforce your biases and ignore the vast possibility space of alternatives.


> It's a group of scientists and their work was reviewed, so they are probably all dishonest.

Peer review is a very basic check, more or less asking someone else in the field "Does this paper, as presented, make any sense?". It's often overvalued by people outside the field, but it's table stakes to the scientific conversation, not a seal of approval by the field as a whole.

> Scientists often take data found in papers at face value. That's why so many claims are only found to not replicate years or decades after they were published. Scientists also build on each other's data. Therefore, there are likely to not only be undetected fraudulent papers, but also many papers that aren't directly fraudulent but build on them without the problem being detected.

I think it's rare that scientists take things completely at face value. Even without fraud, it's easy for people to make mistakes, and it's rare that everyone in a field actually agrees on all the details, so if someone is relying on a paper for something, they will generally examine it quite closely, talk to the original authors, and to whatever extent practical attempt to verify it themselves. The publishing process doesn't tend to reward this behavior, though, unfortunately. (As a result, an external observer generally doesn't see it happening: if someone concludes through this process that a result is BS, they're much more likely to quietly drop it than to publish a rebuttal, unless it's something particularly important.)

  • Sorry, what I meant was that the authors on a paper are supposed to be reviewing each other's contributions. They should all have access to the same data and understand what's going on. In practice, that doesn't always happen of course. But it should. Peer review where a journal just asks someone to read the final result is indeed a much weaker form of check.

    There are way too many cases of bogus papers being cited hundreds or thousands of times for me to believe scientists check the papers they are building on. It probably depends a lot on the field, though; this stuff always does.