Comment by jl6
1 day ago
It would be interesting for reproducibility efforts to assess “consequentiality” of failed replications, meaning: how much does it matter that a particular study wasn’t reproducible? Was it a niche study that nobody cited anyway, or was it a pivotal result that many other publications depended on, or anything in between those two extremes?
I would like to think that the truly important papers receive some sort of additional validation before people start to build lives and livelihoods on them, but I’ve also seen some pretty awful citation chains where an initial weak result gets overegged by downstream papers which drop mention of its limitations.
The issue is that null results from these kinds of studies don't actually mean much.
Here sample sizes were tiny, which introduced a vast amount of random noise. The fact that so many studies were replicated suggests the vast majority of the underlying studies were valid, not just the ones they could reproduce.
It is an ongoing crisis how much Alzheimer’s research was built on faked amyloid beta data. Potentially billions of dollars from public and private research which might have been spent elsewhere had a competing theory not been overshadowed by the initial fictitious results.
The amyloid hypothesis is still the top candidate for at least a form of Alzheimer's. But yes, the problems with one of the early studies have caused significant issues.
I say "a form of Alzheimer's" because it is likely we are labelling a few different diseases as Alzheimer's.
I went searching for more info on this and found https://www.science.org/content/blog-post/faked-beta-amyloid... which was an interesting read.
Those studies were all run and paid for, many/most with public funding. Of course it matters.
Reproducing a paper is Hard, and also Expensive. I'd expect that they wouldn't pick papers to try to reproduce at random.