
Comment by mike_hearn

19 hours ago

Yeah I remember reading that article at the time. Agree they're in different categories. I think Gelman's summary wasn't really supportable. It's far too harsh: he's demanding an apology because the data set used for measuring test accuracy wasn't large enough to rule out the possibility that there were no true COVID cases in the entire sample, and because he doesn't personally think some explanations were clear enough. But this argument leans heavily on a worst-case assumption about the FP rate of the test, one that is ruled out by prior evidence (we know there were indeed people infected with SARS-CoV-2 in that region at that time).

There's also the angle of selective outrage. The case for lockdowns was being promoted based on, amongst other things, the idea that PCR tests have a false positive rate of exactly zero, always, under all conditions. This belief is nonsense, although I've encountered wet-lab researchers who hold it - apparently this is how they are trained. In one case I argued with a researcher for a bit and discovered he didn't know what Ct (cycle threshold) value COVID labs were using; after I told him, he went white and admitted that it was far too high, and that he hadn't known they were doing that.

Gelman's demands for an apology seem very different in this light. Ioannidis et al not only took test FP rates into account in their calculations but directly measured them to cross-check the manufacturer's claims. Nearly every other COVID paper I read simply assumed FPs don't exist at all, or used bizarre circular reasoning like "we know this test has an FP rate of zero because it detects every case perfectly, when we define a case as a positive test result". I wrote about it at the time because the problem was so prevalent:

https://medium.com/mike-hearn/pseudo-epidemics-part-ii-61cb0...

I think Gelman realized after the fact that his assessment was over the top, because the article has since been amended with numerous "P.S." paragraphs that walk back some of his own rhetoric. He's not a bad writer, but in this case I think the overwhelming peer pressure inside academia to conform to the public-health narratives got to even him. If the cost of pointing out problems in your field is that every paper you write must from then on be considered perfect by every possible critic, that's just another way to stop people from flagging problems.

Ioannidis corrected for false positives with a point estimate rather than a confidence interval. That's better than not correcting at all, but it's not defensible when the FP rate is the biggest source of statistical uncertainty in the whole calculation. True zero prevalence can obviously be excluded by other information (people had already tested positive by PCR), but if we want p < 5% in any meaningful sense then his serosurvey provided no new information. I think it was still an interesting and publishable result, but the correct interpretation is something like Figure 1 from Gelman's post:

https://news.ycombinator.com/item?id=36714034
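To make this concrete, here is a minimal Monte Carlo sketch of why propagating the specificity uncertainty matters, rather than plugging in a point estimate. The 399/401 specificity validation count is from the thread; the 50/3,330 raw positive count and the 80% sensitivity are my own rough stand-ins for the preprint's figures, not exact values. The correction formula is the standard Rogan-Gladen adjustment.

```python
import random

def prevalence_draws(pos, n, spec_tn, spec_n, sens=0.80, draws=100_000, seed=0):
    """Monte Carlo draws of test-corrected prevalence.

    pos / n         : raw positives out of tests in the survey
    spec_tn / spec_n: true negatives out of known-negative validation samples
    sens            : assumed sensitivity (held fixed here for simplicity)
    """
    rng = random.Random(seed)
    out = []
    for _ in range(draws):
        # Beta posteriors under uniform priors for the raw positive
        # rate and the specificity
        raw = rng.betavariate(pos + 1, n - pos + 1)
        spec = rng.betavariate(spec_tn + 1, spec_n - spec_tn + 1)
        # standard Rogan-Gladen correction for test error
        out.append((raw - (1 - spec)) / (sens + spec - 1))
    return out

# Illustrative figures: ~50/3,330 raw positives (my approximation of the
# preprint) and the 399/401 specificity validation discussed in the thread.
draws = prevalence_draws(50, 3330, 399, 401)
frac_zero = sum(d <= 0 for d in draws) / len(draws)
print(f"fraction of draws consistent with zero prevalence: {frac_zero:.1%}")
```

With only 401 validation samples, a non-trivial fraction of the posterior draws are consistent with zero true prevalence, which is the shape of the argument above: a point-estimate correction hides exactly that uncertainty.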

These test accuracies mattered a lot when we were trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about this after all the overrun hospitals, morgues, etc.

  • By "walked back", what I meant is that his conclusion starts by demanding an apology, saying reading the paper was a waste of time, that Ioannidis "screwed up" and didn't "look too carefully", that Stanford has "paid a price" for being associated with him, etc.

    But then in the P.P.P.S. sections he says things like "I'm not saying that the claims in the above-linked paper are wrong" (a denial he then has to repeat twice, because that's exactly what it sounds like he's saying), and "When I wrote that the authors of the article owe us all an apology, I didn't mean they owed us an apology for doing the study" - but given that he wrote extensively about how he would not have published the study, I think he did mean that.

    Also bear in mind there was a followup in which Ioannidis's team went the extra mile to satisfy critics like Gelman:

    They added more tests of known samples. Before, their reported specificity was 399/401; now it’s 3308/3324. If you’re willing to treat these as independent samples with a common probability, then this is good evidence that the specificity is more than 99.2%. I can do the full Bayesian analysis to be sure, but, roughly, under the assumption of independent sampling, we can now say with confidence that the true infection rate was more than 0.5%.

    After taking into account the revised paper, which raised the standard from high to very high, there's not much of Gelman's critique left, tbh. I would respect this kind of critique more if he had mentioned the garbage-tier quality of the rest of the literature. Ioannidis's standards were still much higher than everyone else's at the time.
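For anyone who wants to check the "more than 99.2%" claim quoted above, an exact one-sided Clopper-Pearson lower bound on specificity can be computed from the validation counts with nothing but the standard library. This is my own sketch of the calculation, not the quoted author's analysis; the 399/401 and 3308/3324 counts are the ones from the thread.

```python
from math import comb

def spec_lower_bound(tn, n, alpha=0.025, tol=1e-6):
    """One-sided Clopper-Pearson lower confidence bound for specificity,
    given tn true negatives out of n known-negative samples. Finds the p
    with P(X >= tn | n, p) == alpha by bisection (tail is increasing in p)."""
    def tail(p):  # P(X >= tn) for X ~ Binomial(n, p)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(tn, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tail(mid) < alpha:
            lo = mid  # true bound lies above mid
        else:
            hi = mid
    return lo

# Original validation: 399/401; revised followup: 3308/3324
print(round(spec_lower_bound(399, 401), 4))    # small sample -> loose bound
print(round(spec_lower_bound(3308, 3324), 4))  # roughly 0.992, as quoted
```

The jump in the lower bound from the 401-sample validation to the 3,324-sample one is what turns "specificity could plausibly be below the raw positive rate" into "the true infection rate was more than 0.5%".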