Comment by syntacticsalt
6 months ago
Reporting effect size mitigates this problem. If the observed effect size is too small, its statistical significance isn't viewed as meaningful.
Sure (and of course). But did you see the effect size histogram in the OP?
Are you referring to the first figure, from Smith et al. (2007)? If so, I couldn't evaluate whether gwern's claim makes sense without reading that paper to get an idea of, e.g., the sample size and how they control for false positives. I don't think it's self-evident from that figure alone.
One rule of thumb for interpreting (presumably Pearson) correlation coefficients is given in [0] and states that correlations with magnitude 0.3 or less are negligible, in which case most of the bins in that histogram correspond to cases that aren't considered meaningful.
[0]: https://pmc.ncbi.nlm.nih.gov/articles/PMC3576830/table/T1/
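For concreteness, here is roughly how that rule of thumb bins correlation magnitudes. The thresholds are as I remember them from the table in [0]; the exact boundary handling there may differ, so treat this as a sketch rather than a faithful reproduction of the source.

```python
def interpret_correlation(r):
    """Map a Pearson correlation to a verbal label, per the rule of thumb in [0]."""
    magnitude = abs(r)
    # Boundary cases (e.g., exactly 0.3) are handled approximately here.
    if magnitude < 0.3:
        return "negligible"
    elif magnitude < 0.5:
        return "low"
    elif magnitude < 0.7:
        return "moderate"
    elif magnitude < 0.9:
        return "high"
    else:
        return "very high"

print(interpret_correlation(0.25))   # "negligible"
print(interpret_correlation(-0.85))  # "high"
```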
I’m not arguing that there’s something fundamentally wrong with mathematics or the scientific method. I’m arguing that the social norms around how we do science in practice have some serious flaws. Gwern points out one of them, one that IMHO is quite interesting.
EDIT: I also get the feeling that you think it’s okay to test the wrong hypothesis (c > 0) as long as you also look at the effect size. I don’t think it is. If "negligible" means |c| ≤ 0.3, then you need to test the c > 0.3 hypothesis to get a mathematically sound claim of a non-negligible effect. How many papers do that?
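To make that concrete, here is a sketch of the difference between testing c > 0 and c > 0.3 for a Pearson correlation, using the standard Fisher z-transform approximation. The r and n values are made up purely for illustration.

```python
import numpy as np
from scipy import stats

def test_correlation_above_threshold(r, n, rho0=0.3):
    """One-sided test of H0: rho <= rho0 vs H1: rho > rho0 via Fisher's z-transform."""
    # Under H0, (atanh(r) - atanh(rho0)) * sqrt(n - 3) is approximately standard normal.
    z_stat = (np.arctanh(r) - np.arctanh(rho0)) * np.sqrt(n - 3)
    p_value = stats.norm.sf(z_stat)  # P(Z > z_stat)
    return z_stat, p_value

# Illustrative numbers: r = 0.25 with n = 10,000 is wildly "significant"
# against the usual null c = 0, but gives no evidence that c exceeds 0.3.
print(test_correlation_above_threshold(0.25, 10_000, rho0=0.0))  # tiny p-value
print(test_correlation_above_threshold(0.25, 10_000, rho0=0.3))  # large p-value
```

The point is that the second test is the one that actually corresponds to the claim "this correlation is non-negligible", and it's the one papers rarely report.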