
Comment by energy123

1 day ago

It's not; that would be quite the misunderstanding of statistical power.

N being big means that small real effects can plausibly be detected as statistically significant.
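A quick simulation illustrates this (a minimal sketch, assuming i.i.d. normal data with sigma = 1, a small true effect of mu = 0.05, and a one-sample t-test via scipy; none of these specifics come from the thread):

```python
# Sketch: with a small but real effect, power climbs toward 1 as N grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, mu, sigma, n_sims = 0.05, 0.05, 1.0, 1000  # assumed values for illustration

for n in (100, 1_000, 10_000):
    # Count how often the test rejects at level alpha across simulated datasets.
    rejections = sum(
        stats.ttest_1samp(rng.normal(mu, sigma, size=n), popmean=0.0).pvalue < alpha
        for _ in range(n_sims)
    )
    print(f"N={n:>6}: empirical power ~ {rejections / n_sims:.2f}")
```

With these numbers the rejection rate rises from under 10% at N = 100 to essentially 100% at N = 10,000, even though the effect is tiny.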

It doesn't mean that a larger proportion of measurements is falsely identified as statistically significant. That will still occur at a 5% rate (or whatever your alpha value is), unless your null is misspecified.
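The same simulation with the null exactly true makes the point (same assumed setup as above, but mu = 0):

```python
# Sketch: when the null is exactly true, the false-positive rate stays near
# alpha regardless of how large N gets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims = 0.05, 1000  # assumed values for illustration

for n in (100, 1_000, 10_000):
    false_positives = sum(
        stats.ttest_1samp(rng.normal(0.0, 1.0, size=n), popmean=0.0).pvalue < alpha
        for _ in range(n_sims)
    )
    print(f"N={n:>6}: false-positive rate ~ {false_positives / n_sims:.3f}")
```

Each line prints roughly 0.05, independent of N; growing the sample buys power, not a growing false-positive rate.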

It's standard to set the null hypothesis to be a measure-zero set (e.g. mu = 0 or mu1 = mu2). So the probability that the null hypothesis is exactly true is 0, and the only remaining question is whether your measurement is good enough to detect that.
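In that framing the relevant fact is consistency of the test, sketched here for a two-sided z-test with known sigma (an assumed simplification; the t-test behaves the same asymptotically):

```latex
% Power of the two-sided z-test at level \alpha when the true mean is \mu \neq 0.
\[
  \mathrm{power}(n)
  = \Pr\!\left( |\bar{X}_n| > z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \right)
  \approx \Phi\!\left( \frac{\sqrt{n}\,|\mu|}{\sigma} - z_{\alpha/2} \right)
  \;\to\; 1
  \quad \text{as } n \to \infty .
\]
% So if the true \mu is anything other than the point null, a large enough
% sample will eventually reject it.
```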

But even though you know a priori that the quantity you're measuring can't be exactly 0.000 (to infinitely many decimal places), you don't know a priori whether your measurement is any good or whether you're measuring the right thing.

  • The probability is only zero a.s., not zero outright; that's a very big difference. And hypothesis tests aren't estimating the probability of the null being true, they're estimating the probability of rejecting the null if the null were true (see the sketch below).
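To make that last distinction concrete (illustrative numbers only, not from the thread): the test fixes Pr(reject | H0) at alpha, but Pr(H0 | reject) also depends on the prior and the power.

```latex
% Assumed illustrative numbers: prior \Pr(H_0) = 0.5, power = 0.5, \alpha = 0.05.
\[
  \Pr(H_0 \mid \text{reject})
  = \frac{\alpha \Pr(H_0)}{\alpha \Pr(H_0) + \mathrm{power} \cdot \Pr(H_1)}
  = \frac{0.05 \times 0.5}{0.05 \times 0.5 + 0.5 \times 0.5}
  \approx 0.09 ,
\]
% which is a different quantity from \alpha = \Pr(\text{reject} \mid H_0).
```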