Comment by BigTTYGothGF

4 days ago

Gallant always uses TRNGs. Goofus always uses a high-quality PRNG (CSPRNG if you like) that's seeded with a TRNG. Everything else they do is identical. Under what circumstances would Goofus's conclusions be meaningfully different from Gallant's?

Suppose I'm doing something where I need N(0,1) random variates. I sample from U(0,1), being sure to use a TRNG, do my transformations, and everything's good, right? But my sample isn't actually U(0,1), because I can only get float64s (or float32s), and my transformed sample isn't N(0,1) either, since there's some finite value x above which P(z > x) = 0. The theory behind what I'm trying to do assumes N(0,1), so all my p-values are invalid.
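To make the ceiling concrete, here's a minimal sketch (my own illustration, not from the comment) using Python's `statistics.NormalDist`. It assumes the usual setup where the uniform draw is a float64 strictly below 1, spaced at best by 2**-53, and applies the inverse-CDF transform:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, N(0, 1)

# The largest float64 uniform draw strictly below 1.0 is 1 - 2**-53,
# so the inverse-CDF transform can never emit a z beyond this point.
u_max = 1.0 - 2.0 ** -53
z_max = nd.inv_cdf(u_max)

print(z_max)  # roughly 8.2: events with |z| beyond this have probability
              # ~1e-16 under the true N(0,1), but exactly 0 for our samples
```

So the "x above which P(z > x) = 0" really exists and sits around 8 sigma for a float64 pipeline; a true N(0,1) puts small but nonzero mass out there.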

Nobody cares about that, because we know our methods are robust to this kind of discretization. Similarly, I think nobody (or at least most people) should care (too much) about having "only" 256 bits of entropy in their PRNG, because our methods appear to be robust to that too.