Comment by nritchie
3 months ago
A handful of the comments are skeptical of the utility of this method. I can tell you as a physical scientist, it is common to make the same measurement with a number of measuring devices of differing precision. (e.g. developing a consensus standard using a round-robin.) The technique Cook suggests can be a reasonable way to combine the results to produce the optimal measured value.
I'm not a physical scientist, but I spend a lot of time assessing the performance of numerical algorithms, which is maybe not totally dissimilar to measuring a physical process with a device. I've gotten good results applying Simple and Stupid statistical methods. I haven't tried the method described in this article, but I'm definitely on the lookout for an application of it now.
I wonder if this minimum variance approach of averaging the measurements agrees with the estimate of the expected value we'd get from a Bayesian approach, at least in a simple scenario: say, a uniform prior over the quantity we're measuring, with two measuring devices whose unbiased errors are described by normal distributions.
At least in the mathematically simpler scenario of a Gaussian prior and Gaussian observations, the posterior mean is computed by weighting by the inverses of the variances (aka precisions), just like this.
https://en.wikipedia.org/wiki/Conjugate_prior
To add, for anyone who's followed the link - that's entries 1 and 2, "Normal with known variance σ²" and "Normal with known precision τ", under "When likelihood function is a continuous distribution".
Also, note that the "precision" τ is defined as 1/σ².
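For anyone who wants to check the agreement numerically, here's a small Python sketch (the readings and variances are made up, and the flat prior is approximated by a Normal prior with a very large variance):

    # Two unbiased readings of the same quantity, with known error variances.
    # (Numbers are made up for illustration.)
    x1, var1 = 10.2, 0.5**2   # more precise device
    x2, var2 = 9.6, 1.5**2    # less precise device

    # Minimum-variance / inverse-variance weighted average.
    w1, w2 = 1/var1, 1/var2
    ivw_mean = (w1*x1 + w2*x2) / (w1 + w2)

    # Conjugate Normal update: precisions add, and the posterior mean is
    # precision-weighted. A huge prior variance stands in for a flat prior.
    prior_mean, prior_var = 0.0, 1e12
    post_prec = 1/prior_var + 1/var1 + 1/var2
    post_mean = (prior_mean/prior_var + x1/var1 + x2/var2) / post_prec

    print(ivw_mean, post_mean)  # the two agree to many decimal places

With a genuinely flat (improper) prior the two answers coincide exactly; the tiny 1/prior_var term is the only difference here.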
This seems to be incorrect. The correct way to combine measurements with various degrees of precision is to use the inverse variance weighting law.
Unless I’m missing something, that’s exactly what is proposed:
t_i Var[X_i] = t_j Var[X_j]
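With the constraint that the weights sum to one, that condition forces t_i proportional to 1/Var[X_i], i.e. t_i = (1/Var[X_i]) / (sum over j of 1/Var[X_j]), which is exactly the inverse variance weighting law.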
Like a Kalman filter?
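Pretty much - in the scalar case with a constant state, the Kalman measurement update is just precision-weighted averaging. A quick sketch to illustrate (hypothetical numbers, treating the first reading as the prior estimate and the second as the new observation):

    # Scalar Kalman-style measurement update vs. inverse-variance weighting.
    x, P = 10.2, 0.5**2   # prior estimate and its variance
    z, R = 9.6, 1.5**2    # new measurement and its variance

    K = P / (P + R)            # Kalman gain
    x_upd = x + K * (z - x)    # updated estimate
    P_upd = (1 - K) * P        # updated variance

    # Same result via inverse-variance weighting:
    ivw_mean = (x/P + z/R) / (1/P + 1/R)
    ivw_var  = 1 / (1/P + 1/R)
    print(x_upd, ivw_mean)  # identical
    print(P_upd, ivw_var)   # identical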