
Comment by BurningFrog

1 day ago

To spell out the point:

If the chronometer error rate is 1%, averaging two will give you a 2% error rate.

You will have an error rate of less than or equal to 1%. You can't average two measurements and get a result with a higher error rate than the worst of the original measurements had.

You wouldn't be well served by averaging a measurement with a 1% error and a measurement with a 90% error, but you will still have less than or equal to 90% error in the result.
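A quick simulation of that worst case (my own sketch, not from the thread: I'm reading the percentages as standard deviations of Gaussian noise around a true value of 100):

```python
import random

random.seed(0)
true_t = 100.0
n = 100_000

def rms_error(readings):
    """Root-mean-square error of a list of readings against true_t."""
    return (sum((r - true_t) ** 2 for r in readings) / len(readings)) ** 0.5

good = [random.gauss(true_t, 1.0) for _ in range(n)]   # the 1% clock
bad = [random.gauss(true_t, 90.0) for _ in range(n)]   # the 90% clock
avg = [(g + b) / 2 for g, b in zip(good, bad)]

print(rms_error(good), rms_error(bad), rms_error(avg))
# The average lands near 45: much worse than the good clock alone,
# but still well under the bad clock's 90.
```

The exact figure follows from independence: the average's standard deviation is sqrt(1^2 + 90^2)/2 ≈ 45.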

If the errors are correlated, you could end up with a 1% error still. The degenerate case of this is averaging a measurement with itself. This is something clocks are especially prone to; if you do not inertially isolate them, they will sync up [1]. But that still doesn't result in a greater error.
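To sketch the correlated case (assuming jointly Gaussian errors with correlation rho): the average of two measurements with equal variance sigma^2 has variance sigma^2 * (1 + rho) / 2, which climbs back to sigma^2 only in the degenerate rho = 1 case of averaging a measurement with itself.

```python
import math
import random

random.seed(1)
n = 200_000

def var_of_average(rho):
    """Sample variance of (e1 + e2) / 2 where e1, e2 are standard-normal
    errors with correlation rho."""
    total = 0.0
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        e1 = z1
        e2 = rho * z1 + math.sqrt(1 - rho * rho) * z2
        total += ((e1 + e2) / 2) ** 2
    return total / n

print(var_of_average(0.0))  # ≈ 0.5: independent errors, variance halved
print(var_of_average(1.0))  # ≈ 1.0: a clock averaged with itself, no gain
```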

You could introduce more error if you ran into precision issues. E.g., if you used `(A+B)/2` instead of `A/2 + B/2`: because floating point has less absolute precision at larger magnitudes, the former can introduce more rounding error. But that's not a function of the clocks, that's a numerics bug. (And it's normally encountered when averaging many measurements rather than two.)
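A minimal illustration of that many-measurements case (my example, standard library only): a naive running sum rounds against the growing total, while `math.fsum` compensates for every rounding step.

```python
import math

xs = [0.1] * 10  # 0.1 has no exact binary representation

naive = sum(xs) / len(xs)        # running sum accumulates rounding error
exact = math.fsum(xs) / len(xs)  # fsum performs compensated summation

print(naive)  # slightly below 0.1
print(exact)  # 0.1
```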

There are different ways to define error, but this holds whether you measure it as MSE or as variance.

[1] https://www.youtube.com/watch?v=T58lGKREubo

  • My reasoning is that a clock is either right or wrong.

    The average of a right and a wrong clock is wrong. Half as wrong as the wrong one, but still wrong.

    Whether this is a good mental model for dealing with clock malfunctions depends on the failure modes of the clocks.

    • This is not how continuous probabilities work. The probability that a clock is exactly right is zero; hence there is always some error in a measurement of time. Adding additional clocks will always cause the error to be less than or equal to the maximum error.

No, not at all.

The result in the original article only applies when there are discrete choices. For stuff you can actually average, more is always better.
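A sketch of that "more is always better" scaling (assuming i.i.d. zero-mean errors, which is my assumption, not the thread's): the variance of the mean of n measurements is sigma^2 / n.

```python
import random

random.seed(2)
trials = 50_000

def var_of_mean(n):
    """Sample variance of the mean of n iid standard-normal errors."""
    means = [sum(random.gauss(0, 1) for _ in range(n)) / n
             for _ in range(trials)]
    return sum(m * m for m in means) / trials

print(var_of_mean(1), var_of_mean(2), var_of_mean(4))
# ≈ 1.0, 0.5, 0.25 — each doubling of n halves the variance of the average
```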

Oh, and even with discrete choices (like heads vs tails), if you had to give a distribution and not just the single most likely outcome, and we judged you by cross-entropy, then going from one to two is an improvement. And going from an odd n to the next even n is an improvement in general in this setting.
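One way to make that concrete (my framing, with assumed numbers — the comment names neither): estimate a coin's bias from n flips with a Laplace-smoothed estimate q_k = (k+1)/(n+2), assume a true bias of p = 0.7, and score the reported distribution by its expected cross-entropy against the truth.

```python
from math import comb, log

p = 0.7  # assumed true bias of the coin

def expected_cross_entropy(n):
    """Expected cross-entropy of the Laplace-smoothed estimate from n flips,
    averaged over the binomially distributed number of observed heads."""
    total = 0.0
    for k in range(n + 1):  # k = number of heads observed
        prob_k = comb(n, k) * p**k * (1 - p) ** (n - k)
        q = (k + 1) / (n + 2)  # Laplace rule of succession
        total += prob_k * -(p * log(q) + (1 - p) * log(1 - q))
    return total

print(expected_cross_entropy(1))  # ≈ 0.697
print(expected_cross_entropy(2))  # ≈ 0.689: two flips score better than one
```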