
Comment by maxbond

1 day ago

You will have an error rate of less than or equal to 1%. You can't average two measurements and get a result with a higher error rate than the worst of the original measurements had.

You wouldn't be well served by averaging a measurement with a 1% error and a measurement with a 90% error, but you will still have less than or equal to 90% error in the result.

If the errors are correlated, you could end up with a 1% error still. The degenerate case of this is averaging a measurement with itself. This is something clocks are especially prone to; if you do not inertially isolate them, they will sync up [1]. But that still doesn't result in a greater error.
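
A quick way to see both halves of that (illustrative only; the 1% spread and Gaussian errors are assumptions, not anything about the actual clocks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two clocks whose readings are each off by a zero-mean error with 1% (0.01) spread.
err_a = rng.normal(0, 0.01, n)

# Independent errors: averaging shrinks the error by roughly 1/sqrt(2).
err_b = rng.normal(0, 0.01, n)
print(((err_a + err_b) / 2).std())  # ~0.007

# Perfectly correlated errors (the "averaging a clock with itself" case):
# averaging no longer helps, but it doesn't hurt either.
print(((err_a + err_a) / 2).std())  # ~0.01
```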

You could introduce more error if you ran into precision issues. E.g., you used `(A+B)/2` instead of `A/2 + B/2`; because floating point has less absolute precision (and a hard ceiling) at larger magnitudes, the former can introduce more rounding error or even overflow. But that's not a function of the clocks, that's a numerics bug. (And it's normally encountered when averaging many measurements rather than two.)
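
A contrived sketch of the difference (the values are deliberately extreme to make the effect visible; real timestamps won't sit near the top of the double range):

```python
import math
import sys

# Extreme case: A + B overflows to infinity before the halving,
# while A/2 + B/2 keeps every intermediate value finite.
A = B = sys.float_info.max
print((A + B) / 2)    # inf
print(A / 2 + B / 2)  # ~1.8e308, the correct average

# The many-measurements version: a naive running sum grows large, so each
# new addition is rounded more coarsely than the measurements themselves.
xs = [0.1] * 1_000_000
naive = 0.0
for x in xs:
    naive += x
print(naive / len(xs))          # drifts slightly away from 0.1
print(math.fsum(xs) / len(xs))  # compensated sum stays at 0.1
```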

There are different ways to define error, but this is true whether you consider it to be MSE or variance.
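
A sketch of the MSE case (variance works the same way), with θ the true time, X and Y the two readings, and Cauchy–Schwarz bounding the cross term:

\[
\operatorname{MSE}\!\left(\tfrac{X+Y}{2}\right)
  = \tfrac14\,\mathbb{E}\!\left[\big((X-\theta)+(Y-\theta)\big)^{2}\right]
  \le \tfrac14\left(\sqrt{\operatorname{MSE}(X)}+\sqrt{\operatorname{MSE}(Y)}\right)^{2}
  \le \max\!\big(\operatorname{MSE}(X),\,\operatorname{MSE}(Y)\big),
\]

with equality throughout only when the two errors are equal in size and perfectly correlated (the "averaging a measurement with itself" case above).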

[1] https://www.youtube.com/watch?v=T58lGKREubo

My reasoning is that a clock is either right or wrong.

The average of a right and a wrong clock is wrong. Half as wrong as the wrong one, but still wrong.

Whether this is a good mental model for dealing with clock malfunctions depends on the failure modes of the clocks.