Comment by donnerdave
7 hours ago
No one commenting on the inaccuracy (or at least imprecision) of the Python output cited in the article??
"A:T, B:T - chances - H 6.0% | T 94.0% | occurs 34.0% of the time"
By the simplest of math for independent events, the chance of both A & B lying about the coin is 20% of 20%, or .2 * .2 = 0.04, or 4.0% ...
The "Let's prove it" section contains the correct analysis, including that our chance of being correct is 80% with two friends.
The code output for three players is similarly flawed, and the analysis slightly misstates our chance of being correct as 90.0% (correctly: 89.6%).
Or am I missing something about the intent or output of the Python simulation?
This is an interesting question!
But no, the Python output is correct (although I do round the values). It's counterintuitive, but these are two different questions:

(1) How often do both friends lie?
(2) Given that both friends said tails, what are the odds that the coin is actually heads?

Trivially, the answer to question (1) is 0.2 * 0.2 = 4%
The answer to question (2) is 0.02 / 0.34 ~= 6%
One way of expressing this is Bayes' Rule: we want P(coin is heads | both say tails):

P(coin is heads | both say tails) = P(both say tails | coin is heads) * P(coin is heads) / P(both say tails)

This gives us (0.04 * 0.5) / 0.34 = 0.02 / 0.34 ~= 6%
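Here's a quick sanity check of those two numbers (a tiny standalone snippet, not the article's simulation code):

p_lie = 0.2
p_truth = 1 - p_lie    # each friend tells the truth 80% of the time
p_heads = 0.5

# Question (1): how often do both friends lie?
p_both_lie = p_lie * p_lie    # 0.04

# P(both say tails) = P(heads) * P(both lie) + P(tails) * P(both truthful)
p_both_say_tails = p_heads * p_both_lie + (1 - p_heads) * p_truth * p_truth    # 0.34

# Question (2): P(coin is heads | both say tails), via Bayes' Rule
p_heads_given_both_tails = (p_both_lie * p_heads) / p_both_say_tails

print(round(p_both_lie, 3))                # 0.04  -> 4%
print(round(p_heads_given_both_tails, 3))  # 0.059 -> ~6%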
I think that might not be convincing to you, so we can also just look at the results for a hypothetical simulation with 2000 flips:

Of the ~1000 flips that land heads, both friends lie (and both say tails) 4% of the time: ~40 flips.
Of the ~1000 flips that land tails, both friends tell the truth (and both say tails) 64% of the time (0.8 * 0.8): ~640 flips.

We're talking about "the number of times they both lie divided by the number of times that they both say tails":

40 / 680 ~= 6%
We go from 4% to 6% because the denominator changes. For the "how often do they both lie" case, our denominator is "all of our coin flips." For the "given that they both said tails, what are the odds that the coin is heads" case, our denominator is "all of the cases where they both said tails" - a substantially smaller denominator!
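If you'd rather see it empirically, here's a minimal Monte Carlo sketch (a throwaway version, not the article's code) that conditions on the event we actually observed - both friends saying tails:

import random

def p_heads_given_both_tails(n_flips=1_000_000, p_lie=0.2, seed=1):
    rng = random.Random(seed)
    both_tails = 0   # times both friends reported tails
    heads = 0        # times the coin was actually heads among those
    for _ in range(n_flips):
        coin_is_heads = rng.random() < 0.5
        # Each friend reports independently, lying with probability p_lie;
        # a truthful report of "tails" means the coin is not heads.
        friends_say_tails = [
            coin_is_heads if rng.random() < p_lie else not coin_is_heads
            for _ in range(2)
        ]
        if all(friends_say_tails):
            both_tails += 1
            heads += coin_is_heads
    return heads / both_tails

print(p_heads_given_both_tails())   # hovers around 0.059, i.e. ~6%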
The three players example is just me rounding 89.6% to 90% to make the output shorter (all examples are rounded to two digits; otherwise I found that the output was too wide to fit on many screens without horizontal scrolling).
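If you want to reproduce the 89.6% yourself, it matches the binomial probability that a majority of three 80%-truthful friends tells the truth - a quick sketch, assuming "being correct" means siding with the majority of the three (names are mine, not lifted from the article's code):

from math import comb

p_truth = 0.8   # each friend tells the truth 80% of the time
n = 3

# Probability that a strict majority (2 or 3 of the 3) tells the truth,
# i.e. the chance that siding with the majority gives the right answer.
p_correct = sum(
    comb(n, k) * p_truth**k * (1 - p_truth)**(n - k)
    for k in range(n // 2 + 1, n + 1)
)

print(round(p_correct, 3))   # 0.896 -> shown as 90% after rounding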