Comment by MarkusWandel
13 hours ago
Shannon's information theory gives the channel capacity assuming optimal coding, but doesn't say what the optimal coding is.
Anyway: if a single observer who lies 20% of the time gives you 4 out of 5 bits correct, but you don't know which ones, and N such observers (N > 2) give you a good way of extracting more information (best-of-3 voting etc.), approaching a perfect channel in the limit of infinitely many observers... then by interpolation, N = 2 carries more information than N = 1. It just needs more advanced coding to exploit.
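A minimal sketch of the voting effect, assuming the observers err independently (my assumption, not stated above):

```python
# Per-flip error after majority voting over n independent observers,
# each wrong with probability p.
from math import comb

p = 0.2  # each observer lies 20% of the time

def majority_error(n: int, p: float) -> float:
    """Probability that a strict majority of n observers (n odd) is wrong."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7, 9):
    print(f"N={n}: error {majority_error(n, p):.4f}")
# N=1: 0.2000, N=3: 0.1040, N=5: 0.0579, ... -> 0 as N grows
```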
I don't have the math for this, but a colleague does, and after a few minutes in Matlab came up with about 0.278 bits/flip of (Shannon) channel capacity for the single observer, and I think around 0.451 bits/flip for the dual observers. That's the theoretical capacity with optimal coding. What coding scheme actually gets there, i.e. what redundancy to add to the bit stream... that's the hard part.
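The single-observer number is easy to check: it's the capacity of a binary symmetric channel, 1 − H(0.2) ≈ 0.278 bits. Here's a short sketch, under my assumption that the two observers act as independent binary symmetric channels on the same flip; on that model the dual-observer figure actually comes out a bit higher than the value quoted above, near 0.46, so the Matlab run may have modeled it slightly differently:

```python
# Shannon capacities under a simple model (assumed): each observer is an
# independent binary symmetric channel with crossover probability p = 0.2.
from math import log2

def H(q: float) -> float:
    """Binary entropy in bits."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

p = 0.2

# One observer: BSC capacity C = 1 - H(p).
print(f"single: {1 - H(p):.4f} bits/flip")  # 0.2781, the quoted ~0.278

# Two observers: when they agree (prob p^2 + (1-p)^2 = 0.68) the shared
# bit is wrong with probability p^2 / (p^2 + (1-p)^2) = 1/17; when they
# disagree (prob 0.32) the pair says nothing, i.e. it acts as an erasure.
agree = p**2 + (1 - p)**2
err = p**2 / agree
print(f"dual:   {agree * (1 - H(err)):.4f} bits/flip")  # ~0.4605
```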