Comment by doodlebugging
4 days ago
I'll accept your math this morning, with the note that without checking the other 90 of the 100 answers you have no way, outside of accepting probabilities, to know whether your random examples are representative of the quality of the full set. It becomes a "trust me bro" situation.
I processed hundreds of thousands of miles of seismic data in my career. The first thing we needed to do for any processing project was to select a subset of the data for use in defining the parameters that would be used to process the full volume of data. We used brute stacks - a preliminary output in the process - to locate interesting areas with complex attributes to make sure we could handle the volume's complexities. In industry slang these were "carefully selected random examples".
It was inevitable that we would find a situation during the processing for which our parameters were not optimized because we had missed that edge case in selecting the examples used for testing.
In the same way, in real life, if you only demonstrably know that 10% of the test answers are correct, then it is also true that some edge case in the 90% of untested answers could leave all or part of that untested space false or sub-optimal.
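The odds of a spot check like this missing bad answers follow the hypergeometric distribution. A minimal sketch (the function name and the choice of 5 wrong answers are illustrative assumptions, not figures from the thread):

```python
from math import comb

def p_all_sampled_correct(total, wrong, sampled):
    """Probability that a uniformly random sample of `sampled` answers
    contains none of the `wrong` ones (hypergeometric: every draw must
    come from the pool of total - wrong correct answers)."""
    return comb(total - wrong, sampled) / comb(total, sampled)

# Spot-checking 10 of 100 answers when 5 are actually wrong:
# with these numbers the check misses all five wrong answers
# well over half the time.
p_miss = p_all_sampled_correct(total=100, wrong=5, sampled=10)
print(f"chance the spot check misses every wrong answer: {p_miss:.1%}")
```

So even a fairly dense 10% sample leaves it more likely than not that a handful of bad answers go entirely unseen, which is the "trust me bro" gap in numbers.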
If there was a point to my original post, it is this: you only know something is true when you have proven it to be true. A maximum likelihood of truth is not a guarantee of truth; it is only a guarantee that it is likely to be true. You won't know how sharp the sting will be until you peel the onion.
Having checked an answer also doesn't guarantee anything with certainty. Which, coincidentally, is the actual topic of the OP; the grading was just a tangential example and not a description of something actually happening anywhere in practice. But sticking with the grading example, the failure modes could be much more benign, such as the examiner failing to spot a mistake, or the answer being copied,...
Probabilities aren't a matter of faith; they're mathematics and as such represent a logical truism. Your critiques are just nitpicking for the sake of it and void of substance. Have a coffee and leave this topic.
> they're mathematics and as such represent a logical truism
I'm a full pot in now and find that I agree with this.
The fact is, though, that the article demonstrates that something previously considered logically true, or at least maximally likely, was proven false.
It's great to see that there are people, whether mathematicians or cryptographers, who will take something that has been a useful part of a stable verification process and try to find cracks or instabilities in it. The fact that they found an edge case that could be exploitable undermines trust in an important part of the process.
"Trust, but verify" wins again. Logically, this is the best way.
> You only know something is true when you have proven it to be true. A maximum likelihood of truth is not a guarantee of truth it is only a guarantee that it is likely to be true.
Better not use cryptographic signatures then