Comment by stavros
2 days ago
> Programs can be very close to 100% reliable when made well.
This is a tautology.
> I've never seen a calculator come up with the wrong answer when adding two numbers.
> And technically, most bugs are predictable in theory, they just aren't known ahead of time.
When we're talking about reliability, it doesn't matter whether a thing can be reliable in theory, it matters whether it's reliable in practice. Software is unreliable, humans are unreliable, LLMs are unreliable. To claim otherwise is just wishful thinking.
You mixed up correctness and reliability.
The iOS calculator will make the same incorrect calculation, reliably, every time.
Don't move the goalposts. The claim was:
> I've never seen a calculator come up with the wrong answer when adding two numbers.
1.00000001 + 1 doesn't equal 2, therefore the claim is false.
That's a known limitation of floating point numbers. Nothing buggy about that.
1.00000001f + 1u does equal 2f.
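For what it's worth, that's literally true in IEEE 754 single precision: 1e-8 is below float32's resolution near 1.0, so the literal rounds to exactly 1.0f before the addition even happens. A quick sketch in Python (the `struct` round-trip standing in for a C `float`, since the thread has no canonical language):

```python
import struct

def to_f32(x: float) -> float:
    # Round-trip through IEEE 754 binary32 (C's float)
    return struct.unpack('f', struct.pack('f', x))[0]

# float32 has ~7 decimal digits of precision; the spacing between
# representable values near 1.0 is 2**-23 ≈ 1.19e-7, so the extra
# 1e-8 is lost when the literal is rounded to the nearest float.
a = to_f32(1.00000001)
print(a == 1.0)          # True: 1.00000001f is exactly 1.0f
print(to_f32(a + 1.0))   # 2.0, so 1.00000001f + 1 really is 2f
```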
Sorry, but this annoys me. The claim might be false if I had made it after seeing your screenshot. But you don't know what I've seen in my life up to that point. The claim that all calculators are infallible would be false, but that's not the claim I made.
When a personal experience is cited, a valid counterargument would be "your experience is not representative," not "you are incorrect about your own experience."
That's not a tautology. You said "programs are the most reliable, though far from 100%"; they're just telling you that your upper bound for well-made programs is too low.
RE: the calculator screenshot - it's still reliable because the same answer will be produced for the same inputs every time. And the behavior, though possibly confusing to the end user at times, is based on choices made in the design of the system (floating point vs integer representations, rounding/truncating behavior, etc). It's reliable deterministic logic all the way down.
> I've never seen a calculator come up with the wrong answer when adding two numbers.
1.00000001 + 1 doesn't equal 2, therefore the claim is false.
Sure it does, if you have made a system design decision about the precision of the outputs.
At the precision the system is designed to operate at, the answer is 2.
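Concretely, in double precision the sum really is carried as 2.00000001; it's only the display layer, rounding to a fixed number of significant digits, that shows 2. A sketch in Python (the 8-digit cutoff is a hypothetical stand-in for whatever the calculator's display actually uses):

```python
# The underlying arithmetic keeps the extra digit...
total = 1.00000001 + 1          # stored as ~2.00000001 in binary64
print(format(total, '.9g'))     # '2.00000001' at 9 significant digits

# ...but a display rounding to 8 significant digits shows just 2,
# by design, not by error. (8 digits is an assumed display cutoff.)
print(format(total, '.8g'))     # '2'
```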
> > Programs can be very close to 100% reliable when made well.
> This is a tautology.
No it's not. There are plenty of things that can't be 100% reliable no matter how well they're made. A perfect bridge is still going to break down and eventually fall apart. The best possible motion-activated light is going to have false positives and false negatives because the real world is messy. Light bulbs will burn out no matter how much care and effort goes into them.
In any case, unless you assert that programs are never made well, your own statement contradicts your earlier claim that the reliability of programs is "far from 100%."
Plenty of software is extremely reliable in practice. It's just easy to forget about it because good, reliable software tends to be invisible.
> No it's not. There are plenty of things that can't be 100% reliable no matter how well they're made. A perfect bridge is still going to break down and eventually fall apart. The best possible motion-activated light is going to have false positives and false negatives because the real world is messy. Light bulbs will burn out no matter how much care and effort goes into them.
All these failure modes are known and predictable, at least statistically.
If you're willing to consider things in aggregate, then software is perfectly predictable too.