Comment by pixelfarmer

2 days ago

If you have a large set of, let's say, floats in the range between 0 and 1 and you add them up, there is the straightforward way to do it, and there is a way to pair them all up, add the pairs, and repeat that process until you have the final result. This more elaborate scheme (pairwise summation) gives a noticeably more accurate result than simply adding the values left to right, because each value passes through only about log2(n) additions instead of up to n. This is huge, because a lot of float-heavy processing disregards this inherent issue entirely, and it has an impact on the final result.
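A minimal sketch of the idea, assuming NumPy and float32 so the drift is visible at a modest array size (the `pairwise` helper is illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.random(1_000_000, dtype=np.float32)  # 1M floats in [0, 1)

# Straightforward left-to-right accumulation, kept in float32.
naive = np.float32(0.0)
for x in xs:
    naive += x

# Pairwise: add adjacent pairs and repeat until one value remains.
def pairwise(a):
    while len(a) > 1:
        if len(a) % 2:  # odd length: carry the last element along
            a = np.append(a[:-1:2] + a[1::2], a[-1])
        else:
            a = a[::2] + a[1::2]
    return a[0]

exact = xs.astype(np.float64).sum()  # float64 reference
print(f"naive:    {naive}")
print(f"pairwise: {pairwise(xs)}")
print(f"exact:    {exact}")
```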

Donald Knuth covers all of this in "The Art of Computer Programming" (Volume 2, Seminumerical Algorithms), with estimates of the error introduced, basic facts like a + (b + c) != (a + b) + c for floats, and similar things.
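The non-associativity is easy to see with plain double-precision floats:

```python
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- the 1.0 is absorbed: -1e16 + 1.0 rounds back to -1e16
```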

And believe it or not, real-world failures have come out of that corner. I remember Patriot missile systems requiring a restart because they did time accounting with floats and one part of the software didn't apply the corrections for it properly, so the tracking drifted further off-target the longer a system was running. They had to restart the systems every 24 hours or so to keep the error within certain limits until the issue got fixed (and the systems updated). Large structures have also broken apart due to float issues (like material thicknesses calculated too thin), etc.

There are some really simple demonstrations of this. Just try adding 1 to a half-precision float in a loop: the accumulator stops increasing at a mere 2048, because 2049 is not representable in float16, so 2048 + 1 rounds back down to 2048.
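A quick way to reproduce that, assuming NumPy for its float16 type:

```python
import numpy as np

acc = np.float16(0.0)
for _ in range(3000):
    acc += np.float16(1.0)
print(acc)  # 2048.0 -- the spacing between float16 values at 2048 is 2,
            # so 2048 + 1 rounds back down to 2048 and the sum is stuck
```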