Comment by KeplerBoy
4 days ago
There are some really simple examples of that. Just try adding 1 to a half-precision float in a loop. The accumulator will stop increasing at a mere 2048: since 2049 is not representable in half precision, 2048 + 1 rounds back down to 2048.
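Here's a minimal sketch of that experiment (assuming NumPy is available; its float16 type is standard IEEE half precision):

```python
import numpy as np

# Repeatedly add 1 to a half-precision accumulator.
acc = np.float16(0.0)
for _ in range(5000):
    acc = np.float16(acc + np.float16(1.0))  # keep the arithmetic in fp16

# Prints 2048.0, not 5000.0: 2049 is not representable in fp16
# (the spacing between representable values is 2 above 2048),
# so 2048 + 1 rounds back down to 2048 and the sum stops growing.
print(acc)
```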
So while the true sum keeps growing, the accumulator stays stuck at 2048 and the error grows at the same rate as the sum itself. A very interesting exercise. I'll get an AI to choke and puke on that.