
Comment by WalterBright

9 hours ago

> What do you mean by respect?

The disinterest programmers have in using 80 bit arithmetic.

A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors. More bits would push back the cliff where the results turned into gibberish.

I know there are techniques to minimize this problem. But they aren't simple or obvious. It's easier to go to higher precision. After all, you have the chip in your computer.
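
To make that concrete, here's a rough sketch (mine, not anything from the Boeing code) of both points: naive summation in float drifts long before double or x87 long double does, and Kahan compensated summation is one of those not-simple-or-obvious techniques that recovers the lost bits without reaching for wider hardware.

```c
/* Sketch: accumulate 0.1 ten million times in several precisions.
 * The naive float total drifts well away from the target; double and
 * 80-bit long double push that cliff much further out; Kahan
 * compensated summation keeps float close without extra precision. */
#include <stdio.h>

int main(void) {
    const long n = 10000000;          /* target sum: 1e7 * 0.1 = 1,000,000 */
    float naive_f = 0.0f;
    double naive_d = 0.0;
    long double naive_ld = 0.0L;      /* 80-bit on x87 targets */

    float sum = 0.0f, comp = 0.0f;    /* Kahan running sum and compensation */

    for (long i = 0; i < n; ++i) {
        naive_f  += 0.1f;
        naive_d  += 0.1;
        naive_ld += 0.1L;

        float y = 0.1f - comp;        /* re-inject the previously lost bits */
        float t = sum + y;
        comp = (t - sum) - y;         /* bits lost in this addition */
        sum = t;
    }

    printf("naive float   : %.4f\n",  naive_f);
    printf("naive double  : %.4f\n",  naive_d);
    printf("naive 80-bit  : %.4Lf\n", naive_ld);
    printf("Kahan float   : %.4f\n",  sum);
    return 0;
}
```

On a typical x86-64 build the naive float total lands well away from 1,000,000, while double, long double, and the compensated float sum stay close.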

> The disinterest programmers have in using 80 bit arithmetic.

I don't know, other than to say there's often a tendency in this industry to overlook the better option in the name of the standard. 80-bit probably didn't offer enough marginal value to enough people to be worth the investment and complexity. I also wonder how much of an impact came from the fact that 80-bit quantities don't pack cleanly on 64-bit boundaries. Not to mention that moving 80-bit quantities costs 25% more memory bandwidth than 64-bit ones, and floating point work is very often bandwidth constrained. There's more precision in 80-bit, but it's not free, and as you point out, there are techniques for managing the lack of precision.
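
As a rough check of the layout point (the numbers depend on compiler and ABI; the 16-byte figures are what gcc on x86-64 Linux typically reports, where long double is the 80-bit x87 format padded out):

```c
/* Sketch: how 80-bit long double is actually laid out in memory.
 * On the x86-64 System V ABI it is usually padded to 16 bytes with
 * 16-byte alignment, so arrays of it move twice the data of double,
 * not just the 25% the raw 80-vs-64 bit ratio suggests. Other
 * compilers/ABIs (MSVC, 32-bit x86) report different numbers. */
#include <stdio.h>
#include <stdalign.h>

int main(void) {
    printf("sizeof(double)      = %zu, alignof = %zu\n",
           sizeof(double), alignof(double));
    printf("sizeof(long double) = %zu, alignof = %zu\n",
           sizeof(long double), alignof(long double));
    printf("double[4]      = %zu bytes\n", sizeof(double[4]));
    printf("long double[4] = %zu bytes\n", sizeof(long double[4]));
    return 0;
}
```

So for stored arrays the practical bandwidth penalty tends to be worse than the raw 25%, once padding is counted.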

> A bit of background - I wrote my one numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors.

This sort of thing shows up in even the most prosaic places, of course:

https://blog.codinghorror.com/if-you-dont-change-the-ui-nobo...

In any event, while we're chatting, thank you for your longstanding work in the field.