Comment by pushrax
6 years ago
Storing money in floating point is fine. Just round to the nearest atomic unit when displaying. Sometimes this is a necessity when working with money in e.g. existing JSON APIs. You lose a few bits of range relative to fixed point storage but it's almost never a practical issue.
Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit.
> Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit.
A good example of this is trying to compute the sales tax on $21.15 given a tax rate of 10%. The exact answer would be $2.115, which should round to $2.12.
IEEE 64-bit floating point gives 2.1149999999999998, which is hard to get to round to 2.12 without breaking a bunch of other cases.
Here are three functions that try to compute tax in cents given an amount and a rate, in ways that seem quite plausible:
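For illustration (hypothetical C sketches of the kinds of approaches people reach for, not the original three functions):

```c
#include <math.h>

/* Hypothetical examples only; amount is in dollars, rate is a fraction (e.g. 0.10 for 10%). */

/* 1. Compute the tax in cents and round to nearest. */
long tax_v1(double amount, double rate) {
    return lround(amount * rate * 100.0);
}

/* 2. Add half a cent and truncate. */
long tax_v2(double amount, double rate) {
    return (long)(amount * rate * 100.0 + 0.5);
}

/* 3. Convert the amount to integer cents first, then apply the rate. */
long tax_v3(double amount, double rate) {
    return lround(round(amount * 100.0) * rate);
}
```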
On the four test problems, the right answers are 22, 65, 129, and 212 cents, and none of the three functions gets all four right.
I did some exhaustive testing and determined that storing a money amount in floating point is fine. Just convert to integer cents for computation. Even though the floating point representation in dollars is not exact, it is always close enough that multiplying by 100 and rounding works.
Similar for tax rates. Storing in floating point is fine, but convert to an integer by multiplying by an appropriate power of 10 first. In all the jurisdictions I have to deal with, tax rate x 10000 will always be an integer so I use that.
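For example, the two conversions might look like this in C (illustrative only, assuming `lround`):

```c
#include <math.h>

/* Dollars stored as a double -> integer cents. */
long to_cents(double dollars) {
    return lround(dollars * 100.0);
}

/* Tax rate stored as a double -> integer rate x 10000 (e.g. 0.1014 -> 1014). */
long to_rate10000(double rate) {
    return lround(rate * 10000.0);
}
```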
Given amt and rate, where amt is the integer cents and rate is the underlying rate x 10000, this works to get the tax in cents:
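Something like this, assuming round-half-up (a sketch of the shape of it, not the original code):

```c
/* amt: amount in integer cents; rate: tax rate x 10000.
 * Rounds half up. Example: $21.15 at 10% -> (2115 * 1000 + 5000) / 10000 = 212. */
long tax_cents(long amt, long rate) {
    return (amt * rate + 5000) / 10000;
}
```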
I'm not fully convinced that you cannot do all the calculations in floating point, but I am convinced that I can't figure it out.
> IEEE 64-bit floating point gives 2.1149999999999998, which is hard to get to round to 2.12 without breaking a bunch of other cases.
Your issue is with how to print the float, not with the precision of floating point. For instance, `21.15 * 0.1` can be printed as either 2.115 or 2.12, depending on how many decimal digits of precision you set in your print function. I managed to get those results with printf using `%.3f` and `%.2f`, respectively.
To produce a one-cent (0.01) error with the default FP rounding, it takes more than a quadrillion operations; each operation can only introduce an error of about 1*10^-17/2.
The "you shouldn't be using float to do monetary computation" is likely one the most spread float point misinformation.
The issue with your other examples is that you are rounding the data (and therefore discarding information). If you don't do any manual rounding, the result should be correct (I haven't tested it, though).
> Your issue is with how to print the float, not with the precision of floating point. For instance, `21.15 * 0.1` can be printed as either 2.115 or 2.12, depending on how many decimal digits of precision you set in your print function. I managed to get those results with printf using `%.3f` and `%.2f`, respectively.
I get 2.115 with %.3f and 2.11 with %.2f. Here's my test program. Same result on my Mac with clang and my Debian 8 server with gcc.
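A minimal program along those lines (not necessarily the exact one) reproduces that output:

```c
#include <stdio.h>

int main(void) {
    double tax = 21.15 * 0.1;
    printf("%.3f\n", tax); /* prints 2.115 */
    printf("%.2f\n", tax); /* prints 2.11  */
    return 0;
}
```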
The trick is that by default rounding happens using banker's rounding. Programming languages use this because this is what CPUs use. When you want to round your way, you need an extra digit and round manually:
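For instance (a sketch of that manual rounding, not the exact code):

```c
#include <math.h>

/* Compute the tax with one extra digit (tenths of a cent),
 * then round that last digit half up by hand. */
long tax_extra_digit(double amount, double rate) {
    long tenths = lround(amount * rate * 1000.0); /* 21.15 * 0.10 -> 2115 */
    return (tenths + 5) / 10;                     /* 2115 -> 212 */
}
```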
That works for 10% of $21.15, giving the desired 212.
However, for 10.14% of $21.15 it gives 215, but it should be 214. Another example is 3.5% of $60.70, for which it gives 213, but the correct answer is 212.
> Storing money in floating point is fine. Just round to the nearest atomic unit when displaying.
Well, it's not just a display issue. In accounting, associativity and commutativity are important. People do care that `a + b + c - a == c + b` evaluates to “true”.
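For instance, with doubles the classic 0.1/0.2/0.3 values already break it:

```c
#include <stdio.h>

int main(void) {
    double a = 0.1, b = 0.2, c = 0.3;
    printf("%d\n", a + b + c - a == c + b);           /* prints 0 (false) */
    printf("%.17g vs %.17g\n", a + b + c - a, c + b); /* 0.50000000000000011 vs 0.5 */
    return 0;
}
```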
It appears you did not see the critical point in the above comment. "Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit."
You’re right, I missed that. If you’re not going to do any arithmetic, you might as well store them as strings.
There's very little point in storing money in floats if you're not going to do arithmetic in floats; about the only use case I can think of is JavaScript and JSON APIs.
Aside from the cases you mentioned, there are other dynamic languages in which numbers are floating point by default, e.g. Lua. I agree, though.