
Comment by piadodjanho

6 years ago

To see a 0.1 error using _double_, you have to do at least 2*10^17 operations (assuming the worst case scenario and no subnormals).

If you are working at that scale, 0.1 cents is probably a cost you are willing to pay to avoid spending thousands on a software solution. The power saved by using floating point is likely greater than the power your computers would have to expend to compute an exact result.
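
For scale, here is a minimal sketch (the 0.1 + 0.2 example is an illustration of mine, assuming IEEE 754 round-to-nearest): each _double_ operation is exact to within half an ULP of its result, and how many such errors it takes to drift by 0.1 depends on the operands' magnitudes.

  fn main() {
      // The rounding error of a single operation: (0.1 + 0.2)
      // differs from the double nearest 0.3 by exactly 2^-54.
      let err = (0.1_f64 + 0.2_f64) - 0.3_f64;
      println!("one-operation error: {:e}", err); // ~5.55e-17
      // A 0.1 drift needs roughly 0.1 / 5.55e-17 ~ 2*10^15 such
      // errors, all pushing in the same direction.
      println!("operations to drift by 0.1: {:e}", 0.1 / err);
  }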

You can get a larger error than that using one operation.

  fn main() {
      // 2^53: from here on, consecutive f64 values are 2 apart,
      // so x + 1.0 rounds straight back to x.
      let x: f64 = 9007199254740992.0;
      assert_eq!(x + 1.0, x);
  }

  • You are absolutely right.

    When the magnitudes of two addends differ by more than the format's precision, the smaller one is lost entirely; for f64 that already happens at 2^53 (about 9*10^15), as your example shows. I should have taken that into account when defining the error boundaries.

    In dollars, you start having issues with cents when working with tens of trillions (see the sketch below).

    For the vast majority of people this won't be an issue.
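
    A quick sketch of that threshold (the $100 trillion figure is my illustration, assuming IEEE 754 f64): around 10^14, adjacent doubles sit 1/64 apart, so one cent can no longer be stored faithfully.

      fn main() {
          // 1e14 is exactly representable, but the next double above
          // it is 0.015625 (1/64) away, which is more than one cent.
          let x: f64 = 100_000_000_000_000.0;
          let with_cent = x + 0.01;
          println!("{}", with_cent - x); // prints 0.015625, not 0.01
      }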

I can give you a 1.0 error. Take a handful of numbers that add up to exactly 1.5, sum them as doubles, and then round the result to the nearest unit.

I'm too lazy to figure out a specific example, but sets of numbers where doubles round up and decimals round down (or vice versa) aren't terribly uncommon.
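
One such set, as an illustration (assuming IEEE 754 round-to-nearest and Rust's round(), which rounds halves away from zero): ten 0.15s. The exact decimal sum is 1.5, which rounds to 2, but the double sum lands just below 1.5 and rounds to 1.

  fn main() {
      // 0.15 has no exact binary representation; summing ten of
      // them drifts just below the exact decimal total of 1.5.
      let mut sum: f64 = 0.0;
      for _ in 0..10 {
          sum += 0.15;
      }
      println!("{:.17}", sum);      // prints 1.49999999999999978
      assert_eq!(sum.round(), 1.0); // exact decimal would give 2
  }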