Comment by tovej
12 hours ago
I don't know what world you live in, but I still definitely need to know the approximation error of the methods I use.
sin(x) has one of the simplest Maclaurin series:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! ...
For any partial sum of that series, the error is always strictly less than the absolute value of the next term in the series. The fact that this was your example of a "difficult" engineering problem is uh, embarrassing.
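The next-term bound for the alternating series is trivial to verify numerically. A minimal Python sketch (the function name and term count are my own, purely for illustration):

```python
import math

def sin_maclaurin(x, n_terms):
    """Partial sum of the Maclaurin series for sin(x), plus the
    magnitude of the first omitted term, which bounds the
    truncation error for this alternating series."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    next_term = abs(x) ** (2 * n_terms + 1) / math.factorial(2 * n_terms + 1)
    return total, next_term

# Four terms at x = 1: the actual error sits below the next-term bound.
approx, bound = sin_maclaurin(1.0, 4)
err = abs(approx - math.sin(1.0))
```

For x = 1 and four terms the omitted term is 1/9! ≈ 2.8e-6, and the measured error lands just under it, exactly as the alternating-series bound promises.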
For good measure, I would of course fuzz any component involving numerical methods to ensure it stays within bounds. _As any competent engineer would_.
And I absolutely work things out on pen and paper or a white board before implementing them. How else would I verify designs? I'm sure you're aware that fixing bugs is cheapest in the design phase.
Are you living in an alternate reality where software quality does not matter? I'm still living in the world where engineers need to know what the fuck they're doing.
On whose arithmetic?
You’re just showing me the blackboard approximation. How about just on x86? What are the bounds and how do you know?
Oh, IEEE 754 double precision floating point accuracy? Rule of thumb is 15-17 significant digits. You will probably get issues related to catastrophic cancellation around x=0. As I said earlier the easiest solution is just to measure in this case. You don't really need to fuzz a sine approximation, you can scan over one period and compare against exactly calculated tables. I would probably add a cutoff around zero and move to a linear model if there are cancellation issues.
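That scan is a few lines of Python. Here's a sketch (the 7-term cutoff and grid size are arbitrary choices of mine, and I'm using math.sin as the "exact" table):

```python
import math

def sin_series(x, n_terms=7):
    """7-term Maclaurin partial sum for sin(x)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Scan one period and record the worst absolute deviation
# from the reference implementation.
worst = 0.0
for i in range(10_000):
    x = -math.pi + i * (2 * math.pi / 9_999)
    worst = max(worst, abs(sin_series(x) - math.sin(x)))
```

The worst case shows up at the edges of the period, x near ±pi, where the first omitted term (pi^15/15! ≈ 2e-5) dominates; that matches the alternating-series bound from earlier.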
And if the measurement shows the approximation has too much floating point error, you can always move to Kahan summation or quad precision. This comes up fairly often.
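For reference, Kahan summation is tiny. A self-contained sketch, with math.fsum as the correctly rounded reference (the repeated-0.1 input is just a stock demonstration of accumulation drift):

```python
import math

def kahan_sum(values):
    """Kahan compensated summation: carry a correction term so the
    low-order bits lost in each addition get fed back in."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y  # recovers what the addition dropped
        total = t
    return total

# 0.1 is not exactly representable, so naive accumulation drifts;
# the compensated sum stays within a few ulps of the rounded result.
vals = [0.1] * 100_000
naive = sum(vals)
compensated = kahan_sum(vals)
reference = math.fsum(vals)  # correctly rounded reference sum
```

The compensated result tracks the correctly rounded sum far more tightly than the naive loop, at the cost of three extra flops per element.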
If I really had to _prove_ formally an exact error bound, that would take me some time. This is not something you would be likely to have to do unless you're building software for airplanes, or some other safety critical domain. And an LLM would absolutely not be helpful in that case. You would use formal verification methods.
"Oh, IEEE 754 double precision floating point accuracy?"
Ok, so we do agree! You DON'T want to go back to a system where everyone had to do their own arithmetic just to make a program! That's fabulous. I'm glad that we're in agreement.
Isn't it SO MUCH NICER to just deal with the vagaries of one arithmetic we've already agreed upon, instead of needing to become an expert in numerical analysis just to get along with things?