Comment by selridge
18 hours ago
Was it a good thing that anyone writing software which included those things needed to work out not only how they behave on a blackboard, but how they behave on the real machine in question? And on the next machine over?
Do you yearn to return to that world? I suspect most people don't. It's not just knowing your own machine, but any machine the code could run on. It's also not just reaching for some 2nd-year bachelor's topics when the matter at hand is much more complicated. Where does your sine approximation fail? How do you know? Can you prove that? Does the compiler or the hardware decide to do things behind your back which vitiate any of those claims?
Knowing the answers to all of that every time you need a sine is not something 99.99% of engineers need to worry about. IT USED TO BE. But now it's not. No one is going back to that.
I don't know what world you live in, but I still definitely need to know the approximation error of the methods I use.
sin(x) has one of the simplest Maclaurin series:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! ...
For any partial sum of that series (once the terms are decreasing in magnitude, which for any fixed x they eventually are), the error is strictly less than the absolute value of the next term in the series. The fact that this was your example of a "difficult" engineering problem is, uh, embarrassing.
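Concretely, the bound takes about ten lines of Python to state and check (a sketch; `sin_taylor` and `first_omitted_term` are illustrative names, not any standard API):

```python
import math

def sin_taylor(x, terms):
    """Partial sum of the Maclaurin series: x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def first_omitted_term(x, terms):
    """Absolute value of the first term left out of the partial sum."""
    n = 2 * terms + 1
    return abs(x) ** n / math.factorial(n)

# Once the terms are decreasing in magnitude, the truncation
# error of the partial sum is bounded by the first omitted term.
x = 0.5
err = abs(sin_taylor(x, 4) - math.sin(x))
assert err < first_omitted_term(x, 4)
```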
For good measure, I would of course fuzz any component involving numerical methods to ensure it stays within bounds. _As any competent engineer would_.
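A fuzz pass for that can be as small as a seeded random scan against the standard library (a sketch under assumed parameters — the 10-term series, trial count, range, and tolerance are all made up for illustration):

```python
import math
import random

def taylor_sin(x, terms=10):
    """The approximation under test: a 10-term truncated Maclaurin series."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def fuzz_against_reference(approx, trials=50_000,
                           lo=-math.pi, hi=math.pi, tol=1e-8):
    """Sample random inputs and assert the error stays within tol."""
    rng = random.Random(0)  # seeded so any failure is reproducible
    for _ in range(trials):
        x = rng.uniform(lo, hi)
        err = abs(approx(x) - math.sin(x))
        assert err < tol, f"bound violated at x={x!r}: err={err}"

fuzz_against_reference(taylor_sin)
```

Seeding the generator matters: a bound violation found by fuzzing is only useful if you can replay it.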
And I absolutely work things out on pen and paper or a white board before implementing them. How else would I verify designs? I'm sure you're aware that fixing bugs is cheapest in the design phase.
Are you living in an alternate reality where software quality does not matter? I'm still living in the world where engineers need to know what the fuck they're doing.
On whose arithmetic?
You’re just showing me the blackboard approximation. How about just on x86? What are the bounds and how do you know?
Oh, IEEE 754 double precision floating point accuracy? Rule of thumb is 15–17 significant digits. You will probably get issues related to catastrophic cancellation around x=0. As I said earlier, the easiest solution is just to measure in this case. You don't really need to fuzz a sine approximation; you can scan over one period and compare against exactly calculated tables. I would probably add a cutoff around zero and move to a linear model if there are cancellation issues.
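The scan is a dozen lines, using `math.sin` as a stand-in for the exact table (a sketch; the step count and the 10-term series are arbitrary choices for illustration):

```python
import math

def taylor_sin(x, terms=10):
    """The approximation being measured: a 10-term truncated series."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def worst_error_over_period(approx, steps=10_000):
    """Scan one period [-pi, pi] and return (x, error) at the worst point."""
    worst_x, worst_err = 0.0, 0.0
    for i in range(steps + 1):
        x = -math.pi + 2 * math.pi * i / steps
        err = abs(approx(x) - math.sin(x))  # reference stands in for the table
        if err > worst_err:
            worst_x, worst_err = x, err
    return worst_x, worst_err

x, err = worst_error_over_period(taylor_sin)
```

For a truncated series the worst error lands at the ends of the interval, which is exactly the kind of thing a scan surfaces and a single spot check misses.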
And if the measurement shows the approximation has too much floating point error, you can always move to Kahan summation or quad precision. This comes up fairly often.
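Compensated summation is only a few lines (a sketch of the classic Kahan algorithm; the tiny-addend example is contrived purely to make the difference visible):

```python
import math

def kahan_sum(values):
    """Kahan compensated summation: carry the low-order bits that
    plain addition would discard in a separate compensation term."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y  # what the addition just rounded away
        total = t
    return total

# 1.0 plus ten addends each below half an ulp of 1.0: naive left-to-right
# summation rounds every one of them away, compensated summation does not.
vals = [1.0] + [1e-16] * 10
naive = sum(vals)          # stays at exactly 1.0
compensated = kahan_sum(vals)
exact = math.fsum(vals)    # correctly rounded reference
```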
If I really had to _prove_ formally an exact error bound, that would take me some time. This is not something you would be likely to have to do unless you're building software for airplanes, or some other safety critical domain. And an LLM would absolutely not be helpful in that case. You would use formal verification methods.
1 reply →