Comment by feoren
2 years ago
> your comment takes an incredibly superior attitude and accuses its reader, every reader, of being stupid.
It's also an incredibly superior attitude to think that the discipline of software development is so uniquely special that other subjects, even basic math, have nothing to offer it, and that one could be an effective and productive software developer without ever besmirching one's perfect code with concepts from other schools of thought.
And "stupid" would mean "incapable of understanding basic math". This is more like "unwilling to even try". Mere stupidity would be fine: stupid people need jobs too. But a statement that the operation everyone else in the world would use is "unmaintainable" because the programmer is unwilling to refresh themselves on how logarithms work with a quick scan of its Wikipedia article, that's not stupidity. That's bordering on malpractice.
> When taking the log of a number, the value in general requires an infinite number of digits to represent.
So does taking a third of a number. So? Do you consider the code "x / 3.0" unmaintainable?
> Computing log(100) / log(10) should return 2.0 exactly, but since log(100) returns a fixed number of digits and log(10) returns a fixed number of digits, are you 100% confident that the ratio will be exactly 2.0?
Exactness was never a requirement. Do you really never use floating point? The reality is that showing "1000 kB" 1% of the time when you should have shown "1.0 MB" is actually fine -- nobody cares, everyone understands what it means -- and that applies to almost all floating point imprecision. It's important to know when it does matter, but it usually doesn't, and it's important for a professional to know when not to care. How much of your client's money are you going to spend worrying about tiny details they don't care about?
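To make that concrete, here's a minimal sketch of the log-based formatter under debate (my own illustration in Python; the `human_size` name and the decimal-kB units are assumptions, not from the thread). It deliberately tolerates the occasional "1000.0 kB" at a unit boundary, which is exactly the kind of imprecision being argued about:

```python
import math

def human_size(n: int) -> str:
    """Pick the display unit with log10 -- the approach called 'unmaintainable'."""
    if n < 1000:
        return f"{n} B"
    units = ["kB", "MB", "GB", "TB"]
    # Every 3 decimal orders of magnitude is one unit step; cap at the largest unit.
    exp = min(int(math.log10(n) // 3), len(units))
    return f"{n / 1000**exp:.1f} {units[exp - 1]}"

# The log(100)/log(10) question from above: rounding the ratio makes the
# result robust even if the division came out one ulp away from 2.0.
assert round(math.log(100) / math.log(10)) == 2
```

Near a boundary this yields "1000.0 kB" for 999999 bytes instead of "1.0 MB" -- harmless, which is the point.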
> Are you confident that such a calculation will also work for any power of 10? Maybe they all work on this intel machine -- does it work on every Arm CPU? Every RISCV CPU? Etc. I wouldn't be, but if I wrote a dumb "for" loop I'd be far more confident that I'd get the right result in every case.
Except that a 0.00001% imprecision doesn't matter in most cases, while an off-by-one error does. For loops are a much more common source of error than logarithms are.
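For comparison, here's a sketch of the loop alternative being defended (again my own illustration; `human_size_loop` is a hypothetical name). Note the two boundary conditions -- `>=` vs `>`, and the unit-index guard -- which are exactly where the off-by-one errors would live:

```python
def human_size_loop(n: int) -> str:
    """Pick the display unit by repeated division -- no logarithms involved."""
    units = ["B", "kB", "MB", "GB", "TB"]
    value = float(n)
    i = 0
    # Writing > 999 instead of >= 1000, or dropping the index guard,
    # are the classic off-by-one mistakes a loop invites.
    while value >= 1000.0 and i < len(units) - 1:
        value /= 1000.0
        i += 1
    return f"{n} {units[0]}" if i == 0 else f"{value:.1f} {units[i]}"
```

Both versions agree on typical inputs, including the "1000.0 kB" boundary case for 999999 bytes.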