Comment by sclangdon

2 years ago

It's not that we think it's arcane or that we're in our own "bubbles of thought"; it's that we aren't doing math. We're programming a computer. And a competent programmer would know, or at least suspect, that doing it with logarithms will be slower and more complicated for a computer. The author even points out that he wouldn't use his own solution.

P.S. Please look up the word literally.

I'm having a hard time imagining a situation where "printing out the number in a human readable format" is more time consuming than "figuring out what the number is".

I think a competent programmer might also ask themselves "am I prematurely optimizing?" if their first instinct is to pick the method that only works on a computer. I've operated in this space long enough that bit shifting is synonymous with taking the base-2 logarithm in my mind, but if I had to explain how my code works, I would use the logarithm explanation. I would be sure to point out that the computer takes the base-2 logarithm of a number much much MUCH faster than a logarithm in any other base, because it falls straight out of the binary representation.
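To make that equivalence concrete, here is a minimal sketch (Python chosen purely for illustration; both function names are mine, not from the article): counting the base-2 digits of an integer by repeatedly shifting it right is the same computation as floor(log2(n)) + 1.

```python
import math

def binary_digits_shift(n: int) -> int:
    """Count base-2 digits by shifting bits off until nothing is left."""
    count = 0
    while n > 0:
        n >>= 1
        count += 1
    return count

def binary_digits_log(n: int) -> int:
    """The same quantity expressed as a logarithm."""
    return math.floor(math.log2(n)) + 1
```

For positive n within float precision these agree: the shift loop is just floor(log2) computed one bit at a time.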

It's probably excessive to say that literally everyone is taught logarithms as the ideal solution to this problem, but logarithms are almost universally introduced by explaining that the number of digits of a positive integer n in base 10 is floor(log10(n)) + 1. So if you completed a high school education in the United States, you have almost certainly heard that much at least.
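That relation, sketched in Python (an illustrative helper of my own, not from the article): the digit count of a positive integer n is floor(log10(n)) + 1.

```python
import math

def digit_count(n: int) -> int:
    """Base-10 digits of a positive integer, via the logarithm.

    Caveat: for very large n, floating-point rounding in log10
    can be off by one near exact powers of ten.
    """
    return math.floor(math.log10(n)) + 1
```

So digit_count(999) is 3, while digit_count of a five-digit number like 12345 is 5.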

edit: printing out the number is almost always gonna be faster than figuring out the value of the number, if the speed of the operation matters. My original post implied the opposite. Part of being a competent programmer is recognizing that optimizing is sometimes bikeshedding.

The author's final suggested solution at the bottom of the article still relies on logarithms.

> doing it with logarithms will be slower and more complicated for a computer

This is a fascinating point of view, and while it isn't wrong from certain "low-level optimization golf" viewpoints, it is in part based on old assumptions about early chipsets that haven't been true in decades. Most FPUs in modern computers will do a basic logarithm in nearly as few cycles as any other floating-point operation. It is marvelous technology. That many languages wrap these CPU features in what look like library function calls, like Math.log(), instead of having some sort of "log operator", is as much a historic accident of mathematical notation as a relic of the era when logarithms were extremely slow for a human to compute.
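One consequence worth spelling out (a sketch of my own, not from the thread): once you have a single logarithm primitive, every other base is one division away via the change-of-base identity log_b(x) = log(x) / log(b), which is how a library can offer any base on top of one hardware-backed function.

```python
import math

def log_base(x: float, b: float) -> float:
    """Any-base logarithm built on the natural log: log_b(x) = ln(x) / ln(b)."""
    return math.log(x) / math.log(b)
```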

Logarithms used to be the domain of lookup books (you might have one or more volumes, if not a shelf-full); they were one of the keys to the existence of slide rules, and the reason an engineer would actually own a set of slide rules in different logarithmic bases. Mathematicians would spend lifetimes doing the complex calculations needed to fill a single book of logarithm tables.

Today's computers excel at it. Early CPU designs saved transistors, which made logarithms a problem for application and language design. Some of the most famous game designs pre-computed logarithm tables for a specific set of needs and embedded them in ROM, an interesting memory-versus-CPU-time trade-off. Today's CPU designs have plenty of transistors, and hardware support for logarithms is just about guaranteed. (And that's just CPUs; GPU designs can be logarithmic monsters in how many logarithms they can compute, and how fast.)
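The ROM-table trick mentioned above can be sketched in a few lines (a Python illustration of the general technique, not any particular game's code): precompute the answers for the input range you care about, then answer queries with a lookup instead of a runtime logarithm.

```python
import math

# Precomputed floor(log2) for every byte value; index 0 is a
# sentinel because log2(0) is undefined.
LOG2_TABLE = [0] + [math.floor(math.log2(i)) for i in range(1, 256)]

def floor_log2_byte(n: int) -> int:
    """Table lookup replacing a runtime logarithm, for 1 <= n <= 255."""
    return LOG2_TABLE[n]
```

On a classic chipset the table would live in ROM and the "query" would be a single indexed load.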

Yesterday's mathematicians would envy the speed at which a modern computer can calculate logarithms.

In 2023, if you are trying to optimize an algorithm away from logarithms toward some other mix of arithmetic, you are either writing retro games for a classic chipset like the MOS 6502, stuck by your bosses in a history-challenged backwards language such as COBOL, or massively prematurely optimizing something the CPU can already optimize better than you can. I wish that were something any competent programmer would know, or at least suspect. It's 2023; it's okay to use logarithms like a mathematician, because you aren't going to need the "optimization" of bit shifts and addition/subtraction/multiplication/division that obscures your actual high-level algorithmic intent and complexity.