Comment by exizt88
6 months ago
> OPERATORS: No precedence, executed left to right, parenthesize as desired. 2+3*10 yields 50.
How do you even come up with this?
PSA: POSIX shells (bash etc.) do the same thing for `&&` and `||`: they have equal precedence and associate left to right. `true || false && false; echo $?` prints 1, not 0, because it evaluates `(true || false) && false`, i.e. `true && false -> false`, rather than `true || (false && false) -> true`. Don't assume, like I once did, that they have C-style precedence :D
Because it's dead-simple to parse? Remember that not all machines back then had hardware call-stacks.
This approach is, arguably, more readable because it relies on a simple left-to-right evaluation. Programmers don't have to recall the complex, though often familiar, rules of operator precedence.
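To make the semantics concrete, here is a minimal Python sketch of a strict left-to-right evaluator with no precedence (tokenizer and function names are mine, purely illustrative, not taken from any particular implementation):

```python
# Strict left-to-right evaluation: fold the token stream into an
# accumulator, applying each operator as soon as it is seen.
import re

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b}

def eval_ltr(expr):
    # Tokenize into integers and single-character operators.
    tokens = re.findall(r"\d+|[+\-*/]", expr)
    acc = int(tokens[0])
    for i in range(1, len(tokens), 2):
        acc = OPS[tokens[i]](acc, int(tokens[i + 1]))
    return acc

print(eval_ltr("2+3*10"))  # 50, i.e. (2+3)*10, not 2+(3*10)
```

Note there is no stack and no lookahead at all: one accumulator and a single forward pass, which is why this was attractive on machines without a hardware call stack.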
Would reverse Polish notation be just as easy to parse and interpret?
RPN is slightly easier for a machine to parse and interpret, but harder for most humans to read and write. Left-to-right infix is the middle ground: nearly everyone can quickly adapt to writing and reading it, and it's still efficient on just about any system.
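For comparison, an RPN (postfix) evaluator is just a single stack loop, no precedence table and no recursion. A hypothetical Python sketch:

```python
# RPN evaluation: operands are pushed; each operator pops its two
# arguments and pushes the result. Names are illustrative.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[0]

print(eval_rpn("2 3 10 * +".split()))  # 32: 2 + (3*10)
print(eval_rpn("2 3 + 10 *".split()))  # 50: (2+3) * 10
```

The machine-side simplicity is clear; the human-side cost is that the reader has to simulate the stack mentally to see which grouping is intended.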
It’s definitely easier to parse, but you can use shunting yard to do operator precedence parsing using very little extra memory and no recursion. I feel like the language is just poorly designed.
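For readers who haven't seen it, the shunting-yard algorithm mentioned here really is that small: one explicit operator stack converts infix to postfix, honoring precedence, with no recursion. A Python sketch for binary operators only (the precedence table and names are illustrative; the full algorithm also handles parentheses):

```python
# Shunting yard (Dijkstra, 1961): move operators through a stack so
# that higher-precedence ones reach the output first.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop operators of greater or equal precedence
            # (>= makes the operators left-associative).
            while ops and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(tok)
    while ops:
        out.append(ops.pop())
    return out

print(to_postfix("2 + 3 * 10".split()))  # ['2', '3', '10', '*', '+']
```

The only extra memory beyond a plain left-to-right scan is the `ops` stack, whose depth is bounded by the number of distinct precedence levels plus nesting, which is why it was feasible even on small machines.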
To be charitable to its original designers, information was much less accessible in the 1960s than it is today. Although the shunting-yard algorithm had been published in the research literature in 1961, practitioners working 5-6 years later may plausibly have been unaware of it; it wasn't like today, when they could easily discover it on Wikipedia or by asking an LLM.
I implemented the same in some of my programming languages. If you look into very generic mixfix operators in some languages like Agda, you'll realize that operator precedence is a mess and it feels so much better to get rid of it. Of course, it makes the language unusable as a mainstream language, but it makes so much more logical sense.
Who came up with math precedence? Why is multiplication done first?
The explanation that makes the most sense to me is that it mostly avoids having to write out parentheses explicitly, especially for polynomials, which are a bunch of multiplied terms added together: e.g. 3x+2y rather than (3*x)+(2*y). In polynomials you can even drop the explicit multiplication symbol, which is much neater. And once you've adopted that convention for algebra, you have to apply it to plain arithmetic as well so everything matches up: 3*5+2*7 gives the same answer as evaluating the polynomial at x=5, y=7.
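A quick check of that claim, using Python's standard precedence:

```python
# With multiplication binding tighter than addition, plain arithmetic
# agrees with the polynomial 3x + 2y evaluated at x=5, y=7.
x, y = 5, 7
assert 3*x + 2*y == 3*5 + 2*7 == (3*5) + (2*7) == 29
print(3*5 + 2*7)  # 29
```

Under left-to-right evaluation the same expression would instead group as ((3*5)+2)*7, so the "substitute values into the polynomial" reading would break.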
One could argue it's the logical way, as multiplication is introduced as repeated addition.
Smalltalk does the same thing!
It's easier to parse since you can process it in order, which makes for an easier single-pass approach.
Simplicity of implementation?
Not just simplicity: the original implementation was for a very resource-constrained 1960s minicomputer, where a more complex implementation would have slowed the system down even more and left less memory for running the actual business application.
Tell me you would have come up with a Pratt parser yourself (or even a parser generator).
I honestly prefer that over complex precedence rules.