Comment by beagle3
6 months ago
It is not an optimization; it changes program semantics. It converts programs that will eventually run out of stack regardless of the amount of available memory (and raise an exception in the process, for example, which a program might rely on) into programs that never do. Either way, the semantics are changed.
It should only be called Tail Call Elimination.
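A minimal sketch of what I mean, in Python (CPython does not perform TCE, so the exception path is observable; the recursion limit here is just illustrative):

    import sys

    def spin(n):
        # A tail call: with TCE this would run in constant stack space
        # and never return; without TCE the stack eventually overflows.
        return spin(n + 1)

    sys.setrecursionlimit(1000)  # illustrative; 1000 is also CPython's default
    try:
        spin(0)
    except RecursionError:
        print("halted by stack exhaustion")  # behavior a program might rely on

Without TCE this program halts (by catching the overflow); with TCE it would never halt.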
By that standard, any optimization that changes scaling in any dimension changes semantics, which, well, I’m not saying it’s wrong, but I would say it is exactly what people looking for optimization want.
I disagree.
An optimization that speeds a program up by 2x has the same effect as running it on a faster CPU. An optimization that packs things tighter into memory has the same effect as having more memory.
Program semantics usually refers to “all output given all input, for any input configuration”, ignoring memory use and CPU time, provided both are finite (but unbounded).
TCE easily converts a program that will halt, regardless of available memory, to one that will never halt, regardless of available memory. That’s a big change in both theoretical and practical semantics.
I probably won’t argue that a change that reduces an O(n^5) space/time requirement to an O(1) requirement is a change in semantics, even though it practically is a huge change. But TCE changes a most basic property of a finite memory Turing machine (halts or not).
We don’t have infinite memory Turing machines.
edited: Turing machine -> finite memory Turing machine.
>I probably won’t argue that a change that reduces an O(n^5) space/time requirement to an O(1) requirement is a change in semantics, even though it practically is a huge change
Space/time requirements aren't language semantics though, are they?
> By that standard, any optimization that changes scaling in any dimension changes semantics
That doesn't follow. This isn't like going from driving a car to flying an airplane. It's like going from driving a car to just teleporting instantly. (Except it's about space rather than time.)
It's a difference in degree (optimization), yes, but by a factor of infinity (O(n) overhead to 0 overhead). At that point it's not unreasonable to consider it a difference in kind (semantics).
Modern C compilers are able to transform something like this:

    for (int i = 0; i < n; i++) a += i;

To:

    a += n * (n - 1) / 2;

Is this an optimisation or a change in program semantics? I've never heard anyone call it anything other than an optimisation.
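Since the loop sums 0 through n-1, the closed form is n*(n-1)/2; a quick sanity check, in Python for brevity:

    # Check the closed form for sum(0..n-1) against the loop's result.
    for n in range(100):
        assert sum(range(n)) == n * (n - 1) // 2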
The important thing is whether there's a guarantee of it happening in a particular circumstance or not. For example, with Python reference counting, finalizers should in theory be called once you lose all references to a file (assuming no cycles), but you can't rely on it.
Python dicts preserved insertion order in 3.6, but that was an implementation detail that could have changed at any time; it only became a guarantee with Python 3.7.
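A short sketch of the difference, with a placeholder file name:

    # CPython typically closes a file as soon as its refcount drops to
    # zero, but the language does not guarantee when (or whether) the
    # finalizer runs, so portable code should not rely on it:
    f = open("example.txt", "w")  # placeholder path; created on open
    f = None  # usually closes the file on CPython; no guarantee elsewhere

    # The guaranteed way: deterministic cleanup with a context manager.
    with open("example.txt", "w") as f:
        f.write("data")  # file is guaranteed closed when the block exits

    # Dict ordering went the other way: an implementation detail in 3.6
    # became a language guarantee in Python 3.7.
    d = {"a": 1, "b": 2, "c": 3}
    assert list(d) == ["a", "b", "c"]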
> converts programs that will eventually run out of stack regardless of the amount of available memory (and raise an exception in the process, for example, which a program might rely on)
https://xkcd.com/1172/