Comment by dataflow
6 months ago
> By that standard, any optimization that changes scaling in any dimension changes semantics
That doesn't follow. This isn't like going from driving a car to flying an airplane. It's like going from driving a car to just teleporting instantly. (Except it's about space rather than time.)
It's a difference in degree (optimization), yes, but by a factor of infinity (O(n) overhead to 0 overhead). At that point it's not unreasonable to consider it a difference in kind (semantics).
Modern C compilers are able to transform something like this:
    for (int i = 0; i < n; i++) a += i;
To:
    a += n * (n - 1) / 2;
Is this an optimisation or a change in program semantics? I've never heard anyone call it anything other than an optimisation.
This will get a bit pedantic, but it's probably worthwhile... so here we go.
> Is this an optimisation or a change in program semantics?
Note that I specifically said something can be both an optimization and a change in semantics. It's not either-or.
However, it all depends on how the program semantics are defined, and they are defined by the language specification. That means your example is by definition not a semantic change: it occurs under the as-if rule, which permits any optimization that doesn't affect observable program semantics. In fact, I'm not sure it's even possible to write a program that is guaranteed to distinguish the two versions based purely on the language standard. Whereas with tail recursion it's trivial to write a program that will crash without the optimization but run arbitrarily long with it.
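To make that concrete, here's a minimal sketch (my own example; the names and the iteration count are illustrative): a tail-recursive sum that typically completes at -O2, where gcc and clang turn the tail call into a jump, but overflows the stack at -O0, where every call gets its own frame.

    #include <cstdio>

    // Tail-recursive sum: the recursive call is the last thing the
    // function does, so the compiler may reuse the current stack
    // frame instead of pushing a new one.
    unsigned long long sum_to(unsigned long long n, unsigned long long acc) {
        if (n == 0) return acc;
        return sum_to(n - 1, acc + n);  // tail position: nothing runs after this call
    }

    int main() {
        // ~100 million calls: constant stack space if the tail call
        // is eliminated, a crash on a typical 8 MB stack if not.
        std::printf("%llu\n", sum_to(100000000ULL, 0));
    }

Same source, same abstract-machine semantics, but one build runs to completion and the other dies.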
We do have at least one optimization that is permitted even though the as-if rule would otherwise forbid it: C++'s return-value optimization (RVO). People certainly consider that a change in semantics as well as an optimization.
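For anyone who hasn't seen it, a sketch of the named-RVO case (my own illustration): the copy constructor below has a visible side effect, yet the standard explicitly allows eliding the copy, so the same program can legitimately print two different things.

    #include <iostream>

    // A type whose copies are observable.
    struct Tracer {
        Tracer() { std::cout << "construct\n"; }
        Tracer(const Tracer&) { std::cout << "copy\n"; }
    };

    Tracer make() {
        Tracer t;
        return t;  // named RVO: the copy may be elided even though
                   // eliding it changes the program's output
    }

    int main() {
        Tracer t = make();  // typically prints just "construct";
                            // with gcc/clang's -fno-elide-constructors
                            // you see the copy too
    }

That's exactly the kind of observable difference the as-if rule normally forbids, carved out as a special permission.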
You do have a point. However, if I'm allowed to move the goalposts a little: not all changes in semantics are equal. If you take a program that crashes for certain inputs and turn it into one that is semantically equivalent except that in some of those crashing cases, it actually continues running (as if on a machine with infinite time and/or memory), then that is not quite as bad as one that changes a non-crashing result into a different non-crashing result, or one that turns a non-crashing result into a crash.
With this kind of "benign" change, all programs that worked before still work, and some that didn't work before now work. I would argue this is a good thing.
It can be. Especially when you do something undefined, though, the compiler can do all sorts of odd things while transforming code.
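A classic small example of that (my own illustration, and the exact behavior is compiler- and flag-dependent, which is rather the point): signed overflow is undefined, so the optimizer may assume it never happens.

    #include <cstdio>

    int main() {
        volatile int sink = 0;
        // Signed overflow is undefined behavior, so an optimizer may
        // treat "i + 1 > i" as always true and delete the exit test.
        // Unoptimized builds typically terminate when i wraps around;
        // gcc/clang at -O2 typically loop forever. Neither outcome is
        // guaranteed.
        for (int i = 0; i + 1 > i; i++) {
            sink = i;  // a side effect so the loop isn't removed outright
        }
        std::puts("done");
    }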
Amazing it can do that. How does it work?
That definitely does seem to change its semantics to me. I'm not a C expert, but surely this version has problems the previous one doesn't?
It does change the semantics if n is negative or large enough to cause an overflow. The challenge for the compiler is to somehow prove that neither of those things can happen.
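To spell out both failure modes (a sketch assuming 32-bit int; the function names are mine):

    #include <cstdio>

    // The original loop: runs zero times for n <= 0, and for
    // n = 65536 the result 2147450880 still fits in a 32-bit int.
    int sum_loop(int n) {
        int a = 0;
        for (int i = 0; i < n; i++) a += i;
        return a;
    }

    // The naive closed form. For n = -3 it returns 6 where the loop
    // returns 0; for n = 65536 the product n * (n - 1) is
    // 4294901760, which overflows int (undefined behavior) even
    // though the loop's result fits.
    int sum_naive(int n) {
        return n * (n - 1) / 2;
    }

    // Roughly what the compiler has to emit instead: guard the
    // negative case and do the multiplication in a type wide enough
    // to hold the intermediate product. (A sketch of the idea, not
    // any particular compiler's output.)
    int sum_safe(int n) {
        if (n <= 0) return 0;
        unsigned long long un = n;
        return (int)(un * (un - 1) / 2);
    }

    int main() {
        std::printf("%d vs %d\n", sum_loop(-3), sum_safe(-3));       // 0 vs 0
        std::printf("%d vs %d\n", sum_loop(65536), sum_safe(65536)); // both 2147450880
    }

So the transform is still valid under the as-if rule, but only because the compiler proves or sidesteps these cases rather than translating the formula literally.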