Comment by JonChesterfield
1 year ago
ISO C should love this. It's really complicated. It adds ad hoc compile-time guards to stuff. C++ doesn't have this yet, so they get to lead the way for a moment.
I hate it but whatever - my love for C ended sharply after C99 fno-strict-aliasing anyway.
> C99 fno-strict-aliasing
I'm confused. C99 is a standard of C, while fno-strict-aliasing is a non-standard compiler switch for a specific implementation. Did you mean to put those two things next to each other? Especially since that switch appears to mean "violate the standard in a specific way", and the thing it violates (strict aliasing) goes back at least as far as the original C89 standard.
I basically agree with the flags the Linux kernel is using. Those that I'm missing are probably mistakes on my part. The ISO standard isn't of much use to me, but the language implemented by compilers with various flags certainly is.
I'm specifically calling out C99 as the one before C11, which introduced _Generic where overloading could have worked, and atomics where the GCC intrinsics should have been adopted instead. That feels like the tipping point between making the language better and diverging from reality.
The OP actually references an implementation (QAC? presumably a C compiler), which is nice. The current ISO language would have been improved if "has been implemented and some people use it" were a requirement for adding things to the language. I cannot believe anyone programmed with _Generic and thought "yeah, this is what I want."
C99 did not introduce the “strict” aliasing rules, C89 says more or less the same thing about that. It’s just that the inexorable (relative) slowing down of RAM and the gradual extinction of low-hanging optimization fruit led compiler authors to seriously consider aliasing-based optimizations at around the same time.
what's wrong with no-strict-aliasing?
That it's not default, and the strict aliasing rules break the illusion that C is "close to the metal" and that "the C programmer is in charge, not the C compiler". This illusion was never really true but it persists.
The default aliasing model interacts extremely poorly with atomic and vector types. It also means malloc can't be written in C, which really should be a sign that the language was devolving into nonsense. I don't want to write malloc in asm because ISO thinks C is only a plausible candidate for writing application code.
(I also want to do things like mutate bytes of the machine code but overall I can make peace with doing that in buffers that aren't currently executing. It's very much not allowed to cast some bytes to a function pointer, even if you've got the ABI right, and that's ridiculous)
Well, I am glad to inform you that I am writing a language (working name "Troglodyte") that will have exactly the desired ("do what I wrote, damn it!") semantics with regards to memory allocations, pointer casting etc. specifically for cavemen like me who sometimes want something more high-level than assembler but still low-level enough to be able to accidentally drop a sharp rock on my foot.
It will have machine-native integers, arrays of such integers, arrays of bytes, and functions that take/return machine-wide integers, and that's pretty much it for the data types. All arithmetic is two's-complement, signed or unsigned as you desire, and pointers are just numbers as well. Quality-of-life features include being able to straight-up include arbitrary binaries as function definitions.
What part in aliasing prevents malloc() to be written in C? All pointers can be cast to char * and back. Casting from void * is also legal.