Comment by pizlonator
10 days ago
Code size overhead is really bad right now, but I wouldn't read anything into that other than "Fil didn't optimize it yet".
Reasons why it's stupidly bad:
- So many missing compiler optimizations (obviously fixing those will improve perf too).
- When the compiler emits metadata for functions and globals, e.g. to support accurate GC and the stack traces you get on Fil-C panic, I use a totally naive representation built out of LLVM structs. Zero attempt to compress anything. I'm not doing any of the tricks that DWARF would do, for example.
- In many cases this means that strings, like the names of functions, appear twice: once for the purposes of the linker and a second time for the purposes of my metadata (see the sketch after this list).
- Lastly, an industrially optimized version of Fil-C would ditch ELF and just have a Fil-C-optimized linker format. That would obviate the need for a lot of the cruft I emit that allows me to sneakily make ELF into a memory-safe linker. Then code size would go down by a ton.
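
To make the metadata and string-duplication points concrete, here's a rough sketch in C of the kind of uncompressed per-function record I'm describing. This is not my actual layout; the struct, field names, and example values are all made up for illustration:

    #include <stdint.h>

    /* Hypothetical sketch of a naive per-function metadata record,
       emitted as a plain constant with no string table, no varint
       encoding, and no sharing between records. */
    struct function_metadata {
        const char *name;          /* function name, duplicated from the ELF symbol */
        const char *source_file;   /* full path, stored as another plain string */
        uint32_t    line;          /* line number for panic stack traces */
        uint32_t    num_ptr_slots; /* stack slots holding GC-visible pointers */
        const int32_t *ptr_slot_offsets; /* one offset per slot, uncompressed */
    };

    /* The duplication in practice: "parse_request" below would also
       live in .symtab/.strtab for the linker's benefit. */
    static const int32_t parse_request_slots[] = { 0, 16, 24 };
    static const struct function_metadata parse_request_meta = {
        .name             = "parse_request",
        .source_file      = "src/http/parse.c",
        .line             = 142,
        .num_ptr_slots    = 3,
        .ptr_slot_offsets = parse_request_slots,
    };

A DWARF-style encoding would intern the strings, delta-encode the offsets, and pack the small integers; I do none of that today.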
I wish I had data handy on just how much I bloat code. My totally unscientific guess is something like 5x.