Comment by woodruffw
2 years ago
I think what you’re doing with Fil-C is cool, but I wouldn’t call a 200x slowdown a “small change.”
One of the interesting things that Rust has demonstrated is that you don’t have to choose between performance and safety and, in fact, that safety improvements in languages can actually result in faster programs (e.g. due to improved alias analysis). New technology/sexiness advantage aside, I think this is a significant driver of adoption.
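A minimal C sketch of that aliasing point, with illustrative names (restrict gives the compiler roughly the no-alias guarantee that Rust's &mut provides implicitly):

    /* Without restrict, the compiler must assume sum and vals may
       alias, so *sum gets re-stored and reloaded on every iteration.
       With restrict (roughly the guarantee Rust's &mut gives for
       free), *sum can stay in a register for the whole loop. */
    void add_all(long *restrict sum, const long *restrict vals, int n) {
        for (int i = 0; i < n; i++)
            *sum += vals[i];
    }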
> I think what you’re doing with Fil-C is cool, but I wouldn’t call a 200x slowdown a “small change.”
If you're bringing up the 200x, then you don't get what's going on.
It's extremely useful right now to have a compiler that's substantially correct so I don't have to deal with miscompiles as I grow the corpus.
Once I have a large enough corpus of tests, then I'll start optimizing. Writing compiler optimizations incrementally on top of a totally reliable compiler is just sensible engineering practice.
So, if you think that 200x is meaningful, then it's because you don't know how language/compiler development works, you haven't read my manifesto, and you have no idea where the 200x is coming from (hint: almost all optimizations are turned off for now so I have a reliable compiler to grow a corpus with).
> One of the interesting things that Rust has demonstrated is that you don’t have to choose between performance and safety and, in fact, that safety improvements in languages can actually result in faster programs (e.g. due to improved alias analysis). New technology/sexiness advantage aside, I think this is a significant driver of adoption.
You have to rewrite your code to use Rust. You don't have to rewrite your code to use Fil-C. So, Rust costs more, period. And it costs more in exactly the kind of way that cannot be fixed. Fil-C's perf can be fixed. The fact that Rust requires rewriting your code cannot be fixed.
We can worry about making Fil-C fast once there's a corpus of stuff that runs on it. Until then, saying speed is a shortcoming of Fil-C is an utterly disingenuous argument. I can't take you seriously if you're making that argument.
> So, if you think that 200x is meaningful, then it's because you don't know how language/compiler development works, you haven't read my manifesto, and you have no idea where the 200x is coming from (hint: almost all optimizations are turned off for now so I have a reliable compiler to grow a corpus with).
I actually did read it, the first day you made it public. A friend also sent it to me because you link my blog in it. Again, I think it's cool, and I'm going to keep following your progress, because I think Rust alone is not a panacea.
I've worked on and in LLVM for about 5 years now (and I've contributed to a handful of programming languages and runtimes over the past decade), so I feel comfortable saying that I know a bit about how compilers and language development work. Not enough to say that I'm an infallible expert, but enough to know that it's very hard to claw back performance when doing the kinds of things you're doing (isoheaps, caps). Isotyped heaps, in particular, are a huge pessimization on top of ordinary heap allocation, especially when you get into codebases with more than a few hundred unique types[1].
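As a toy sketch of the isotyped-heap idea (the general scheme described in [1], not any real allocator): each type gets its own heap, so a freed slot is only ever reused for objects of the same type, and every distinct type retains its own slots and free list.

    #include <stdlib.h>

    /* Toy isoheap: one per type. Recycled memory never crosses type
       boundaries, which blocks type-confusion use-after-free; the
       cost is that each unique type keeps its own slack, which is
       where the per-type memory overhead comes from. */
    typedef struct iso_heap {
        void *free_list;   /* linked through freed slots */
        size_t slot_size;  /* must be >= sizeof(void *) */
    } iso_heap;

    void *iso_alloc(iso_heap *h) {
        if (h->free_list) {
            void *p = h->free_list;
            h->free_list = *(void **)p;  /* pop a same-type slot */
            return p;
        }
        return malloc(h->slot_size);     /* grow this type's heap only */
    }

    void iso_free(iso_heap *h, void *p) {
        *(void **)p = h->free_list;      /* stays in this type's pool */
        h->free_list = p;
    }

With one static iso_heap per allocated type (e.g. { NULL, sizeof(struct widget) }, names hypothetical), a codebase with hundreds of unique types retains hundreds of partially used pools, which is the pessimization at issue.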
To be clear: I don't think performance is a sufficient reason to not do memory safety. I've previously advocated for people running sanitizer-instrumented binaries in production, because the performance hit is often acceptable. But again: Rust gets you both performance and safety, and is increasingly the choice for shops that are looking to migrate off their legacy codebases anyway. It's also easier to justify training a junior engineer to write safe code that can be integrated into a pre-existing codebase.
> You don't have to rewrite your code to use Fil-C.
If I read correctly, you provide an example of a union below that needs to be rewritten for Fil-C. That's probably an acceptable tradeoff in many codebases, but it sounds like there are well-formed C programs that Fil-C currently rejects.
[1]: https://security.apple.com/blog/towards-the-next-generation-...
> I've worked on and in LLVM for about 5 years now (and I've contributed to a handful of programming languages and runtimes over the past decade), so I feel comfortable saying that I know a bit about how compilers and language development work. Not enough to say that I'm an infallible expert, but enough to know that it's very hard to claw back performance when doing the kinds of things you're doing (isoheaps, caps). Isotyped heaps, in particular, are a huge pessimization on top of ordinary heap allocation, especially when you get into codebases with more than a few hundred unique types[1].
Isoheaps suck a lot more in the kernel than they do in userspace. I don't think it's accurate to say that isoheaps are a "huge pessimization". It's not huge, that's for sure.
For sure. Right now, Fil-C's memory usage is just not an issue, and neither is the cost of isoheaps.
Also, Fil-C is engineered to allow GC, and I haven't made the switch because there are some good reasons not to do it. That's an example of something where I want to pick based on data. I'll pick GC or not depending on what performs better and is most ergonomic for folks, and that's the kind of choice best made after I have a massive corpus.
> If I read correctly, you provide an example of a union below that needs to be rewritten for Fil-C. That's probably an acceptable tradeoff in many codebases, but it sounds like there are well-formed C programs that Fil-C currently rejects.
Yeah, but it's not a rewrite.
If you want to switch to Rust, it's not a matter of changing a union; it's changing everything.
If you want to switch to Fil-C, then yeah, some of your unions, and most of your mallocs, will change.
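A hypothetical before/after of the union case (the types here are made up, and the exact rules and allocation API are Fil-C's, not shown):

    #include <stdint.h>

    struct node;  /* illustrative element type */

    /* Before: pointer and integer share storage, so writing .bits
       and then reading .ptr manufactures a pointer out of raw bits,
       which a capability-carrying pointer representation can't allow. */
    union slot_before {
        struct node *ptr;
        uintptr_t bits;
    };

    /* After: separate fields. Slightly bigger, but the pointer's
       provenance is never laundered through an integer. The malloc
       changes are similarly local: raw byte-count allocations become
       whatever typed allocation Fil-C expects. */
    struct slot_after {
        struct node *ptr;
        uintptr_t bits;
    };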
For example, it took about two to three weeks, working about two hours a day, to convert OpenSSH to the point where the client works. I don't think you'd be able to rewrite OpenSSH in Rust on that kind of schedule.
Do you have a forecast as to what the slowdown will be after optimizations are implemented? 20x? 2x? 1.2x? 1.02x?
Thanks.
Somewhere between 1.02x and 2x.