Comment by stefanha
2 years ago
Can someone explain or share references on how HVM (or GHC, or similar computational models) can beat optimized, compiled imperative languages like C, C++, or Rust?
I took a quick look at the HVM runtime and can vaguely make out the shape of the memory layout and the code that reduces terms. While it's cool to have a computational model that evaluates each expression only once and avoids duplicating computation, if it does not compile programs to machine code that exploits the hardware, then single-core performance will be worse than C/C++/Rust.
Something that compiles down to a few machine instructions in C/C++/Rust instead requires reducing a graph of terms - many CPU instructions for each reduction, and many memory accesses that may have poor locality.
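To make that concrete, here's a toy Rust sketch (my own illustration, not HVM's actual data layout or reduction rules) of the gap I mean: the native addition is roughly one instruction, while the graph version has to chase pointers, dispatch on tags, and write the result back to a heap.

```rust
// Toy illustration only -- not HVM's real representation. A heap of tagged
// term nodes and a reducer that rewrites Add(Num, Num) into Num.
#[derive(Clone, Copy)]
enum Term {
    Num(u64),
    Add(usize, usize), // heap indices of the two operand nodes
}

// Native version: the compiler emits roughly one `add` instruction.
fn add_native(a: u64, b: u64) -> u64 {
    a + b
}

// Graph-reduction version: loads, tag dispatch, and a heap store just to
// perform the same addition.
fn reduce_add(heap: &mut [Term], node: usize) {
    if let Term::Add(l, r) = heap[node] {
        if let (Term::Num(a), Term::Num(b)) = (heap[l], heap[r]) {
            heap[node] = Term::Num(a + b); // overwrite the redex with its result
        }
    }
}

fn main() {
    let mut heap = [Term::Num(2), Term::Num(3), Term::Add(0, 1)];
    reduce_add(&mut heap, 2);
    if let Term::Num(n) = heap[2] {
        assert_eq!(n, add_native(2, 3));
        println!("reduced to {n}");
    }
}
```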
Is the idea that programs contain enough parallelism, beyond what human programmers can exploit by hand, that HVM will make up for the single-core performance drag?
Maybe I'm missing what the HVM compiler does?
Maybe hardware acceleration with FPGAs or ASICs could be applied?
In other words, can this beautiful and elegant computational model be implemented with efficiency that matches C/C++/Rust?
Where have you read that HVM beats C or Rust? That is not something I've ever written, and we're far from that, especially considering the budget that has been thrown into Rust, although we could (and hope to) get there one day. It is possible in theory, because interaction nets are as lightweight as Rust, but have the added benefit of built-in parallelism. Please see the whole FAQ; I think it addresses all your questions. Let me know if you have any further questions.
Thanks for the reply. The FAQ doesn't really answer my specific questions, but I understand that HVM is still young and there aren't answers to everything yet.
The FAQ mentions that every reduction rule can be compiled to machine code in theory. That's the core of what I'm asking: can reduction be implemented efficiently? I suspect it's not enough to have fast reduction rules, because that still requires a runtime that spends CPU cycles and memory accesses on reduction housekeeping. To be fast, you need to minimize explicit reduction at runtime, because that's not what CPUs are good at.
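Here's the kind of housekeeping I mean, as a deliberately simplified sketch (not HVM's actual rule set or memory format): even if each interaction rule compiles to a short, branch-free sequence, every application still costs several heap loads and stores plus allocator traffic.

```rust
// Simplified sketch of interaction-net-style housekeeping; not HVM's actual
// rules or memory layout. Ports are encoded as (node_index << 2) | port_slot.
#[derive(Clone, Copy)]
struct Node {
    ports: [u32; 3], // [principal, aux1, aux2]
}

struct Net {
    nodes: Vec<Node>,
    free: Vec<u32>, // indices of freed nodes, reusable by later allocations
}

impl Net {
    // Annihilation of two nodes whose principal ports face each other:
    // connect their aux ports pairwise, then free both. Even compiled to
    // machine code, this is a handful of loads, stores, and free-list pushes
    // per reduction step.
    fn annihilate(&mut self, a: u32, b: u32) {
        let (na, nb) = (self.nodes[a as usize], self.nodes[b as usize]);
        self.link(na.ports[1], nb.ports[1]);
        self.link(na.ports[2], nb.ports[2]);
        self.free.push(a);
        self.free.push(b);
    }

    // Wire two (node, port) references to point at each other.
    fn link(&mut self, p: u32, q: u32) {
        self.nodes[(p >> 2) as usize].ports[(p & 3) as usize] = q;
        self.nodes[(q >> 2) as usize].ports[(q & 3) as usize] = p;
    }
}

fn main() {
    // Two nodes with their principal ports connected; aux ports point back
    // at themselves just to keep the example self-contained.
    let mut net = Net {
        nodes: vec![
            Node { ports: [1 << 2, 1, 2] },                       // node 0
            Node { ports: [0 << 2, (1 << 2) | 1, (1 << 2) | 2] }, // node 1
        ],
        free: vec![],
    };
    net.annihilate(0, 1);
    println!("freed nodes: {:?}", net.free); // [0, 1]
}
```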
I guess an aggressive HVM compiler would find sub-programs that can be treated as leaf nodes and replaced entirely with compiled machine code. The machine code doesn't use interaction nets, but it computes a result equivalent to the interaction net's. This doesn't mean that reduction and interaction nets aren't used; they still serve a purpose for lazy evaluation and parallelism at higher levels of the program. The compiler would have to figure out where to draw the line between the high-level reduction runtime and the leaf nodes. A rough sketch of the idea is below.
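Purely as speculation about what such a compiler could emit (not anything HVM does today, and the names here are hypothetical): the surrounding graph still decides when a node is demanded, but the leaf's body runs as ordinary native code instead of being unfolded into more graph.

```rust
// Hypothetical "compiled leaf" sketch: the reducer keeps term nodes at the
// top level, but a whole sub-program is collapsed into a single node that
// holds a pointer to ahead-of-time-compiled machine code.
enum Term {
    Num(u64),
    // A saturated call of a compiled leaf on already-evaluated arguments.
    LeafCall(fn(&[u64]) -> u64, Vec<u64>),
}

// The leaf itself is ordinary native code; no graph is built for its body.
fn dot3(args: &[u64]) -> u64 {
    args[0] * args[3] + args[1] * args[4] + args[2] * args[5]
}

// The reducer sees only one "reduction": jump into the compiled leaf.
fn reduce(term: Term) -> Term {
    match term {
        Term::LeafCall(f, args) => Term::Num(f(&args)),
        t => t,
    }
}

fn main() {
    let call = Term::LeafCall(dot3, vec![1, 2, 3, 4, 5, 6]);
    if let Term::Num(n) = reduce(call) {
        println!("dot product = {n}"); // 1*4 + 2*5 + 3*6 = 32
    }
}
```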
Anyway, this is just what comes to mind. I don't really know how HVM works, but thanks for sharing it.
It's a scam. Don't bother.