Comment by qazxcvbnm
2 years ago
Hi, I've spent quite a bit of time digging into the papers on the computational models that Victor has linked to.
I do notice Victor's tendency to be hyperbolic and defensive, which is quite unfortunate, and I don't know or care much about the blockchain that Victor is also running. (By the way, I think that Victor's real name is probably Victor Maia rather than Taelin, as you can corroborate from a number of sources; Taelin seems to just be his handle on GitHub. It also seems that Victor used to work at the Ethereum Foundation with the title of Researcher: https://bobsummerwill.com/ethereum-foundation-people/?)
On the other hand, the computational model that Victor's HVM is built on was invented by Yves Lafont and strongly inspired by Jean-Yves Girard (known for Linear Logic, Girard's paradox, and System F, which, according to Wikipedia, forms "a theoretical basis for languages such as Haskell"). I can assure you that Lafont and Girard are logicians of the highest caliber, and the promises that HVM intends to fulfill are indeed the notions envisioned by Girard and by the computer scientists working on the Geometry of Interaction and Interaction Nets. If you spend a few minutes considering the cases in the README where HVM claims to outperform GHC (e.g. the Lambda Multiplication benchmark), it is not hard to see why HVM can, in certain cases, outperform essentially anything else, and what improvements in computational models can bring.
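For intuition, here is a hedged toy sketch of mine (not the README's actual benchmark code): multiplying Church-encoded numerals is just function composition, and it is on lambda-encoded data like this that optimal reduction shines, since shared redexes are contracted at most once, whereas call-by-need can redo work hidden under a lambda. This toy only shows the encoding, not the asymptotics:

    {-# LANGUAGE RankNTypes #-}

    -- Church numerals: the number n is "apply s n times".
    type Church = forall a. (a -> a) -> a -> a

    two, three :: Church
    two   s z = s (s z)
    three s z = s (s (s z))

    -- Multiplication is composition of iterators.
    mul :: Church -> Church -> Church
    mul m n s = m (n s)

    toInt :: Church -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (mul two three))  -- prints 6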
In any case, Victor has put a very promising computational model through serious optimisation, and that alone is worth earnest commendation. Reading into the details of HVM makes it clear that although it works today, it is still very much a prototype in progress. Personally, I think the computational challenges that still need to be tackled to bring Interaction Nets to life remain intractable (as, IIRC, do Asperti and Guerrini in chapter 10 of The Optimal Implementation of Functional Programming Languages; Victor mentions basing his implementation on Asperti and Guerrini's work, which makes it surprising that Victor is surprised that optimal reduction has edge cases), but any serious effort in this direction is badly needed and should probably be encouraged. If VCs are putting their money behind bringing Girard's ideas into the popular imagination, then perhaps for once they are doing good.
Thanks for the balanced feedback. It's not too far from the impression I got. I take Lafont and Girard seriously and have the greatest respect for them.
Just to further clarify why I think work on such computational models is badly needed: I think comprehensible systems fundamentally have a kind of impedance mismatch with the natural forms of expression on a von Neumann architecture (as noted elsewhere in the comments). For many years, functional programmers and computer scientists have looked forward to the immense performance potential of automatically parallelising pure terms, whose data dependencies are explicit, over imperative operations; but despite the passing of decades, that promise is yet to be fulfilled. Probably no one would consider functional languages for the most performance-sensitive of tasks.
As is well known in functional programming circles, mutation and non-referentially-transparent operations make systems non-composable and incomprehensible, while pure, composable computation can roughly be modelled as terms in the lambda calculus. However, elegant as the lambda calculus may be as a computational model for computer scientists, CPU instructions resemble, to a rough approximation, a Turing machine. To a very rough approximation, each CPU instruction costs O(1) time, whereas a beta-reduction step has a far more complex and variable cost, depending on the term and the reduction strategy, and is highly non-local (substituting under a lambda can touch an arbitrarily large part of the term). Part of the potential of Interaction Nets lies in making each reduction step an essentially O(1), purely local operation.
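To make "local and O(1)" concrete, here is a minimal toy model of mine (the names and representation are hypothetical, not HVM's memory layout): an annihilation rewrite between two cells meeting on their principal ports inspects exactly those two cells and rewires a constant number of ports, no matter how large the rest of the net is.

    -- Wires are named by Ints; each binary cell has one principal
    -- port and two auxiliary ports. (A toy model, not HVM's layout.)
    data Kind = Con | Dup deriving (Eq, Show)
    data Cell = Cell { kind :: Kind, principal, left, right :: Int }
      deriving Show

    -- Annihilation: two cells of the same kind meet on a shared
    -- principal wire; both vanish and their auxiliary ports are
    -- wired together pairwise. Constant work, purely local.
    annihilate :: Cell -> Cell -> Maybe [(Int, Int)]
    annihilate a b
      | kind a == kind b && principal a == principal b
          = Just [(left a, left b), (right a, right b)]
      | otherwise = Nothing

    main :: IO ()
    main = print (annihilate (Cell Con 0 1 2) (Cell Con 0 3 4))
    -- Just [(1,3),(2,4)]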
Of course, Linear Logic reveals that many other facets of efficient term evaluation are deeply intertwined. Besides addressing the time-cost of evaluation steps, Interaction Net reduction is local (allowing embarrassing parallelisation, and hence the very possibility of HVM's GPU backend for arbitrary computation, which I believe is still in progress) and, for large classes of terms, automatically erases unnecessary subterms (automatic garbage collection).
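As a hedged illustration of the locality point, building on the toy annihilate sketch above (parList, rseq and using come from the standard parallel package): disjoint active pairs share no cells, so they can be rewritten simultaneously with no locks or coordination beyond collecting the resulting rewirings.

    import Control.Parallel.Strategies (parList, rseq, using)

    -- Rewrite every active pair independently; because each rewrite
    -- is local, evaluating the list in parallel is safe by construction.
    reduceAll :: [(Cell, Cell)] -> [Maybe [(Int, Int)]]
    reduceAll pairs = map (uncurry annihilate) pairs `using` parList rseq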
Taken together, the problems that Interaction Nets solve seem to make the tradeoff between comprehensibility and efficiency an unnecessary one.