
Comment by haweemwho

2 years ago

Alright, let me be the skeptic here and ask. Honest question, not intending to just dismiss anybody outright. But I have questions.

I've looked up Victor Taelin and his HighOrderCo company that supposedly just raised $4.5M. So here we have this Brazilian guy who graduated from university in 2017, and I don't mean that in a bad way. It's just a fact, a data point. Fresh ideas are always welcome. From his Twitter feed he seems to be deeply into the crypto space. He is over-, or at least heavily, upselling this HVM project as the future of computing. Lots of buzz about interaction nets and linear logic, and claims that this is the next thing after Turing machines and the lambda calculus. Ok, strong claims. It makes me a tad suspicious, though. Outperforming GHC? Running a new chain that's not a chain, so it's not a cryptocurrency, but somehow it's a chain, but it's formally verified, hm, ok. And how can he so strongly claim this to be the superior thing if he just announced how to model complex numbers in his HVM project, and that GPT-4 helped him get this insight? I'd have expected Clifford algebras to be basic knowledge for everybody working in this space.

I'm only superficially familiar with ETH, with Haskell's GHC, with Rust, with HoTT. But I'd like to hear from people who know more than I do, and who are not already obvious fanboys of HighOrderCo with submarine PR interests -- how much weight should I put on this? Is it worth reading up on and getting my head around? I've spent an hour now trying to check things out, but apart from a lot of fluff and quite a lot of technical jargon that went over my head, I'm left confused.

Is there something here, or is this just supposed to confuse me into the next scam?

Edit: Just to add to this. https://news.ycombinator.com/item?id=35344514 discusses an article written about the underlying tech. It's a terrible article on the level of a newcomer to programming who doesn't know much about functional programming and even admits as much. Victor Taelin comments with "Very well written and beautiful article." This must be a scam. I'll stop wasting my time on this until somebody manages to convince me otherwise.

Hi, I've spent quite a lot of time digging into the papers on the computational models that Victor has linked to.

I do notice Victor's tendency to be hyperbolic and defensive, which is quite unfortunate, and I don't know or care much about the blockchain that Victor is also running. (By the way, I think Victor's real name is probably Victor Maia rather than Taelin, as you can corroborate from a number of sources; Taelin seems to just be his handle on GitHub. Victor also appears to have worked at the Ethereum Foundation with the title of Researcher: https://bobsummerwill.com/ethereum-foundation-people/?)

On the other hand, the computational model that Victor's HVM is built on was invented by Yves Lafont, and strongly inspired by Jean-Yves Girard (who is known for Linear Logic, Girard's paradox, and System F, described by Wikipedia as "forming a theoretical basis for languages such as Haskell"). I can assure you that Lafont and Girard are logicians of the highest caliber, and the promises that HVM intends to fulfill are indeed the same notions envisioned by Girard and the computer scientists working on the Geometry of Interaction and Interaction Nets. If you spend a few minutes considering the case in the README where HVM claims to outperform GHC (e.g. regarding lambda multiplication), it should not be hard to understand why it is possible in certain cases for HVM to outperform essentially anything else, and to appreciate the potential that improvements in computational models can bring.
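
To make that concrete, here is a rough Haskell sketch of the kind of term involved (the names and numbers are mine, not taken from the HVM README): Church-encoded arithmetic, where the "data" are themselves lambdas and most of the work hides under binders.

    -- Church numerals: numbers represented as iterated function application.
    type Church = (Int -> Int) -> Int -> Int

    church :: Int -> Church
    church n f = foldr (.) id (replicate n f)

    -- Church multiplication is just function composition.
    mul :: Church -> Church -> Church
    mul a b = a . b

    toInt :: Church -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (church 1000 `mul` church 1000))  -- prints 1000000

This is only meant to show the shape of such programs, not to reproduce the benchmark; the point is that once functions themselves become the data, how aggressively redexes under lambdas are shared starts to dominate the cost, and that is precisely where an optimal evaluator can win.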

In any case, Victor has indeed put serious optimisation work into a very promising computational model, and that alone is worth earnest commendation. Reading into the details of HVM makes it clear that, although it works today, it is still very much a prototype in progress. Personally, I think the computational challenges that still need to be tackled to bring Interaction Nets to life remain intractable (as, IIRC, do Asperti and Guerrini in chapter 10 of The Optimal Implementation of Functional Programming Languages; Victor mentions basing his implementation on Asperti and Guerrini's work, which makes it surprising that he is surprised that optimal reduction has edge cases), but any serious effort in this direction is badly needed and should probably be encouraged. If VCs are putting their money behind bringing Girard's ideas into the popular imagination, then perhaps for once they are doing some good.

  • Thanks for the balanced feedback. It's not too far from the impression I got. I do take Lafont and Girard seriously and have the greatest respect for them.

    • Just to further clarify the reasons why I think work on such computational models is badly needed: I think comprehensible systems fundamentally have a kind of impedance mismatch with the natural forms of expression of the von Neumann architecture (as noted elsewhere in the comments). For many years, functional programmers and computer scientists have looked forward to the immense performance potential of automatically parallelising pure terms over imperative operations, thanks to their explicit data dependencies, but despite the passing of decades that promise is yet to be fulfilled. Probably no one would consider functional languages for the most performance-sensitive of tasks.
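
      To make the promised parallelism concrete, here is a minimal GHC sketch (the names are mine, and it still needs -threaded and +RTS -N to actually use multiple cores): because the work items are pure and their data dependencies are explicit, evaluating them in parallel cannot change the result.

        import Control.Parallel.Strategies (parMap, rdeepseq)

        -- A pure stand-in for real work; no hidden effects, no shared state.
        expensive :: Int -> Int
        expensive n = sum [1 .. n * 100000]

        main :: IO ()
        main = print (sum (parMap rdeepseq expensive [1 .. 64]))
        -- parMap evaluates the list elements in parallel; purity guarantees
        -- the answer matches the sequential sum (map expensive [1 .. 64]).

      The catch, and the unfulfilled promise above, is that today the programmer still has to ask for the parallelism by hand.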

      As is well known in functional programming circles, mutation and non-referentially-transparent operations make systems non-composable and incomprehensible. Pure, composable computation can roughly be modelled as terms in the lambda calculus. However, elegant as the lambda calculus may be as a computational model for computer scientists, CPU instructions more closely resemble a Turing machine. To a very rough approximation, each CPU instruction costs O(1) time, whereas a lambda-reduction step has a much more complex and variable cost, depending on the term and the reduction strategy, and is highly non-local. The potential of Interaction Nets lies, in part, in making each evaluation step of a lambda term an essentially O(1), local operation.
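
      As a toy illustration of that variable cost (my own code, not how HVM or GHC actually work): under naive substitution, a single beta step has to walk the whole body of the function, and may copy the argument many times, so its cost depends on the term rather than being constant.

        data Term = Var String | Lam String Term | App Term Term
          deriving Show

        -- Substitute s for x; note it traverses the entire body.
        -- (Variable capture is ignored here for brevity.)
        subst :: String -> Term -> Term -> Term
        subst x s (Var y)   = if x == y then s else Var y
        subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
        subst x s (App f a) = App (subst x s f) (subst x s a)

        -- One leftmost-outermost beta step, if a redex exists. The cost of
        -- this "one step" is proportional to the size of the body, not O(1).
        step :: Term -> Maybe Term
        step (App (Lam x b) a) = Just (subst x a b)
        step (App f a)         = case step f of
                                   Just f' -> Just (App f' a)
                                   Nothing -> App f <$> step a
        step (Lam x b)         = Lam x <$> step b
        step (Var _)           = Nothing

      Interaction nets attack exactly this: every rewrite touches only a fixed, bounded piece of the graph.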

      Of course, Linear Logic reveals that many other facets of evaluation efficiency are deeply intertwined. Besides offering a new account of evaluation steps and their time cost, Interaction Nets are local, which allows embarrassingly parallel reduction (and hence the very possibility of HVM's GPU backend for arbitrary computation, though I believe that is still in progress), and they also allow automatic erasure of unnecessary subterms for large classes of terms (in effect, automatic garbage collection).
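
      Here is a drastically simplified toy of what those local rules look like (my own model, using trees instead of real wired graphs, and not HVM's actual agent set), just to show their character: each rule consumes one active pair and produces a bounded amount of new work, never inspecting the rest of the net.

        data Agent = Era | Con Agent Agent | Dup Agent Agent deriving Show

        -- One interaction: given the two agents of an active pair, return the
        -- new active pairs that replace them. Each rule does O(1) work.
        interact1 :: Agent -> Agent -> [(Agent, Agent)]
        interact1 Era       (Con a b) = [(Era, a), (Era, b)]  -- erasure propagates: local GC
        interact1 (Con a b) Era       = [(Era, a), (Era, b)]
        interact1 (Con a b) (Con c d) = [(a, c), (b, d)]      -- annihilation: auxiliary ports get wired together
        interact1 x         y         = [(x, y)]              -- remaining rules omitted in this toy

        -- Every pair in the worklist is independent of the others, so in
        -- principle they could all be rewritten simultaneously.
        stepAll :: [(Agent, Agent)] -> [(Agent, Agent)]
        stepAll = concatMap (uncurry interact1)

      That locality is what makes independent pairs reducible in parallel, and the eraser rule is the incremental garbage collection mentioned above.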

      Taken together, the problems that Interaction Nets solve seem to make the tradeoff between comprehensibility and efficiency an unnecessary one.