Comment by qazxcvbnm
2 years ago
Thanks for the detailed breakdown of your analysis!
The case of erasing nodes after an 'unnecessary' duplication must be handled and cannot be dismissed, since it covers much of the core use case of lazy evaluation.
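To make that concrete, here is a minimal Haskell sketch of the pattern (the names are mine, purely for illustration): a value occurs twice, so a net-based runtime duplicates it, and a Church-boolean-style selector then erases one of the copies.

```haskell
-- Church-style booleans: `true` erases its second argument, `false` its first.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- `v` occurs twice, so an interaction-net runtime must dup it; the selector
-- then keeps one copy and erases the other, making the duplication
-- 'unnecessary' in hindsight and requiring an eraser node to clean it up.
choose :: (a -> a -> a) -> a -> a
choose cond v = cond v v

main :: IO ()
main = print (choose true (1 + 2 :: Int))  -- the erased copy is never evaluated
```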
Your breakdown suggests that compilers targeting HVM use some heuristic that places `&` immutable references on most data-like nodes and lazy duplication on code-like nodes. I'm quite sure that still doesn't solve the issue of unnecessary duplications in the general case, but it does seem plausible that such measures suffice to cut the Gordian Knot in practice, much as your superposition resolution eliminated the 'oracle': very practically sidestepping the uncommon, complicated case. Indeed, the distinction between code-like and data-like nodes dictates much of their behaviour during reduction, whereas in much of the PLT literature, and in pure interaction nets, every node is 'code-like'.
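For what it's worth, here is a minimal sketch of the kind of heuristic I have in mind, continuing in Haskell; the node representation and all names are my assumptions, not HVM's actual IR, and a real compiler would need something far more refined.

```haskell
-- Hypothetical term representation; not HVM's actual internal nodes.
data Node
  = Ctr String [Node]  -- constructor applied to fields (data-like)
  | Lam String Node    -- lambda abstraction (code-like)
  | App Node Node      -- application (code-like)
  | Var String

-- How a shared occurrence should be compiled.
data Sharing = ImmutableRef | LazyDup
  deriving Show

-- Crude heuristic: pure data (constructors all the way down) is shared via
-- immutable references; anything containing code gets a lazy dup node.
shareMode :: Node -> Sharing
shareMode n = if isData n then ImmutableRef else LazyDup
  where
    isData (Ctr _ fields) = all isData fields
    isData (Var _)        = True
    isData _              = False
```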
I'm not sure this is good enough in practice if, as I believe, HVM is meant to be a compiler target. I fear that good heuristics may not suffice when the 'unnecessary' erase-after-duplicates arise from complex code paths: they may surface only intermittently (but catastrophically), or only become clear on inspecting the compilation output.
Looking forward to more of your work!
No, I mean that users should choose whether to use lazy cloning or immutable references when sharing arguments, just as you can choose between several pointer types in Rust. I don't think compilers should make that decision, since I like to have full control over my program's runtime costs. When compiling a language like Haskell to HVM, we can simply use references by default and make lazy cloning opt-in (using some newtype or similar).
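For instance, a hypothetical opt-in wrapper in Haskell might look like the sketch below; `Dup` and the convention that the compiler recognizes this newtype are assumptions for illustration, not an existing HVM feature.

```haskell
-- Hypothetical marker newtype: a Haskell-to-HVM compiler could share plain
-- arguments via immutable references by default, and emit lazy-clone (dup)
-- nodes only for arguments wrapped in `Dup`.
newtype Dup a = Dup { unDup :: a }

-- Shared by reference (the default): both uses point at the same value.
twiceRef :: [Int] -> Int
twiceRef xs = sum xs + length xs

-- Shared by lazy cloning (opt-in): the wrapper signals a dup node.
twiceDup :: Dup [Int] -> Int
twiceDup (Dup xs) = sum xs + length xs
```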