I recommend that anyone interested in this check out the work of Lionel Parreaux, in particular SimpleSub, which is equivalent to MLsub but substantially simpler.
As it turns out, Dolan's main contribution wasn't the algorithm (which is overly complex, as demonstrated by Parreaux's simpler implementation) but the type language: the insight that most subtyping constraints can be removed or simplified into plain union and intersection types, given certain simplifications of the type system (namely positive/negative types, and distributivity of union/intersection over function types).
https://lptk.github.io/programming/2020/03/26/demystifying-m...
https://dl.acm.org/doi/10.1145/3409006
Parreaux is continuing to work on this problem, and has since removed one of the assumptions/simplifications (positive/negative types) in his work on MLstruct.
https://github.com/hkust-taco/mlstruct
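To make the union/intersection idea concrete, here is a rough TypeScript sketch (all names are made up for illustration). MLsub/SimpleSub would *infer* types of these shapes; TypeScript merely lets us write the unions and intersections by hand, but it shows where each connective naturally lands:

```typescript
type Source = { read: () => string };
type Sink = { write: (s: string) => void };

// An intersection in a negative (argument) position: the caller must
// supply something that is both a Source and a Sink.
function pipe(x: Source & Sink): string {
  const v = x.read();
  x.write(v);
  return v;
}

// A union in a positive (result) position: the result may be either type.
function pick(flag: boolean): string | number {
  return flag ? "a" : 1;
}

const buf: string[] = [];
const r = pipe({ read: () => "hi", write: (s) => { buf.push(s); } });
```

The asymmetry (intersections on inputs, unions on outputs) is exactly the positive/negative discipline mentioned above.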
I like to have type annotations. Yeah, they can look ugly, but I'd much rather know what something is than have to infer it myself.
Makes reading code a lot easier if you know what you're doing.
Around every integer? Around every expression? On every let-bound function, on every where-bound function? Should the 'forall a.' be included if 'a' is a polytype?
FWIW, I've seen many IDEs and plugins add inline type annotations so it reads as though they are there, but aren't. I think the first time I saw this was with the JetBrains IntelliJ Rust plugin in 2019.
I would rather see the type annotations too. This whole essay has a flimsy premise.
I'm dimly aware that the Hindley-Milner system is closely related to System F, and that System F has many notable extensions, one of the most significant of which is System F<: (System F-sub), which I thought was the subject of a number of inference papers in the '80s and '90s.
What's the difference between that work decades ago and the work from Stephen Dolan in 2016 cited in this post? Like, what's the thing that is demonstrated now that we didn't have like 30 years ago?
https://en.wikipedia.org/wiki/System_F#System_F%3C:
Briefly, algebraic subtyping adds polarity to the types so that we don't end up with both S <= T and T <= S, which would imply equality. Instead we have S+ <= T- and T+ <= S-. Positive types represent a value source; negative types, a sink. For example, at a call site the arguments are negative and the return type is positive; inside the function, its formal parameters become positive and its return type negative. Variables exist in an "ambipolar" state and can act as either source or sink, depending on whether they appear on the rhs or lhs of an assignment.
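The source/sink intuition can be sketched in TypeScript, which exposes the same phenomenon as co-/contravariance (names below are hypothetical, and this is a sketch of the intuition, not Dolan's formalism):

```typescript
type Animal = { name: string };
type Dog = { name: string; bark: () => string };

// Parameter position is negative (a sink): a handler that consumes any
// Animal can stand in wherever a Dog-consumer is expected, because
// subtyping flips direction in negative positions.
const handleAnimal = (a: Animal): string => a.name;

// Return position is positive (a source): a producer of Dogs can stand
// in wherever an Animal-producer is expected.
const makeDog = (): Dog => ({ name: "Rex", bark: () => "woof" });

// Accepts any function that can consume a Dog -- including handleAnimal.
const withDog = (h: (d: Dog) => string): string => h(makeDog());
```

So `handleAnimal` flows into a negative position requiring `(d: Dog) => string`, and `makeDog` flows into a positive position requiring `() => Animal`, each direction governed by the polarity of the occurrence.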