Comment by behnamoh
5 months ago
speaking of shell pipelines, what is the "right" way of implementing pipes?
- Elixir: data |> process(12) puts data as the FIRST arg of process (before 12).
- Gleam: data |> process(12, _) puts data as the "hole" arg ("_") of process.
So far so good, but these approaches are mainly just more convenient syntax for function calls - i.e., they don't do any error handling. Then you have Haskell:
- Haskell: >>= "binds" actions to guarantee execution order (even for actions that don't depend on the previous action's output!). This is fancier because it uses monads to encapsulate the computation at each step, and it can short-circuit on errors.
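To make the contrast concrete, here's a minimal Haskell sketch (the names plain, safeDiv, and chained are made up for illustration): (&) from Data.Function plays the role of |> and is just convenient application, while (>>=) over Maybe threads an error channel and stops at the first failure.

```haskell
import Data.Function ((&))  -- (&) is Haskell's built-in "pipe into" operator

-- Plain piping: just a nicer way to write nested calls, no error handling.
plain :: Int -> Int
plain x = x & (+ 1) & (* 2)

-- A step that can fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- (>>=) sequences the steps and short-circuits: once a step returns
-- Nothing, the remaining steps never run.
chained :: Int -> Maybe Int
chained x = Just x >>= safeDiv 100 >>= safeDiv 10

main :: IO ()
main = do
  print (plain 5)    -- 12
  print (chained 4)  -- Just 0   (100 `div` 4 = 25, then 10 `div` 25 = 0)
  print (chained 0)  -- Nothing  (first step fails, second never runs)
```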
I’m not sure that |> operators are the right analogy, but fwiw:
Clojure does either first or last position depending on the threading macro (-> vs ->>), and it offers lightweight lambdas similar to your second option.
The natural choice for a language like Haskell is final position: the rhs of the |> will be partially applied, and |> has type a -> (a -> b) -> b (see the sketch below).
In R, I think the piped value goes in the last slot, but most arguments on the right-hand side would be passed as keywords, so the 'last' slot would often be the first argument.
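A minimal sketch of that Haskell definition, using only the Prelude (it has the same shape as Data.Function's (&)):

```haskell
-- Because the right-hand side is partially applied, the piped value
-- ends up in the function's final argument position.
(|>) :: a -> (a -> b) -> b
x |> f = f x
infixl 1 |>

main :: IO ()
main = print ([3, 1, 2] |> map (* 10) |> take 2)   -- prints [30,10]
```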
The whole point of Unix pipes is that execution is parallel, so I'm not totally sure I get your point about guaranteeing execution order.
I think you're conflating "chaining function calls together" (aka "threading function calls") with Unix pipelines, which are all about running separate programs in parallel & connecting their I/O streams together (with the kernel regulating the flow of data between them).
Threading functions together is basically about being able to write data |> f(a) |> g(b) rather than g(f(data, a), b) - it's syntactic sugar for nested calls within a single program.
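To make that contrast concrete, here's a rough Haskell sketch of an actual pipeline built with the process package (the choice of seq and wc is arbitrary): two independent programs run concurrently, and the kernel moves the bytes between them.

```haskell
import System.Process

-- Roughly "seq 1 5 | wc -l": two separate processes connected by a real
-- pipe, not a chain of function calls inside one program.
main :: IO ()
main = do
  (_, Just seqOut, _, seqPh) <-
    createProcess (proc "seq" ["1", "5"]) { std_out = CreatePipe }
  (_, _, _, wcPh) <-
    createProcess (proc "wc" ["-l"]) { std_in = UseHandle seqOut }
  _ <- waitForProcess seqPh
  _ <- waitForProcess wcPh
  return ()
```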