
Comment by KPGv2

19 days ago

Right. I don't know how many times I've been exasperated by how monads are perceived as difficult.

Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.

Technically it's also an applicative functor, but at the end of the day, that gives us a few trivial things:

- a constructor (i.e., a way to put something inside your monad, exactly how `[1]` constructs a list out of a natural number)

- map (everyone understands this because we use it with lists constantly)

- ap, which is basically just "map for things with more than one parameter"

Monads are easy. But when you tell someone "well, it's a box, and you can unwrap it and modify things with a function that also returns a box, and then you unwrap that box, take the thing out, and put it inside the original box..."

No. It is a flatmappable. That's it. Can you flatmap a list? Good. Then you already can use the entirety of monad-specific properties.
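To pin down what "flatmap a list" means, here is the array case in TypeScript; a minimal sketch using nothing beyond the standard `Array.prototype.flatMap`:

```typescript
// flatMap: map each element to a list, then flatten one level of nesting.
const words = ["hi", "yo"];

// map alone would give string[][]; flatMap collapses the extra layer.
const chars: string[] = words.flatMap(w => w.split(""));
// chars is ["h", "i", "y", "o"]
```

That "map, then flatten one level" shape is the whole interface a monad has to provide.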

When you start talking about Maybe, Either, etc. then you've moved from explaining monads to explaining something else.

It's like saying "classes are easy" and then someone says "yeah, well, what about InterfaceOrienterMethodContainerArrangeableFilterableClass::filter?" That's not a class! That's one method on a specific class. Not knowing it doesn't mean you don't understand classes. It just means you don't have the standard library memorized!

It's also important to note that in Haskell and other functional programming languages, there is no implied order of operations. You need a Monad type in order to express that certain things are supposed to happen after other things. Monads can also express that certain things happen "in between" two operations, which is why we have different kinds of Monads and mathematical axioms of what they're all supposed to do.

Outside of FP however, this seems really stupid. We're used to operations that happen in the order you wrote them in and function applications that just so happen to also print things to the screen or send bits across the network. If you live in this world, like most people do, then "flatmap" is a good metaphor for Monads because that's basically all they do in an imperative language[1].

Well, that, and async code. JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes, where most other programming languages would have gone with green threads or some other software-transparent async mechanism. To be clear, it's better than the callback soup you'd normally have[0], but working with bare Thenables is still painful. Just like working with bare Monads, which is why Haskell and JavaScript both have syntax to work around them (async/await, do notation, etc.).

Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.

[0] The FP people call this "continuation-passing style"

[1] To be clear, Monads don't have to be list-shaped and most Monads aren't.

  • There is an implied order of operations in Haskell. Haskell always reduces to weak head normal form. This implies an ordering.

    Monads have nothing to do with order (they follow the same ordering as Haskell's normalization guarantees).

    > JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes,

    It's impossible for something to be merely "monad-shaped." All asynchronous interfaces form a monad, whether you decide to follow the Haskell monad type class or decide to do something else. They're all isomorphic and form a monad. Any model of computation forms a monad.

    Assembly language quite literally forms a monoid in the category of endofunctors.

    Jacquard loom programming also forms a monoid in the category of endofunctors, because all processes that sequence things with state form such a thing, whether you know it or not.

    It's like claiming Indian mathematicians invented numbers to fit the addition algorithm. Putting the cart before the horse, because all formations of the natural numbers form a monoid under addition and a semiring with multiplication defined the standard way (they also all form separate monoids and semirings that we barely ever use).

    • > All asynchronous interfaces form a monad whether you decide to follow the Haskell monad type class or decide to do something else

      JS's then is categorically not a monad because it doesn't follow the monad laws.

      fn1 : a -> Promise<b>

      fn2 : b -> c

      fn3 : b -> Promise<c>

      With JavaScript, composing fn1 and fn2 with then gives you a -> Promise<c>. So then is isomorphic to map.

      With JavaScript, composing fn1 and fn3 with then gives you a -> Promise<c>. So then is isomorphic to flatmap.

      Therefore, with JavaScript, map is isomorphic to flatmap. Which obviously violates monad laws.
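      The collapse described above can be seen directly in a few lines of TypeScript: `then` accepts both kinds of callback and auto-flattens.

```typescript
const p: Promise<number> = Promise.resolve(1);

// Callback returns a plain value: then behaves like map.
const mapped: Promise<number> = p.then(x => x + 1);

// Callback returns a Promise: then flattens it, behaving like flatmap.
const flatMapped: Promise<number> = p.then(x => Promise.resolve(x + 1));

// Both are Promise<number>; a Promise<Promise<number>> is unobtainable.
```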

      There's a rather famous GitHub issue where someone points this out in the issue tracker for `then` development, and one of the devs in charge of `then`... leaves responses for posterity.


  • > Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.

    Actually "spicy container type" is maybe a better definition of Monad than you may think. There's a weird sort of learning curve for Monads where the initial reaction is "it's just a spicy container type", you learn a bit and get to "it is not just a spicy container type", then eventually you learn a lot more and get to "sure fine, it's just a spicy container type, but I was wrong about what 'container' even means" and then settle back down to "it's a spicy container type, lol".

    "It's a spicy container type" and "it's anything that is flatmappable" are two very related simplifications, if "container" is a good word for "a thing that is flatmappable". It's a terrible tautological definition, but it's actually not as bad of a definition as it sounds. (Naming things is hard, especially when you get way out into mathematical abstractions land.)

    There are flatmappable things that don't have anything to do with ordering or sequencing. Maybe is a decent example: you only have a current state, you have no idea what the past states were or what order they were in.

    Flatmappable things are generally (but not always) non-commutative: if you flatmap A into B you get a different thing than if you flatmap B into A. That can represent sequencing. With a Promise, `A.then(() => B)` is a different sequence from `B.then(() => A)`. But that's as much "domain specific" to the Promise Monad and what its flatmap operation is (which we commonly call `then` to make it a bit more obvious what its flatmap operation does: it sequences; A then B) as anything fundamental to a Monad. The fundamental part is that it has a flatmap operator (or bind or then or SelectMany or many other language- or domain-specific names), not anything to do with what that flatmap operator does (how it is implemented).

  • > You need a Monad type in order to express that certain things are supposed to happen after other things

    This is the kind of explanation that drives me absolutely batshit crazy because it is fundamentally at odds with:

    > Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.

    So, I think I understand flatmap, assuming that this is what you mean:

    https://www.w3schools.com/Jsref/jsref_array_flatmap.asp

    But this has absolutely nothing to do with "certain things are supposed to happen after other things", and CANNOT POSSIBLY have anything to do with that. Flatmap is a purely functional concept, and in the context of things that are purely functional, nothing ever actually happens. That's the whole point of "functional" as a concept. It cleanly separates the result of a computation from the process used to produce that result.

    So one of your "simple" explanations must be wrong.

    • Because you're not used to abstract algebra. JavaScript arrays form a monad with flatMap as the bind operator. There are multiple ways to make a monad out of list-like structures.

      And you are correct. Monads have nothing to do with sequencing (I mean, any more than any other non-commutative operator -- remember x^2 is not the same as 2^x).

      Haskell handles sequencing by reducing to weak head normal form which is controlled by case matching. There is no connection to monads in general. The IO monad uses case matching in its implementation of flatmap to achieve a sensible ordering.

      As for JavaScript flat map, a.flatMap(b).flatMap(c) is the same as a.flatMap(function (x) { return b(x).flatMap(c);}).

      This is the same as promises: a.then(b).then(c) is the same as a.then(function (x) { return b(x).then(c)}).

      Literally everything for which this is true forms a monad and the monad laws apply.
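      That associativity claim is directly checkable; here is a sketch in TypeScript using plain Promises:

```typescript
const a = Promise.resolve(1);
const b = (x: number) => Promise.resolve(x + 1);
const c = (x: number) => Promise.resolve(x * 10);

// a.then(b).then(c) ...
const left: Promise<number> = a.then(b).then(c);

// ... behaves the same as a.then(x => b(x).then(c)).
const right: Promise<number> = a.then(x => b(x).then(c));

// Both resolve to 20.
```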


People have different "aha" moments with monads. For me, it was realizing that something being a monad has to do with the type/class fitting the monad laws. If the monad laws hold for the type/class then you've got a monad, otherwise not.

So then when you look at List, Maybe, Either, et al. it's interesting to see how their conforming to the laws "unpacks" differently with respect to what they each do differently (what's happening to the data in your program), but the laws are just the same.
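The laws in question can be spelled out for the humble array, taking `[x]` as the unit and `flatMap` as bind; a sketch in TypeScript:

```typescript
const of = <A>(x: A): A[] => [x];
const f = (x: number): number[] => [x, x + 1];
const g = (x: number): number[] => [x * 2];

// Left identity: of(a).flatMap(f) equals f(a)
const leftId = of(3).flatMap(f);       // [3, 4], same as f(3)

// Right identity: m.flatMap(of) equals m
const rightId = [1, 2, 3].flatMap(of); // [1, 2, 3]

// Associativity: m.flatMap(f).flatMap(g) equals m.flatMap(x => f(x).flatMap(g))
const assoc1 = [1, 2].flatMap(f).flatMap(g);
const assoc2 = [1, 2].flatMap(x => f(x).flatMap(g));
// Both are [2, 4, 4, 6].
```

The same three checks, "unpacked" for Maybe or Either instead of arrays, look very different in the details but are the same laws.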

The reason this was an aha moment for me is that I struggled with wanting to understand a monad as another kind of thing: "I understand what a function is, I understand what objects and primitive values are, but I don't get that List and Maybe and Either are the same kind of thing; they seem like totally different things!"

  • Yes, I 100% agree. But I want to mention something that isn't a disagreement, just a further nuance:

    1. my explanation of monad is sufficient for people who need to use them

    2. your explanation of monad is necessary for people who might want to invent new ones

    What I mean by this is that if you want to invent a new monad, you need to make sure your idea conforms to the monad laws. But if you're just going to consume existing monads, you don't need to know this. You only need to know the functions to work with a monad: flatmap (or map + flatten), ap(ply), and the constructor (of/just/pure). Everything else is specific to a given monad. Like an Either's toOptional is not monadic. It's just turning Left _ into None and Right a into Some a.

    And needing to know these properties "work" is unnecessary, as their very existence in the library is pretty solid evidence that you can use them, haha.

Forget programming, everyday business and physics is monadic in function.

And if-then statements are functorial.

These are very general thought patterns.

  • > everyday business and physics is monadic in function.

    So?

    > And if-then statements are functorial.

    So?

    All the "this is hard" stuff around these ideas seems to focus on managing to explain what these things are but I found that to progress at the speed of reading (so, about as easy as anything can be) once it occurred to me to find explanations that used examples in languages I was familiar with, instead of Haskell or Haskell-inspired pseudocode.

    What I came out the other side of this with was: OK, I see what these are (that's incredibly simple, it turns out) and I even see how these ideas would be useful in Haskell and some similar languages, because they solve problems with and help one communicate about problems particular to those languages. I do not see why it matters for... anything else, unless I were to go out of my way to find reasons to apply these ideas (and why would I do that? And no, I don't find "to make your code more purely-functional" a compelling reason, I'm entirely fine with code I touch only selectively, sometimes engaging with or in any of that sort of thing).

    The "so?" is the part I found (and find) hard.

    • There is no 'so?' Haskell tends towards applicatives and monads because monads and applicatives are the preferences of haskellers. Just like JavaScript people may like dynamic typing, etc. These are design choices.

      By modeling various things as monads, you get the various principled monad extensions. Unlike normal programming where leaky abstractions are the expectation, using algebraic structures with principled laws means things just work.

      But this has nothing to do with monads in particular. Haskell's choice to do a lot with monoids provides a similar guarantee about things that combine. It's a preference. Nothing like monoids exists in other languages, because people are told they have to think with 'objects' or whatever.


> Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.

Awesome! Now I understand.

> Technically it's also an applicative functor

Aaaand you've lost me. This is probably why people think monads are difficult. The explanations keep involving these unfamiliar terms and act like we need to already know them to understand monads. You say it's just a flatmappable, but then it's also this other thing that gives you more?

  • But words like "encapsulation" or "polymorphism" or even "autoincrement" also sound unfamiliar and scary to a young kid who encounters them for the first time. But the kid learns their meaning along the way, in a desire to build their own game, or something. The feeling that one already knows a lot, sort of enough, and that it'd be painful and boring to learn another abstract thing is a grown-up problem :-\

    • Those words need definitions, but they can both be defined using words most people know.

      Casual attempts at defining Monads often just sweep a pile of confusion around a room for a while, until everything gets hidden behind whatever odd piece of furniture that is familiar to the person generating the definition. They then imagine they have cleared up the confusion, but it is still there.


  • I mean, people need to be familiar with mathematics. In mathematics, things form structures whether or not you understand them.

    For example, the natural numbers form a semiring under normal addition and multiplication, but you don't need to know ring theory to add numbers.

    People need to stop worrying about not understanding things. No one understands everything.

    • Now imagine if every single explanation of natural numbers talked about rings and fields. Nobody ever just says "they're the counting numbers starting from one." A few of them might say, "they're the counting numbers starting from one, and they form a ring and field over addition and multiplication." And I might think, I understand the first part, but I'm not sure what the second part is and it sounds important, so maybe I still don't know what natural numbers are.

      I'm not worried, but it's amusing to see this person say it's so simple, and then immediately trample on it.


  • I'm sorry, I wasn't clear. The "technically" was meant to signal: it doesn't matter to you, but to the pedants here who get off on saying "well ACKshually", I didn't forget it; it's just not relevant :D

    If you want a little more elucidation, what you need to know, unless you're aiming to be functional programming god, is that:

    - a monad is a FLATMAPPABLE

    - all monads are also applicative functors, which I will explain last because it's kind of a twist on MAPPABLE

    - all applicative functors, and thus all monads, are functors, which are MAPPABLEs

    - an applicative functor is essentially a mappable for functions that take more than one parameter

    I think applicative functors are the hardest to grok because it's not immediately obvious why they're necessary. The type signature is strange, and it's like "why would I ever put a function inside a container??" I wrote a lot of functional code in Kotlin and TypeScript before I finally understood their utility. The effect of this was that a lot of awkward code became much cleaner.

    So let's begin with functor (i.e., a mappable):

    Container<Integer>

    If you have a function Integer -> Text, a functor allows you to turn that Container<Integer> into Container<Text> using a function called `map`. We do this with arrays all the time in Python, JavaScript, etc. It's a very familiar concept; we just don't call it "functor" in those languages.

    BUT, what if you have

    Container<Integer>

    and the function you want to map with takes two parameters. A classic example is you want to use the Integer as the first argument of a constructor. Let's say Pair.

    So if Pair's constructor is: a -> a -> (a, a), you would first map Container<Integer> with PairConstructor. Now you have Container<Integer -> (Integer, Integer)>.

    To pass in the second Integer to finish constructing the tuple, you use the special property of applicative functors. This is often called "ap" (like "map" without the "m").
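    Here is a minimal sketch of that map-then-ap dance in TypeScript (`Box` is a hypothetical one-slot container invented for illustration, not a library type):

```typescript
// Box: a hypothetical one-slot container used to illustrate map and ap.
class Box<A> {
  constructor(readonly value: A) {}
  map<B>(f: (a: A) => B): Box<B> {
    return new Box(f(this.value));
  }
  // ap: apply a function sitting inside a Box to the value in this Box.
  ap<B>(bf: Box<(a: A) => B>): Box<B> {
    return new Box(bf.value(this.value));
  }
}

// Curried pair constructor: a -> a -> (a, a)
const pair = (x: number) => (y: number): [number, number] => [x, y];

// map leaves a partially applied function inside the Box...
const partial: Box<(y: number) => [number, number]> = new Box(3).map(pair);

// ...and ap supplies the second argument from another Box.
const result: Box<[number, number]> = new Box(4).ap(partial);
// result.value is [3, 4]
```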

    ---

    Now, I would say the ACTUAL most important thing about applicative functors is this:

    Imagine if you had a list of words. You want to make an API call for each word. API calls are often modeled with the Async monad (which is also, as mentioned above, definitionally an applicative functor).

    But if you mapped [Word] with CallApi, you would end up with [Async ApiResult]. This models "a list of successful and unsuccessful API calls."

    But what if you wanted Async [ApiResult] instead? (One might say this is an attempt to model "all API calls successful, but if one API call fails, the whole operation is considered a failure.")

    This is where applicative functors shine: pulling the applicative functor out of the container and wrapping the whole container. (There's more cool stuff to learn about the nature of this "container" but that'd be for another lesson, much like how you don't learn about primitives and interfaces on the same day in an OOP class.)

    Recall that constructing a list of N items would be

    a -> a -> a -> ... -> a (n times) -> [a]

    That looks an awful lot like one MAP followed by (n-1) APs, based on the discussion above! And that's exactly what it is.

    You can map the first api call and then ap the rest, and you end up going over the entire list, getting Async [ApiResult].

    Now, there are a lot of ways languages go about solving this kind of "fail if one of the operations fails, rather than compiling a list of all successes and failures" requirement.

    But the nice thing about using Functors, Monads, etc. is that you have a bunch of functions that work on these things, and they handle a ton of code so you don't have to.

    That collection of Words above? It's a list. Lists are Traversable, and all Traversables have the following function:

    traverse: (a -> Applicative b) -> Traversable a -> Applicative Traversable b

    In the example above, the Traversable is the list and the applicative values come from apiCall, so your code is as simple as

    traverse apiCall listOfWords

    No juggling around anything. That's it. You know your result will be "list of successful results, or a failure."
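    A sketch of that traverse in TypeScript, with Promise standing in for Async (`traverseP` and `apiCall` are hypothetical names invented here, not standard APIs):

```typescript
// traverseP: run a Promise-returning function over a list, collecting results
// in order. If any call rejects, the whole resulting Promise rejects.
function traverseP<A, B>(f: (a: A) => Promise<B>, xs: A[]): Promise<B[]> {
  return xs.reduce(
    (acc: Promise<B[]>, x: A) => acc.then(bs => f(x).then(b => [...bs, b])),
    Promise.resolve([] as B[])
  );
}

// A stand-in for the API call: word -> Promise of its length.
const apiCall = (w: string): Promise<number> => Promise.resolve(w.length);

// traverseP(apiCall, ["foo", "quux"]) resolves to [3, 4]
```

This is essentially what `Promise.all(words.map(apiCall))` gives you, which is why JavaScript programmers rarely notice they are traversing.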

    ---

    There are many more of these "type classes," and the real power comes from not needing to write a lot of code anymore because it's baked into the properties of the various type classes. Have a type that can be mapped to an orderable type? Bam, now your type is orderable and you never have to write a sort function for your type. Etc.