I've been attracted to this - along with 2D cellular automata - a bit like a moth to a flame for some time. I find the little machine visualisations mesmerising, the heavily parenthesized Greek representation charming (they look like standing orders written in an alien language, looking for all the world like space invaders) and the tiny code sizes magical.
But I can't quite wrap my mind around the core concepts and internalize them into a mental model. It's too different from the simple world of imperative C or scripting languages I guess I call home. So I'm left watching das blinkenlights from the outside, as my attention span chokes on the layers of computer science incorporated into typical explanations. *shrug*
I'd be very interested if anyone knows of an ELI5-style alternate path I could walk to break each of the concepts down one at a time. (I ask because I think this is (currently) the kind of thing ChatGPT would struggle to present as effectively as a human.)
The best way to wrap your mind around the core concepts and internalize them into a mental model is writing an interpreter yourself. It's been abundantly clear to me since I was young that for anything involving math, you don't internalize it if you merely passively let someone else explain it, whether that's reading a textbook/blog, attending a professor's lecture, or watching a YouTube video. You have to do the exercises.
Lambda calculus is the same. You can easily define the data structure to represent a program in untyped lambda calculus and then write an interpreter for it. Then go implement some interesting concepts such as the Y combinator or the Omega combinator. If you find it too difficult to do things like arithmetic or linked lists in pure lambda calculus, you don't have to stick with Church numerals or Scott encodings: just introduce regular natural numbers and lists as ground types; when you later have a better understanding, write programs to convert regular numerals to and from Church numerals and bask in the fact that they are isomorphic.
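To make that concrete, here's roughly what I mean as a small Python sketch (my own toy, nothing canonical): a term type, capture-avoiding substitution, normal-order reduction, and Church numerals to test it with. Once something like this runs, booleans, pairs, and the Y combinator are natural next exercises.

    # A toy untyped lambda calculus interpreter: terms, capture-avoiding
    # substitution, and normal-order (leftmost-outermost) reduction.
    from dataclasses import dataclass
    from itertools import count

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Lam:
        param: str
        body: object

    @dataclass(frozen=True)
    class App:
        fun: object
        arg: object

    _fresh = count()

    def free_vars(t):
        if isinstance(t, Var):
            return {t.name}
        if isinstance(t, Lam):
            return free_vars(t.body) - {t.param}
        return free_vars(t.fun) | free_vars(t.arg)

    def subst(t, name, value):
        """Capture-avoiding substitution t[name := value]."""
        if isinstance(t, Var):
            return value if t.name == name else t
        if isinstance(t, App):
            return App(subst(t.fun, name, value), subst(t.arg, name, value))
        if t.param == name:                # the binder shadows `name`
            return t
        if t.param in free_vars(value):    # alpha-rename to avoid capture
            fresh = f"{t.param}_{next(_fresh)}"
            return Lam(fresh, subst(subst(t.body, t.param, Var(fresh)), name, value))
        return Lam(t.param, subst(t.body, name, value))

    def step(t):
        """One normal-order beta step, or None if t is already a normal form."""
        if isinstance(t, App):
            if isinstance(t.fun, Lam):
                return subst(t.fun.body, t.fun.param, t.arg)
            s = step(t.fun)
            if s is not None:
                return App(s, t.arg)
            s = step(t.arg)
            if s is not None:
                return App(t.fun, s)
        elif isinstance(t, Lam):
            s = step(t.body)
            if s is not None:
                return Lam(t.param, s)
        return None

    def normalize(t, limit=10_000):
        for _ in range(limit):
            s = step(t)
            if s is None:
                return t
            t = s
        raise RuntimeError("no normal form within the step limit")

    def church(n):
        """Church numeral n = λf.λx. f (f (... (f x)))."""
        body = Var("x")
        for _ in range(n):
            body = App(Var("f"), body)
        return Lam("f", Lam("x", body))

    # ADD = λm.λn.λf.λx. m f (n f x)
    ADD = Lam("m", Lam("n", Lam("f", Lam("x",
          App(App(Var("m"), Var("f")),
              App(App(Var("n"), Var("f")), Var("x")))))))

    print(normalize(App(App(ADD, church(2)), church(3))) == church(5))  # True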
Mathematics is not a spectator sport.
I had the luck of reading that quote while I was an undergrad. I did not actually pursue a career in pure math, but it certainly helps me every time I want to understand some math in order to apply it. (Lambda calculus, type systems, Fourier/Laplace/z-transform, ...)
I agree that you have to do the exercises instead of expecting others to explain and walk you through it.
https://www.youtube.com/watch?v=LXhsutNKhec
Just one programming book was able to help my son, who is 12 years old, learn lambda calculus and write a metacircular evaluator.
I think the most ELI5 approach is Alligator Eggs [0], which was designed for 8-year-olds to play like a game. You can find a lot of the advanced concepts outside the core also explained in terms of Alligator Eggs, and there are software visualizers, but there's also something to be said for hands-on learning: print it out yourself on some cardstock or cardboard, cut it out, personalize it with crayons, and play it with a child, or at least your inner child.
[0] https://worrydream.com/AlligatorEggs/
It's too basic for what you need, but the video from eyesomorphic [1] is a wonderful conceptual introduction.
[1] https://www.youtube.com/watch?v=ViPNHMSUcog
> Whilst it certainly isn't a contender for modern programming languages
Yet all that separates the λ-calculus from one modern programming language, Haskell, is a layer of syntactic sugar on top, and a runtime that effectuates its pure IO actions. We can in fact compile Haskell programs using just stdin/stdout for IO into terms of the untyped lambda calculus, as wonderfully demonstrated in Ben Lynn's IOCCC entry [1], or equivalently, into BLC programs.
[1] https://www.ioccc.org/2019/lynn/index.html
The video author is using 3b1b's manim (https://github.com/3b1b/manim). Wonderful presentation.
"To mock a mockingbird" (https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird) is a wonderful introduction to something that's sufficiently more abstract than lambda calculus that you'll probably find the latter pleasingly concrete afterwards, but it takes only tiny, bite-sized steps (err, mixed metaphors) to get you to understanding.
You might find my practical introduction to lambda calculus and combinatory logic based on javascript helpful. [1]
It's mostly based on other introductory resources, but I tried to write it from the practical, step-by-step perspective I found most useful myself.
[1]: https://static.laszlokorte.de/combinators/
If you have some time to spare, read SICP [0] and do the exercises :) (probably easiest to use DrRacket [1] as the interpreter)
[0] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
[1] https://docs.racket-lang.org/sicp-manual/SICP_Language.html
Spending time with pure functional programming (languages like Haskell) will open up these concepts in a real-world programming environment. Obviously languages like Haskell are more complex than this, but they're all fundamentally based on lambda calculus. That could be the first step away from the imperative thinking you describe.
(That was certainly my way in to this world anyway!)
Sorry for not providing explanations, but check this out: https://news.ycombinator.com/item?id=42256394 (The Art and Mathematics of Genji-Ko, by olooney). Very tangentially related, but also mesmerizing stuff, I think.
Author here. If anyone wants to see an example of an awesome program you can run on the 520 byte version of my lambda calculus virtual machine (Blc), then check out https://github.com/woodrush/lambdalisp. If you run the command in that project, it'll download my VM from the blog post, build a 20kb lambda expression you can pipe into it, and BOOM, a fully object-oriented LISP REPL will appear in your terminal. It's like magic. For an example expression, try typing (+ 2 3) and hit enter. Then type (let ((a 2) (b 3)) (+ a b)) and hit enter. You need an x86 Linux machine to do this right now.
Justine - thanks so much for all these amazing projects. You're an inspiration.
One thing I saw you write recently is that chasing the newest fads is a distraction. That makes sense, but if you don't mind me asking, what do you think one should focus on instead? Which are the classic languages, tools, mindsets, and CS concepts that one must master?
Hi troad, I read your book: https://justine.lol/sectorlisp2/troades.html
I don't remember saying that. You might be thinking about https://justine.lol/ape.html where I said we should be focusing on the old things that matter which aren't going away, like UNIX magic numbers, C libraries, and computer science. But I've got nothing against the new. I think AI for example is exciting. Ultimately you should focus on whatever summons your passion and curiosity. Since if you're tapped into that divine energy within, then you can make anything work, and others will agree. Even if it's just boring old numbers.
Does anyone have a gentle introduction to binary λ-calculus? I've tried reading other pages on this site but it goes a bit too fast for me to understand what the hell is going on with it.
I don't know if it will work for you, but I wrote a Quicksort using lambda calculus in Python, and I explained the process of writing it here:
https://lucasoshiro.github.io/software-en/2020-06-06-lambdas...
Please note that I'm not an expert in lambda calculus, just a curious nerd, and it won't explain everything, like the reductions, combinators and so on. But there I explain how to implement simple types (int, boolean, pairs and lists) using Church encoding, let expressions, and recursion using the Y combinator (yay, I finally used the expression "Y combinator" on HN!). That's everything we need to implement a quicksort (a relatively complex algorithm) using the almost nothing that we have in lambda calculus.
Another point is that it's all implemented in Python, using the Python notation instead of the lambda calculus notation, so you can run the code on your machine and play with the examples.
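To give a taste of the style (this is just the flavor of the technique, not the post's actual code; the names here are made up for this comment): Church booleans, numerals and pairs fit in a handful of Python lambdas, and the Z combinator (the strict cousin of Y that eager Python needs) gives you recursion.

    # Church encodings as plain Python lambdas (a sketch, not the linked post's code).
    TRUE  = lambda a: lambda b: a            # booleans select one of two arguments
    FALSE = lambda a: lambda b: b

    ZERO  = lambda f: lambda x: x            # numeral n applies f to x n times
    SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD   = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    PAIR  = lambda a: lambda b: lambda sel: sel(a)(b)   # a pair waits for a selector
    FST   = lambda p: p(TRUE)
    SND   = lambda p: p(FALSE)

    to_int = lambda n: n(lambda k: k + 1)(0)  # peek at a numeral as a Python int

    # Recursion via the Z combinator; fact recurses over native ints here
    # just to show the combinator itself.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

    three = SUCC(SUCC(SUCC(ZERO)))
    print(to_int(ADD(three)(three)))       # 6
    print(to_int(FST(PAIR(three)(ZERO))))  # 3
    print(fact(5))                         # 120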
Sorry, I meant binary λ-calculus specifically. I can't quite wrap my head around what the hell it even does with its I/O.
In case the other answers aren't sufficient, the first step is to understand the λ-calculus[0]. Then, De Bruijn indices[1]. Now, observe that the language we have only has three kinds of terms (you need familiarity with the λ-calculus to understand those terms... pun unintended): 1/ applications, 2/ abstractions, 3/ integers representing variables [introduced by abstractions]. For example:

    (λ (λ 1 (λ 1)) (λ 2 1))
Binary λ-calculus is then merely about finding a way to encode those three things in binary; here's how the author does it (from the blog post):

    00 means abstraction (pops in the Krivine machine)
    01 means application (push argument continuations)
    1...0 means variable (with varint de Bruijn index)
The last one isn't quite clear, but she gives examples in `compile.sh`.
To check your understanding, you may want to try to manually convert some λ-expressions using those encoding rules, starting with simple ones, and check what you have with what `compile.sh` yields.
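To see the conversion in code rather than on paper, here's a tiny encoder for those three rules (my own sketch, not the author's tooling; I'm assuming the usual BLC convention that a variable with de Bruijn index n is written as n ones followed by a zero, so the nearest binder is 10, the next one out 110, and so on):

    # Encode a de Bruijn term into a BLC bit string.
    # Terms: ("lam", body), ("app", fun, arg), ("var", index) with 1-based indices.
    def enc(term):
        tag = term[0]
        if tag == "lam":
            return "00" + enc(term[1])
        if tag == "app":
            return "01" + enc(term[1]) + enc(term[2])
        if tag == "var":
            return "1" * term[1] + "0"
        raise ValueError(term)

    # λ 1, the identity function, comes out as 0010.
    assert enc(("lam", ("var", 1))) == "0010"

    # The example term from above: (λ (λ 1 (λ 1)) (λ 2 1))
    example = ("lam",
               ("app",
                ("lam", ("app", ("var", 1), ("lam", ("var", 1)))),
                ("lam", ("app", ("var", 2), ("var", 1)))))
    print(enc(example))  # a 23-bit string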
[0]: https://www.irif.fr/~mellies/mpri/mpri-ens/biblio/Selinger-L...
[1]: https://en.wikipedia.org/wiki/De_Bruijn_index
I never would have been able to understand lambda calculus well enough to write the blog post if I started with [0]. I say just pull out the shell and start coding things. Then read [0] later to appreciate things on a deeper level.
I found this helpful https://brilliant.org/wiki/lambda-calculus/
Lisp is fairly similar and easy to pick up
"our 521 byte virtual machine is expressive enough to implement itself in just 43 bytes" whaat!
Most likely the 43-byte implementation defines only a subset of the functionality provided by the full VM, just enough to "bootstrap" into the full implementation.
In fact, if the VM is Turing complete, it can theoretically emulate any computation, including its full implementation, even from a small subset of operations.
The point is that the 43-byte implementation does not need to encode the entire VM explicitly. For example, if the VM has built-in primitives for looping, branching, and memory management, the minimal implementation can leverage these to rebuild the remaining functionality.
My IOCCC entry [1] explains exactly what the 43-byte program is. It's a self-interpreter for BLC8, the byte based version of Binary Lambda Calculus.
The 521 byte interpreter on the other hand is written in x86 assembly, a language much less suitable for writing BLC8 interpreters than BLC8 itself.
Btw, with my latest lambda compiler, the BLC8 self interpreter is only 42 bytes:
[1] https://www.ioccc.org/2012/tromp/
Discussed at the time (but before it shrank):
Lambda Calculus in 400 Bytes - https://news.ycombinator.com/item?id=30493713 - Feb 2022 (63 comments)
I feel like I've accidentally stumbled into /r/VXJunkies with some of the terminology being thrown around in here.
>nine="λλ [1 [1 [1 [1 [1 [1 [1 [1 [1 0]]]]]]]]]"
Looks like Brainfuck in disguise :)
Anytime I see a Justine post it's a banger. I had not read this one.
Very interesting. Is it possible to imagine implementing an OS based on this? I have been interested in lambda calculus for a while (I implemented a lambda calculus interpreter in Haskell) and was always wondering if people were working on "functional computers" and whether it makes sense.
Cheers,
Does it handle alpha-renaming? Most of the golfed interpreters I've seen over the years do not, and hence do not handle the full untyped lambda calculus.
Binary Lambda Calculus uses de Bruijn indices [1], thereby avoiding the need for alpha renaming.
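For anyone curious what that buys you in practice: with de Bruijn indices, beta reduction only needs index shifting, and there is literally no name anywhere to rename. A rough Python sketch (1-based indices as in BLC; this is an illustration, not the interpreter's actual code):

    # Terms: ("var", n), ("lam", body), ("app", fun, arg); indices are 1-based.
    def shift(t, d, cutoff=1):
        """Add d to every free variable (index >= cutoff) in t."""
        tag = t[0]
        if tag == "var":
            return ("var", t[1] + d) if t[1] >= cutoff else t
        if tag == "lam":
            return ("lam", shift(t[1], d, cutoff + 1))
        return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

    def subst(t, j, s):
        """Replace variable j in t with s; only shifting, no renaming."""
        tag = t[0]
        if tag == "var":
            return s if t[1] == j else t
        if tag == "lam":
            return ("lam", subst(t[1], j + 1, shift(s, 1)))
        return ("app", subst(t[1], j, s), subst(t[2], j, s))

    def beta(redex):
        """Reduce ("app", ("lam", body), arg) by one step."""
        (_, (_, body), arg) = redex
        return shift(subst(body, 1, shift(arg, 1)), -1)

    # K applied to I: (λ λ 2) (λ 1) reduces to λ λ 1,
    # a function that ignores its argument and returns the identity.
    print(beta(("app", ("lam", ("lam", ("var", 2))), ("lam", ("var", 1)))))
    # ('lam', ('lam', ('var', 1)))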
[1] https://en.wikipedia.org/wiki/De_Bruijn_index
Does not work on mac:
It doesn't work on modern Apple Silicon macs with M1-4 chips (although Rosetta [1] might be able to handle it somehow), but it works fine on my older x86 based iMac.
[1] https://en.wikipedia.org/wiki/Rosetta_(software)
No, it does not (I opened the x86 version of the terminal with Rosetta, ran the commands, and got the same error).
Moar upboats plz!