I feel like you could build a semester-long university course out of this single web page, and it would be a really valuable way to spend that semester.
Agreed. This is sheer brilliance.
Technical work like this borders on art.
It is inspiring in the traditional sense of the word, not in the Ted Talks sense of the word, as in it inspires me to blow the dust off old ideas I've had and get working on them and to set the quality bar high for myself.
I was about to mention SectorLisp as a somewhat related project when I noticed that it was written by the very same genius, not to mention Blinkenlights, which has provided many hours of delight. Incredible!
Anyone interested in BLC can find another introduction which I found personally helpful here: https://tromp.github.io/cl/Binary_lambda_calculus.html
Indeed; that's one of the 3 references listed at the bottom of the article:
> See Also
> https://tromp.github.io/cl/Binary_lambda_calculus.html
> https://www.ioccc.org/2012/tromp/hint.html
> https://github.com/tromp/AIT
Even if I'm far from understanding your paper in depth, I like the way you explore new directions in lambda calculus, mainly following John Tromp's work, I think. I'm exploring lambda calculus with binary numbers of arbitrary size using a home-made language, http://lambdaway.free.fr/lambdawalks, in this wiki page, http://lambdaway.free.fr/lambdawalks/?view=oops6, and it's fun. Your work is inspiring. Thank you.
This is the best possible answer to my question at
https://news.ycombinator.com/item?id=29631288
Justine and making things ridiculously small. Maybe off-topic but seriously impressive projects by the author, consistently. It's amazing.
Everything she touches is gold. Also: she would be the most fearsome cybercriminal I can imagine.
I was thinking the same, and also that this is just a glimpse of what's possible, possibly intended that way to raise interest in those who are genuinely curious. Unfortunately I don't fully understand it from just glancing over the article (sorry, my bad), but I understand the implications, and those are incredibly fascinating to me. I've always been thinking about living code (just like DNA) living through the packet state.
Anyway I think abstracting away variables+lexer via sed and indexing is hot!
My mind is so rusty, but I can imagine constructing an elegant & expressive language using category theory + lambda calculus, and oh my god... instead of computing a single result using binary encoding, we could compute all results in parallel (array-based) by using a lossless compressor (similar to data fusion).
A finite state entropy (FSE) encoder with a multi-symbol alphabet (>2) could be used for this purpose. FSE is a very interesting development in asymmetric numeral systems (ANS) coding. I need to put all these pieces together, plus self-modifying code, because I don't have the feeling that this is a dumb idea at all.
Maybe infecting any device, Skynet-style, isn't that far of a stretch? Disregarding any malicious intent, this is simply inspiring. Maybe I'm just too much of a nerd, and enjoy this a bit too much... idk.
Slightly unrelated, but this thread is giving me a serious learning itch. Any book or online class recommendations on lambda calculus, combinators and that whole area of CS? Preferably written for the experienced programmer but very average mathematician.
This is a classic book (related to lambda calculus):
https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird
This is a very cool article on lambda calculus that relates to it:
https://dkeenan.com/Lambda/
I haven't read the book, but the description gives me strong Diamond Age vibes. Is it somewhat like the described book?
I enjoyed The Implementation of Functional Programming Languages by Simon Peyton Jones. It shows how you would implement a Haskell-like language (well, Miranda-like, but that's more or less the closed-source precursor to Haskell) using lambda calculus. It goes through lambda calculus, type checking, combinators and supercombinators, and G-machines.
[] - https://www.microsoft.com/en-us/research/publication/the-imp...
I'd say https://www.cis.upenn.edu/~bcpierce/tapl/ is the ultimate bible on this topic.
During my semester we only went through a few of its chapters but it was very enjoyable in my opinion.
Also check out the "sequel": https://www.cis.upenn.edu/~bcpierce/attapl/. Pierce wrote the first book, whereas this one is a series of articles edited by Pierce. However, Advanced Topics covers a few areas that I have been hard-pressed to find covered elsewhere: row types, dependent types, linear types, etc.
Good workshop/lecture in PyCon by David Beazley https://youtu.be/pkCLMl0e_0k
I wrote https://computationbook.com/ for that audience.
An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson really spells out the lambda calculus.
Regarding combinators the famous intro is the latter half of Raymond Smullyan's To Mock a Mockingbird.
Ha, Greg was my lecturer at Heriot-Watt for a few classes. I revisited his book last year during lockdown and would recommend it.
If you’re starting from zero this old classic could serve as an appetiser: https://tomstu.art/programming-with-nothing
The Informal Description section on Wikipedia is good for more background on Lambda Calculus:
https://en.m.wikipedia.org/wiki/Lambda_calculus
One really interesting development of the lambda calculus is the de Bruijn index. It does away with variable names altogether, which solves the name-collision problem, and it would be an interesting way to write hygienic macros.
[1] https://en.m.wikipedia.org/wiki/De_Bruijn_index
Tromp's binary lambda calculus, which is what Justine is using, is based on de Bruijn indices.
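For the curious, a sketch of the encoding: BLC writes an abstraction as 00, an application as 01, and a de Bruijn variable n (1-based, counting binders outward) as n ones followed by a zero. The tuple AST here is made up for illustration:

```python
# Minimal BLC encoder sketch. ('lam', body) is an abstraction,
# ('app', f, x) an application, and a plain int is a 1-based
# de Bruijn index (1 = nearest enclosing binder).

def blc(t):
    if isinstance(t, int):
        return '1' * t + '0'          # variable n -> 1^n 0
    if t[0] == 'lam':
        return '00' + blc(t[1])       # abstraction -> 00 <body>
    return '01' + blc(t[1]) + blc(t[2])  # application -> 01 <f> <x>

print(blc(('lam', 1)))                # identity, λx.x -> '0010'
print(blc(('lam', ('app', 1, 1))))    # λx.x x -> '00011010'
```

Note that '0010' is exactly the identity-function input fed to the interpreter elsewhere in this thread.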
It works, but it's very fiddly. My favourite way of solving the problem of names in the lambda calculus is Pitts-Gabbay nominal syntax. Wikipedia has a weak article on it, which I mean to fix up someday: https://en.wikipedia.org/wiki/Nominal_terms_(computer_scienc...
I love this. I love this person's mind. I've seen her (assuming that is the correct pronoun) stuff before, always fascinating.
And then I just discovered that for some unknown reason, this person has already blocked me on Twitter (I can recall exactly zero interactions with them, but I must have followed them already due to things like APE).
Not sure if she sees this but I'm sorry about whatever I said that I don't recall? Twitter username is same as this one here. Would love to see you in my feed again, will promise to keep big mouth shut
+1
Enjoy your new sponsorship, lol. Now go forth and continue to slay bits
Computerphile has a great video on Lambda Calculus https://www.youtube.com/watch?v=eis11j_iGMs
Does anyone know how the subroutine diagrams are generated, or have more info about these graphs in general?
They're called "Lambda Diagrams". See the inventor's homepage [1], and some animations [2]. There's actually a long and obscure history of visualizations for lambda calculus. One of the originals is "To Dissect a Mockingbird" [3], which if I recall correctly, was the main thing on the internet in 1996.
[1] https://tromp.github.io/ [2] https://www.youtube.com/playlist?list=PLi8_XqluS5xc7GL-bgVrx...
[3] https://dkeenan.com/Lambda/
Thank you!
Build https://github.com/tromp/AIT and you can use:

  ./blc Pbm foo.lam >foo.pbm
  convert foo.pbm -interpolate Nearest -filter point -resize 800% foo.png
Justine, this is one of the coolest things I’ve ever seen! I’m flabbergasted. Are you a wizard?
she surely is! this is so far the best thing I've seen on the internet today.
*She
Unless I've missed something.
You should also check out the rest of her work! It's all amazing, SectorLISP, Blinkenlights and APE (Actually Portable Executable) are the really well known ones, but many of the other projects are things many programmers would have as the centerpiece of their portfolio.
queen justine
I remember memorizing how to solve (reduce? resolve? expand?) lambda calculus expressions by hand, ones that went over multiple lines, for my degree, but I never had the faintest clue what it was or why I was learning it, and I quickly forgot how to do it.
> solve (reduce? resolve? expand?)
Reduction (in particular "beta-reduction") is essentially 'calling a function with an argument' (just defined in a precise, mathematically-rigorous way). Since lambda calculus only has functions (AKA "lambdas"), "reducing an expression" is another way of saying "running/executing a program".
"Expansion" would be going the other way; e.g. introducing an abstraction layer which, when reduced/executed, would result in the original expression.
I am so awestruck, I even quit doomscrolling.
Can I just say this whole presentation (like the previous Lisp boot-sector one) is just astonishingly well put-together, on top of the tour-de-force achievement.
The blc.S didn't build for me using `-oformat:binary`. Changing it to `--oformat=binary`, it compiled, but then I get core dumps. Running Linux x86_64, up-to-date Ubuntu 20.04.4.
Any suggestions?
Sorry about that!
Just tested myself on Linux with both GCC and Clang. Whatever you do, do not use any linker except bfd.
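Here's a loop that builds and smoke-tests it with each compiler (it assumes blc.S is in the current directory and that GNU bfd is the default linker; the `0010 0101` input is the identity function applied to an argument):

```shell
for cc in cc gcc clang
do echo "## $cc"
   $cc -no-pie -static -nostdlib -o blc -Wl,--oformat=binary blc.S || exit 1
   { printf 0010; printf 0101; } | ./blc; echo
done
```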
Yes! Thank you. On to exploring!
It looks like this would be great for an FPGA.
Great for naked transistors.
IIUC, this doesn't necessarily translate to more efficient programming just because the program size is small. It's elegant; I'm just wondering about possible applications. Maybe I'm not understanding how currying works.
Author here. Love SectorLambda for its mathematical beauty. If you're hungry for more efficient practical computation, check out my C language projects like Cosmopolitan Libc and Redbean. For instance, rather than processing data at 16kbps, they're able to do things like crc32 at 22 GB/s https://github.com/jart/cosmopolitan/blob/d6a039821f927cc81b... and memset at 126 GB/s https://github.com/jart/cosmopolitan/blob/d6a039821f927cc81b...
Awesome stuff. Thank you very much for the follow-up as well.
Is there any interest in a Forth-style approach?
Indeed I wish there was a video of someone explaining this article on a whiteboard/chalkboard.
Michael Penn, Burkard Polster, Matt Parker, or one of the Numberphile guys?
>you could prefix the compressed file with this 400 byte interpreter to get autonomous self-extracting archives that anyone can use.
What could possibly go wrong?
Assuming you verified that it's this particular 400-byte interpreter, which is as sandboxed as it gets, then the worst that can happen is it hogging all your memory and CPU.
Given Spectre and Rowhammer, I just don't believe that. This gives attackers full degrees of freedom to reactively search for a crack in the process space. With a normal statically compiled decoder, an attacker has a certain degree of freedom, but far less (unless one of your formats is a programmable environment, e.g. NSO's iOS exploit).
I, for one, would not be comfortable having content distributed as a blob that runs as a virtual machine that builds the software that decodes the content. Justine has made this practical. I can see the appeal. I just worry that we're trading in our jpgs for exes in a sandbox, and I'm not sure it's worth the risk.
Slow down! I’m still catching up to sectorlisp!
Cool article. My first thought is that this might be a fun way to implement some really simple executable obfuscation.
Should be trivial to add running off the end into native code, presumably after decompressing it. The process is: 0. start a decompressor; 1. decompress a decryptor; 2. call that to decrypt a composite payload; 3. run off the end into a better, native decompressor to decompress the real payload, and write it out; or 4. run into that.
I.e. it does not suffice to verify that the header is really a lambda calculus evaluator.