People here are laughing, of course, but I do think there is a deeper truth behind this:
> A Docker image is a piece of executable code that produces some output given some input.
The ideas behind containerization and sandboxing are rather closely related to functional programming and controlling side effects. If binaries only ever read from stdin and wrote to stdout, we wouldn't need sandboxes – they would be pure functions.
In the real world, though, binaries usually have side effects, and I really wish we could control those in a more fine-grained manner. Ideally, binaries couldn't just do anything by default but would actually have to declare all their side effects (e.g. accessing env variables, config, state, cache, logs, DBUS/Xserver/Wayland sockets, user data, shared libraries, system state, …), so that I could easily put them in a sandbox tailored to them.
Conversely, I'm waiting for the day when algebraic effects are so common in programming languages that I can safely execute an untrusted JavaScript function because I have tight control over what side effects it can trigger.
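Just to make the idea concrete, here's a toy sketch of what "declared effects" could feel like – this is only dynamic binding standing in for a real effect system, and the effect names (:print, :read-file) are made up for the example:

    ;; Untrusted code can only touch the outside world through PERFORM;
    ;; the caller decides which effects the handler will actually honor.
    (defvar *effect-handler*
      (lambda (effect &rest args)
        (declare (ignore args))
        (error "Effect ~s not permitted" effect)))

    (defun perform (effect &rest args)
      (apply *effect-handler* effect args))

    (defun run-untrusted (thunk &key (allowed '(:print)))
      (let ((*effect-handler*
              (lambda (effect &rest args)
                (unless (member effect allowed)
                  (error "Effect ~s not permitted" effect))
                (case effect
                  (:print (format t "~{~a~^ ~}~%" args))))))
        (funcall thunk)))

    ;; (run-untrusted (lambda () (perform :print "hello")))           ; prints hello
    ;; (run-untrusted (lambda () (perform :read-file "/etc/passwd"))) ; signals an error

Real algebraic effects would also let the handler resume the computation with a value, but the sandboxing intuition is the same.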
The best kind of absurd experiment, pushing the limits of technology and reason, just to see if it can be done. The adventurous spirit of "What if?" and "Why not!" I love when such an idea is implemented seriously, like having a CI action to test a factorial function. I shudder at its monstrous beauty.
Is there a spark of practical potential? It's intriguing to imagine how a Docker-like container could be a language primitive, as easy to spin up as a new thread or worker. Not sure what advantage that'd bring, or any possible use case. It reminds me of..
Thinking Machines Technical Report PL87-6. Connection Machine Lisp: A Dialect of Common Lisp for Data Parallel Programming. https://archive.org/details/tmc-technical-report-pl-87-6-con...
> 2.1 Xappings, Xets, and Xectors
> All parallelism in Connection Machine Lisp is organized around a data structure known as the xapping (pronounced “zapping,” and derived from “mapping”). Xappings are data objects similar in structure to arrays or hash tables, but they have one essential characteristic: operations on the entries of xappings may be performed in parallel.
This is the most ridiculous thing I've ever seen and I love it.
So scalable! If you need to execute more functions just scale horizontally!
It does allow for a pretty clean parallel map builtin...
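A purely hypothetical sketch of what that could look like in Common Lisp with UIOP – the "lisp-eval" image and its one-form-in, printed-value-out protocol are made up here, not something the project ships:

    ;; Launch one container per list element, then read each printed result back.
    (defun container-pmap (fn-name list)
      (let ((procs (loop for x in list
                         collect (uiop:launch-program
                                  (list "docker" "run" "--rm" "lisp-eval"
                                        (format nil "(~(~a~) ~s)" fn-name x))
                                  :output :stream))))
        ;; All containers run concurrently; collect the results in order.
        (loop for p in procs
              collect (prog1 (read (uiop:process-info-output p))
                        (uiop:wait-process p)))))

    ;; (container-pmap 'factorial '(1 2 3 4)) ; => (1 2 6 24), four containers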
Just my thoughts. Love that people do things just because they want to. Love it.
Not as ridiculous as ephemeral VMs for the same purpose.
Well, sure, that uses a large number of processing cycles for each small operation. But asking a frontier LLM to evaluate a Lisp expression is more or less on the same scale (an interesting empirical question whether it's more or less). And if we count the operations at the brain-neuron level it would take to evaluate one mentally...
I'm sure in the future there will be technology to evaluate simple Lisp expressions in only milliseconds of time, spending mere joules of energy.
Maybe someday they'll have special hardware to efficiently run lisp expressions. A Lisp Processing Unit!
Lambda, The Ultimate Container
My first thought was a simple wrapper macro, but this is way more impressive and way more fun.
Oh no. I both hate and love this at the same time.
This is a nightmare XD
I get that it's a shitpost, but if you want to take this at all seriously: a Linux container is just a Linux process in its own namespaces, separate from the namespaces of its parent, or at least separate from PID 1's. If you're not actually doing anything that requires OCI base images and layering – as in, like any other sane program, all your functions have the same dependencies – then spawn everything in the same mount namespace and just use the host. Then you don't need to mount the Docker socket recursively; you don't need Docker or a socket at all. This isn't really as crazy as developers think it is, because they think containers on Linux are just Docker. You can make system calls from within the Lisp runtime itself, including unshare, and bam, you've got a container per function call without needing to shell out and accept all the overhead of a separate container runtime.
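A minimal sketch of that last point, assuming SBCL on Linux (sb-alien and sb-posix; the flag values are from <linux/sched.h>):

    ;; Fork, move the child into fresh namespaces with unshare(2), and run the
    ;; function there. No Docker daemon, no socket, no image.
    (sb-alien:define-alien-routine ("unshare" %unshare) sb-alien:int
      (flags sb-alien:int))

    (defconstant +clone-newuser+ #x10000000) ; lets an unprivileged process unshare the rest
    (defconstant +clone-newns+   #x00020000) ; mount namespace
    (defconstant +clone-newnet+  #x40000000) ; network namespace

    (defun call-in-namespaces (fn &rest args)
      "Run FN in a child process that sits in its own user/mount/net namespaces."
      (let ((pid (sb-posix:fork)))
        (if (zerop pid)
            (progn
              (%unshare (logior +clone-newuser+ +clone-newns+ +clone-newnet+))
              (apply fn args)
              (sb-ext:exit :code 0))
            (sb-posix:waitpid pid 0))))

    ;; (call-in-namespaces #'print "hello from a namespaced call")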
Also, why are the image builds hard-coded for amd64? Are you really doing anything here that can't be done on arm?
I'm impressed GitHub managed to handle this beast:
https://github.com/a11ce/docker-lisp/actions/runs/2216831271...
500+ container invocations to compute factorial(3)
I don't wanna know how many containers it spins up for fibonacci(3)