Yes, although the current syntax is cumbersome - I am thinking how to improve it.
The first part - feeding a Scheme expression's output into a shell command - is easy enough with the current syntax.
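Presumably something along these lines - a sketch only. (sh-run job) and the {...} shell-job syntax are schemesh's; splicing a Scheme expression into a job this way is an assumption, and (lisp-expr) is a placeholder:

```scheme
;; Hypothetical: the textual output of (lisp-expr) becomes the
;; pipeline's stdin. The exact splicing syntax shown here is a guess.
(sh-run {echo (lisp-expr) | wc -c})
```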
The second part, i.e. feeding a command's output into a Scheme function, is more cumbersome.
If you want to feed a command's output into a Scheme function, the current solution requires (sh-run/string job), which runs the job and returns its standard output as a string.
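For example, something along these lines - (sh-run/string job) is from the text above; the {...} shell-job syntax and the consumer function are illustrative:

```scheme
;; (sh-run/string job) runs the job and returns its stdout as a
;; Scheme string, which any Scheme function can then consume.
;; string-upcase is just an example consumer.
(string-upcase (sh-run/string {ls -l}))
```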
If instead you have a (lisp-expr2 ...) that reads from an integer file descriptor passed as an argument - not from a Scheme I/O port - you can skip the intermediate string, but the glue code is more involved.
[UPDATE] There is also a function (sh-redirect job redirection-args ...) - it can add arbitrary redirections to a job, including pipes, but it's quite low-level and verbose to use
I found another, possibly simpler solution.
The functions (sh-fd-stdin) (sh-fd-stdout) and (sh-fd-stderr) return the integer file descriptors that a schemesh builtin should use to perform I/O.
With them, the Scheme code can ask schemesh for the right file descriptors and read from or write to them directly.
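Presumably something along these lines - a sketch; (lisp-expr2 ...) is the fd-consuming code from above, and how the closure gets registered as a builtin is not shown:

```scheme
;; Sketch: inside a schemesh builtin, ask which fds to use and pass
;; them straight to Scheme code that expects integer descriptors.
;; (sh-fd-stdin) and (sh-fd-stdout) are from the text above;
;; everything else here is hypothetical.
(lambda args
  (lisp-expr2 (sh-fd-stdin) (sh-fd-stdout)))
```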
It should work :)
Could this be abstracted enough with the right macros to make a subset of useful lisp commands play well with the shell? It could be a powerful way to extend the shell for interactive use.
I'd been thinking about a Lisp/Scheme-like frankenshell for a while. A REPL language (and especially a shell) should focus on ergonomics first - we're commanding the computer to do stuff here and now, not (usually) writing elaborate programs.
In my opinion, the outermost parens (even when invoking Lisp functions), as well as all the elaborate glue function names, kinda kill it for interactive use. If you think about it, this leaks implementation details into the syntax and makes for poor idioms. Not very Lispy.
My idea is something like:
>>> + 1 2 3
6
(And you would never know if it's /bin/+ or (define (+ ...)))
>>> seq 1 10 | sum
Let's assume seq is an executable, and sum is a Scheme function. Each "token" seq produces (delimited by whitespace by default; maybe you could override the rules for a local context - parameterize?) is buffered by the shell, and at the end the whole thing is turned into a list of strings. The result is passed to sum as a parameter. (Of course this would break if sum expects a list of integers, but it could also parse the strings as it goes.)
The other way around would also work. If seq produces a list of integers, it's turned into a list of strings and fed into sum as input lines.
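The proposed bridging could be sketched in plain Scheme (no schemesh API): string-tokenize is from SRFI-13, and run/capture is a hypothetical helper that buffers a command's stdout into a string.

```scheme
;; Whitespace-delimited tokens from the command's buffered output
;; become a list of strings, applied to the Scheme function.
(define (pipe-into scheme-fn output-string)
  (apply scheme-fn (string-tokenize output-string)))

;; sum parses its string arguments as it goes, as suggested above.
(define (sum . args)
  (fold-left + 0 (map string->number args)))

;; (pipe-into sum (run/capture "seq 1 10"))  ; hypothetically => 55
```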
The shell could scan $PATH and create a simple function wrapper for each executable.
Now to avoid unnecessary buffering or type conversion, a typed variant of Scheme could be used, possibly with multiple dispatch (per argument/return type). E.g. if the next function in the pipeline accepts an input port or a lazy string iterator, the preceding shell command wrapper could return an output port.
The tricky case with syntax is what to do with tokens like "-9", "3.14", etc. The lexer could store both the parsed value (if it is valid), and the original string. Depending on the context, it could be resolved to either, but retain strong (dynamic) typing when interacting with a Scheme function, so "3.14.15" wouldn't work if a typed function only accepts numbers.
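The dual representation could look like this in R6RS Scheme (a sketch, not schemesh's actual lexer):

```scheme
;; A token keeps the original string and, when it parses as a
;; number, the parsed value; otherwise parsed is #f.
(define-record-type token (fields raw parsed))

(define (lex word)
  (make-token word (string->number word)))

;; (token-parsed (lex "-9"))      => -9
;; (token-parsed (lex "3.14"))    => 3.14
;; (token-parsed (lex "3.14.15")) => #f - a number-typed function
;;                                       would reject this token
```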
Reminds me of Tcl a bit.