
Comment by ptx

8 years ago

> introducing coprocesses in the shell

I did this with bash and Python a few years ago when I learned about the "coproc" feature (which, by the way, only supports a single coprocess per bash process, unless I misunderstood it).
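For reference, the basic pattern looks something like this (a minimal sketch; the worker here is a trivial shell loop standing in for the real Python process):

```shell
#!/usr/bin/env bash
# Start a named coprocess; bash exposes its stdout as ${WORKER[0]}
# and its stdin as ${WORKER[1]}.
coproc WORKER { while read -r line; do echo "got: $line"; done; }

# Write a request to the coprocess and read its reply.
echo "hello" >&"${WORKER[1]}"
read -r reply <&"${WORKER[0]}"
echo "$reply"   # → got: hello
```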

But it turns out I tend to open new terminal windows a lot, which meant the coprocess needed to be relaunched all the time anyway, so it wasn't very useful. Even if I started it lazily, to avoid slowing down every shell startup, most of my Python invocations tended to be from a new shell, so there was no real benefit.

Maybe if you have a pool of worker processes that's not tied to any individual shell process, and connect to them via a Unix-domain socket or something...

Hm yeah I was hoping to do it in a way that's compatible with unmodified bash, but maybe it will only be compatible with Oil to start.

Basically I think there should be a "coprocess protocol" that makes persistent processes look like batch processes, roughly analogous to CGI or FastCGI.
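One possible framing for such a protocol (entirely made up here, not an actual Oil spec): the client sends a command line followed by a blank line, and the server replies with the output followed by an "EXIT <status>" trailer, so each round trip looks like one batch invocation:

```shell
#!/usr/bin/env bash
# Hypothetical coprocess-protocol sketch: a persistent server that
# answers framed requests so it looks like a batch process.
coproc SERVER {
  while read -r cmd; do
    read -r _blank                  # consume the blank-line terminator
    out=$("$cmd" 2>/dev/null); st=$?
    echo "$out"                     # response body
    echo "EXIT $st"                 # response trailer with exit status
  done
}

# "Batch-style" invocation of the persistent process:
printf 'true\n\n' >&"${SERVER[1]}"
read -r out    <&"${SERVER[0]}"
read -r status <&"${SERVER[0]}"
echo "$status"   # → EXIT 0
```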

I thought that could be built on top of bash, but perhaps that's not possible.

I'll need to play with it a bit more. I think in bash you can have multiple named coprocesses, with their descriptors stored in arrays like ${COPROC[@]} or ${MY_COPROC[@]}. But there are definitely issues around process lifetimes, including the ones you point out. Thanks for the feedback.

  • I looked into the issue with multiple coprocesses in bash again to make sure. While you would naturally think you could simply give them different names, it's unfortunately not supported:

    https://lists.gnu.org/archive/html/bug-bash/2011-04/msg00059...

    Bash will print a warning when you start the second one, and the man page explicitly says at the bottom: "There may be only one active coprocess at a time."
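For anyone who wants to check this themselves, a quick sketch (the exact warning text may vary between bash versions):

```shell
#!/usr/bin/env bash
coproc FIRST { cat; }
# Starting a second coprocess prints something like:
#   bash: warning: execute_coproc: coproc [<pid>:FIRST] still exists
coproc SECOND { cat; }

# The second coprocess works, but bash only tracks one active
# coprocess, so FIRST is effectively orphaned.
echo "ping" >&"${SECOND[1]}"
read -r reply <&"${SECOND[0]}"
echo "$reply"   # → ping
```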