
Comment by prescriptivist

19 hours ago

I think fibers (or, rather, the Async library) in Ruby tend to be fetishized by junior Rails engineers who don't realize that higher-level thread coordination issues (connection pools, etc.) apply equally to fibers. That said, this could be a pretty good use case for fibers -- the code base I use every day has ~230 gems, and if you can peel off the actual IO-bound part of installing all of those into non-blocking calls, you would see a meaningful performance difference vs spinning up threads and context switching between them.
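
For the IO-bound piece, here's a minimal sketch of what that non-blocking version could look like with the Async gem on Ruby 3.x, where the fiber scheduler lets plain Net::HTTP yield instead of blocking; the gem names, versions, and output paths are just placeholders:

    require 'async'
    require 'net/http'
    require 'uri'

    # Placeholder list of gem archives; a real installer would take these
    # from the resolved lockfile.
    GEMS = [['rake', '13.2.1'], ['rack', '3.1.7'], ['json', '2.7.2']]

    Async do |parent|
      GEMS.map do |name, version|
        # Each download runs in its own fiber; blocking reads inside
        # Net::HTTP yield to the reactor instead of parking a thread.
        parent.async do
          uri = URI("https://rubygems.org/downloads/#{name}-#{version}.gem")
          File.binwrite("#{name}-#{version}.gem", Net::HTTP.get(uri))
        end
      end.each(&:wait)
    end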

What I would do to really squeeze the rest out in pure Ruby (bear in mind I’ve been away about a decade so there _might be_ new bits but nothing meaningful as far as I know):

- Use a cheaper-to-parse index format (the gists I wrote years ago cover this: https://gist.github.com/raggi/4957402)
- Use threads for the initial archive downloads (this is just IO, and you want to reuse some caches like the index)
- Use a few forks for the unpacking and post-install steps (because these have unpredictable concurrency behaviors); rough sketch below
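
A rough sketch of that last step, fanning the already-downloaded archives out to a handful of forked workers. The vendor/cache path is an assumption and unpack_and_post_install is a hypothetical helper standing in for the real extraction and extension-build work:

    gem_files = Dir.glob('vendor/cache/*.gem')  # assumed location of downloaded archives
    workers   = 4

    # Deal the archives round-robin into one batch per worker, then fork a
    # process per batch so post-install hooks and extension builds can't
    # trample each other's global state.
    gem_files.group_by.with_index { |_, i| i % workers }.each_value do |batch|
      fork do
        batch.each { |path| unpack_and_post_install(path) }  # hypothetical helper
      end
    end

    Process.waitall  # block until every forked worker has exited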

  • > there _might be_ new bits but nothing meaningful as far as I know

    If you didn't need backwards compatibility with older Rubies, you could use Ractors in lieu of forks and skip the IPC between the two processes in favor of cleaner communication channels. I can peg all the cores on my machine with a simple Ractor pool doing simple computation, which feels like a miracle to a Ruby old head. Bundler could get away with creating its own Ractor-safe installer pool, which would be cool, as it'd be the first large-scale use of Ractors that I know of.
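
    A minimal sketch of the kind of Ractor pool I mean, using the pipe-and-workers pattern from the Ractor docs; the square-root loop is just stand-in CPU work (Ruby 3.0+, and Ractors still print an experimental warning):

        # One "pipe" Ractor fans incoming jobs out to whichever worker asks next.
        pipe = Ractor.new do
          loop { Ractor.yield Ractor.receive }
        end

        workers = 4.times.map do
          Ractor.new(pipe) do |queue|
            done = 0
            while (n = queue.take) != :stop
              Math.sqrt(n)   # stand-in for real CPU-bound work
              done += 1
            end
            done             # the block's return value is what .take sees below
          end
        end

        100_000.times { |n| pipe.send(n) }
        workers.size.times { pipe.send(:stop) }   # one sentinel per worker

        workers.each { |w| puts "worker finished #{w.take} jobs" }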