Comment by Tenoke

4 years ago

I've used Ray for about a year (typically for thousands of ML tasks, spread across ~48-120 cores simultaneously) and it's a pleasure to use, at least with the basic API. Admittedly, I ran into problems when trying some of the more advanced approaches, but I didn't really need them, and I can definitely recommend it since the performance is great.
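For context, the "basic API" referred to here is Ray's remote-task pattern: decorate a function with `@ray.remote`, call `.remote()` to schedule it, and `ray.get()` the results. A minimal sketch of that fan-out, with `predict` and the batch data as placeholder stand-ins rather than anything from the original comment:

```python
import ray

ray.init()  # starts a local cluster; pass an address to join an existing one

# Hypothetical inference function, just to illustrate the fan-out pattern.
@ray.remote
def predict(batch):
    # ... load or reuse a model and run inference on one batch ...
    return [x * 2 for x in batch]  # placeholder for real inference

batches = [list(range(i, i + 10)) for i in range(0, 1000, 10)]
futures = [predict.remote(b) for b in batches]  # tasks scheduled across all cores
results = ray.get(futures)                      # blocks until every task finishes
```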

Just out of curiosity, what kind of work requires thousands of ML tasks? (Assuming you're talking about training and not inference?)

  • The thousands of tasks are inference, but I also use Ray to train/update a double-digit number of models simultaneously (~1 per user).
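The per-user training/update pattern can be expressed with the same basic API. A rough sketch under assumed names (`update_model` and the user data here are illustrative, not the commenter's code):

```python
import ray

ray.init()

# Hypothetical per-user update; in practice this would load the user's model,
# fit it on their latest data, and persist the result.
@ray.remote
def update_model(user_id, user_data):
    return user_id, sum(user_data) / len(user_data)  # placeholder "model"

user_data = {uid: [uid, uid + 1, uid + 2] for uid in range(40)}  # tens of users/models
refs = [update_model.remote(uid, data) for uid, data in user_data.items()]
updated = dict(ray.get(refs))  # all per-user updates run concurrently
```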