Comment by deerstalker

5 days ago

Very cool. Evolutionary Algorithms have kinda been out of the mainstream for a long time. They are good when you can do a lot of "black-box" function evaluations but kinda suck when your computational budget is limited. I wonder if coupling them with ML techniques could bring them back.

> I wonder if coupling them with ML techniques could bring them back.

EAs are effectively ML techniques. It's all a game of search.

The biggest problem I have seen with these algorithms is that they are designed with near-total disregard for the underlying hardware they will inevitably run on. Koza et al. were effectively playing around in abstraction Narnia when you consider how impractical their designs were (are) to execute on hardware.

An L1-resident hill climber running on a single Zen4+ thread would absolutely smoke every single technique from the 90s combined, simply because it can explore so much more of the search space per unit time. A small tweak to this actually shows up on human timescales and so you can make meaningful iterations. Being made to wait days/weeks each time you want to see how your idea plays out will quickly curtail the space of ideas.
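
To make that concrete, here is a minimal sketch of the kind of loop I mean (my own toy, not from any paper; the bit-counting objective is just a stand-in for a real fitness function). The entire working set, current solution plus candidate plus RNG state, is a couple hundred bytes, so the hot loop never has to leave L1:

```c
/* Sketch of a cache-resident hill climber. Everything the hot loop touches
 * (current solution, candidate, RNG state) fits in ~150 bytes. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N 64  /* 64-byte solution: one cache line */

static uint64_t rng_state = 0x9E3779B97F4A7C15ULL;

static uint64_t xorshift64s(void) {       /* tiny PRNG, no libc call per draw */
    rng_state ^= rng_state >> 12;
    rng_state ^= rng_state << 25;
    rng_state ^= rng_state >> 27;
    return rng_state * 0x2545F4914F6CDD1DULL;
}

/* Toy objective: count set bits. Swap in a real fitness of the same shape. */
static int fitness(const uint8_t *x) {
    int f = 0;
    for (int i = 0; i < N; i++)
        f += __builtin_popcount(x[i]);    /* GCC/Clang builtin */
    return f;
}

int main(void) {
    uint8_t cur[N] = {0}, cand[N];
    int best = fitness(cur);

    for (long iter = 0; iter < 100000000L; iter++) {     /* 1e8 candidate tweaks */
        memcpy(cand, cur, N);
        uint64_t r = xorshift64s();
        cand[r % N] ^= (uint8_t)(1u << ((r >> 8) % 8));  /* flip one random bit */
        int f = fitness(cand);
        if (f >= best) {                  /* accept improvements and sideways moves */
            best = f;
            memcpy(cur, cand, N);
        }
    }
    printf("best fitness: %d / %d\n", best, N * 8);
    return 0;
}
```

Nothing about the structure changes when the fitness function gets more interesting; the point is that each iteration costs nanoseconds instead of a memory stall, so you get through orders of magnitude more of the search space before you lose interest.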

  • > A small tweak to this actually shows up on human timescales and so you can make meaningful iterations.

    Please could you explain what you meant by this part? I'm trying and failing to understand it.

    • An L1-resident algorithm can outperform one that needs to talk to DRAM each iteration by 100x or more. In terms of wall clock time, this can mean the difference between minutes and days.

      Would you be willing to try a fleeting idea if it took 2 days to test? How about if we could bring that down to 15 minutes?

      1 reply →

The main use of evolutionary algorithms in machine learning currently is architecture search for neural networks. There's also work on pipeline design, finding the right way to string things together.

Neural networks already take a long time to train so throwing out gradient descent entirely for tuning weights doesn't scale great.

Genetic programming can solve classic control problems with just a few instructions (when it can solve them at all), which is cool.
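
For a sense of what "a few instructions" looks like, here is a toy linear-GP sketch (entirely my own illustration; the crude pendulum model, the register constants, and every parameter are made up for the example). Each candidate is a four-instruction register program whose output register is read as a torque, and fitness is how many steps a simple inverted pendulum stays near upright. Whether it finds a balancer within the budget depends on the knobs and on luck, which is the "when it can solve them" part:

```c
/* Toy linear GP for a pendulum-balancing task. All constants are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PROG_LEN 4
#define POP      200
#define GENS     100
#define NREG     8

typedef struct { int dst, op, a, b; } Instr;             /* r[dst] = r[a] <op> r[b] */
typedef struct { Instr code[PROG_LEN]; double fit; } Prog;

/* Run a tiny register program. r0 = angle, r1 = angular velocity,
 * r2/r3 = scratch, r4..r7 = constants; r3 is read out as the torque. */
static double run(const Instr *code, double theta, double omega) {
    double r[NREG] = { theta, omega, 0.0, 0.0, 0.5, 1.0, 2.0, 10.0 };
    for (int i = 0; i < PROG_LEN; i++) {
        double a = r[code[i].a], b = r[code[i].b];
        switch (code[i].op) {
            case 0: r[code[i].dst] = a + b; break;
            case 1: r[code[i].dst] = a - b; break;
            case 2: r[code[i].dst] = a * b; break;
        }
    }
    return r[3];
}

/* Fitness: how many steps a toy pendulum stays within 0.5 rad of upright. */
static double evaluate(const Prog *p) {
    double theta = 0.1, omega = 0.0, dt = 0.02;
    int steps = 0;
    for (; steps < 500; steps++) {
        double torque = run(p->code, theta, omega);
        if (torque >  2.0) torque =  2.0;
        if (torque < -2.0) torque = -2.0;
        omega += (10.0 * sin(theta) + torque) * dt;      /* g/L = 10, unit inertia */
        theta += omega * dt;
        if (fabs(theta) > 0.5) break;
    }
    return (double)steps;
}

static Instr random_instr(void) {
    Instr ins = { rand() % 4, rand() % 3, rand() % NREG, rand() % NREG };
    return ins;
}

static int tournament(const Prog *pop) {
    int best = rand() % POP;
    for (int k = 1; k < 3; k++) {
        int c = rand() % POP;
        if (pop[c].fit > pop[best].fit) best = c;
    }
    return best;
}

int main(void) {
    srand(1);
    static Prog pop[POP], nxt[POP];
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < PROG_LEN; j++) pop[i].code[j] = random_instr();

    for (int g = 0; g < GENS; g++) {
        double best = 0.0;
        for (int i = 0; i < POP; i++) {
            pop[i].fit = evaluate(&pop[i]);
            if (pop[i].fit > best) best = pop[i].fit;
        }
        if (g % 10 == 0) printf("gen %3d  best steps: %.0f / 500\n", g, best);
        for (int i = 0; i < POP; i++) {
            nxt[i] = pop[tournament(pop)];
            if (rand() % 100 < 30)          /* mutate: rewrite one instruction */
                nxt[i].code[rand() % PROG_LEN] = random_instr();
        }
        for (int i = 0; i < POP; i++) pop[i] = nxt[i];
    }
    return 0;
}
```

In this toy, a PD-style law such as torque = (0 - theta) * 10 - omega fits in three of the four instruction slots, which is the sense in which a handful of evolved instructions can encode a classic controller.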

They're not in the press a lot. They're probably still in production behind the scenes. I was reading about using them for scheduling not long ago. Btw, a toy one I wrote to show how they work got its best results with tournament selection and significant mutation (closer to 20%).
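
Roughly the shape of that kind of toy, reconstructed from the description (not the original code; the 0/1-knapsack objective, the population size, and reading "closer to 20%" as a per-offspring mutation probability are all my assumptions):

```c
/* Toy GA with tournament selection and a ~20% mutation probability,
 * applied to a small 0/1 knapsack. Every concrete number here is arbitrary. */
#include <stdio.h>
#include <stdlib.h>

#define ITEMS   32
#define POP     100
#define GENS    200
#define TOURNEY 3
#define MUT_PCT 20          /* chance (in %) that an offspring gets a bit flipped */

static int value[ITEMS], weight[ITEMS], capacity;

typedef struct { unsigned char gene[ITEMS]; int fit; } Indiv;

/* Encoding: gene[i] == 1 means item i is packed.
 * Fitness: total value, with a penalty when over capacity. */
static int fitness(const unsigned char *g) {
    int v = 0, w = 0;
    for (int i = 0; i < ITEMS; i++)
        if (g[i]) { v += value[i]; w += weight[i]; }
    return (w <= capacity) ? v : capacity - w;
}

static int tournament(const Indiv *pop) {
    int best = rand() % POP;
    for (int k = 1; k < TOURNEY; k++) {
        int c = rand() % POP;
        if (pop[c].fit > pop[best].fit) best = c;
    }
    return best;
}

int main(void) {
    srand(42);
    for (int i = 0; i < ITEMS; i++) {         /* random problem instance */
        value[i]  = 1 + rand() % 100;
        weight[i] = 1 + rand() % 50;
        capacity += weight[i];
    }
    capacity /= 2;                            /* roughly half the items fit */

    static Indiv pop[POP], nxt[POP];
    for (int i = 0; i < POP; i++) {
        for (int j = 0; j < ITEMS; j++) pop[i].gene[j] = rand() % 2;
        pop[i].fit = fitness(pop[i].gene);
    }

    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {
            const Indiv *a = &pop[tournament(pop)];
            const Indiv *b = &pop[tournament(pop)];
            int cut = rand() % ITEMS;                      /* single-point crossover */
            for (int j = 0; j < ITEMS; j++)
                nxt[i].gene[j] = (j < cut) ? a->gene[j] : b->gene[j];
            if (rand() % 100 < MUT_PCT)                    /* "significant" mutation */
                nxt[i].gene[rand() % ITEMS] ^= 1;
            nxt[i].fit = fitness(nxt[i].gene);
        }
        for (int i = 0; i < POP; i++) pop[i] = nxt[i];
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (pop[i].fit > pop[best].fit) best = i;
    printf("best packing value: %d (capacity %d)\n", pop[best].fit, capacity);
    return 0;
}
```

The knobs mentioned above (tournament size, mutation probability, population size) are exactly the #defines at the top; in practice most of the tuning effort goes there and into the fitness/penalty design.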

There are a lot of problems where you're searching among many possibilities in a space where each solution has lots of pieces. If you can encode the solution and a fitness function, a GA can give you an answer if you play with the knobs enough. You also might not need to be an expert in the domain, the way you would be to write heuristics by hand. If you do know some, they might still help.

Today we would typically solve a lot of the same types of problems with RL because it's more efficient.

In an EA, if a candidate fails we throw it away; in RL we learn from that experience.

RL gets harder when rewards are really sparse. OpenAI's evolution strategies work is a bit of a hybrid of the two.
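
The core of that ES idea is small enough to sketch: perturb the parameters with Gaussian noise, score each perturbation, and step in the reward-weighted direction of the noise, with no gradients through the policy at all. Below is my own minimal version of the update from OpenAI's "Evolution Strategies as a Scalable Alternative to Reinforcement Learning" paper, with a dummy reward (negative distance to a hidden target) standing in for an episode return:

```c
/* Minimal ES loop (no backprop through the policy): sample noise, score each
 * perturbed copy, move toward the reward-weighted noise. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define DIM   20            /* number of "policy" parameters */
#define POP   50            /* perturbations per iteration */
#define SIGMA 0.1           /* noise scale */
#define ALPHA 0.02          /* learning rate */

static double target[DIM];  /* hidden optimum the dummy reward points at */

static double gauss(void) {                     /* Box-Muller standard normal */
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

static double reward(const double *theta) {     /* higher is better */
    double d = 0.0;
    for (int i = 0; i < DIM; i++)
        d += (theta[i] - target[i]) * (theta[i] - target[i]);
    return -d;
}

int main(void) {
    srand(7);
    double theta[DIM] = {0};
    for (int i = 0; i < DIM; i++) target[i] = gauss();

    static double eps[POP][DIM];
    double r[POP];

    for (int iter = 0; iter < 300; iter++) {
        double mean = 0.0;
        for (int k = 0; k < POP; k++) {          /* evaluate perturbed copies */
            double cand[DIM];
            for (int i = 0; i < DIM; i++) {
                eps[k][i] = gauss();
                cand[i] = theta[i] + SIGMA * eps[k][i];
            }
            r[k] = reward(cand);
            mean += r[k] / POP;
        }
        double sd = 1e-8;                        /* normalize the returns */
        for (int k = 0; k < POP; k++) sd += (r[k] - mean) * (r[k] - mean) / POP;
        sd = sqrt(sd);
        for (int i = 0; i < DIM; i++) {          /* reward-weighted noise = gradient estimate */
            double g = 0.0;
            for (int k = 0; k < POP; k++) g += (r[k] - mean) / sd * eps[k][i];
            theta[i] += ALPHA / (POP * SIGMA) * g;
        }
        if (iter % 50 == 0)
            printf("iter %3d  reward %.4f\n", iter, reward(theta));
    }
    return 0;
}
```

The hybrid flavor comes from the fact that this is still a black-box population method needing only whole-episode returns (so sparse rewards are fine), but the update behaves like an estimated policy gradient rather than straight selection.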

Much more robust than almost all modern ML algorithms, which, let's be real, aren't exactly applicable to anything outside recommendation systems and 2D image processing.

  • I can't tell if this is a joke

    • Genetic algorithms' weaknesses largely boil down to getting stuck in local extrema and to premature convergence, which can usually be mitigated by tuning parameters like the mutation probability and the offspring/parent ratio, or by trying different genetic operators, etc.

      Meanwhile you have a whole separate discipline [1] for the potential weaknesses of machine learning algorithms. Of course they may win when it comes to interdisciplinary ubiquity in CS, but any algorithm that relies on data assimilation and has little analytic formulation will suffer in robustness.

      [1] https://en.wikipedia.org/wiki/Adversarial_machine_learning

      2 replies →