Comment by bob1029
5 days ago
> I wonder if coupling them with ML techniques could bring them back.
EAs are effectively ML techniques. It's all a game of search.
The biggest problem I have seen with these algorithms is that they are designed with almost no regard for the hardware they will inevitably run on. Koza et al. were effectively playing around in abstraction Narnia when you consider how impractical their designs were (and still are) to execute on real machines.
An L1-resident hill climber running on a single Zen4+ thread would absolutely smoke every single technique from the 90s combined, simply because it can explore so much more of the search space per unit time. A small tweak to this actually shows up on human timescales and so you can make meaningful iterations. Being made to wait days/weeks each time you want to see how your idea plays out will quickly curtail the space of ideas.
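To make "L1-resident" concrete, here is a minimal sketch in C. The 256-bit genome, the popcount fitness, and the single-bit-flip mutation are all placeholder choices rather than anything from Koza's work; the point is that the entire inner loop touches a few dozen bytes that never leave cache.

```c
/* Toy L1-resident hill climber: the whole state is a few dozen bytes,
 * so each iteration is a handful of ALU ops with no DRAM traffic.
 * Genome size, fitness, and mutation are placeholders. */
#include <stdint.h>
#include <stdio.h>

#define WORDS 4                      /* 4 x 64-bit words = 256-bit genome */

static uint64_t rng_state = 0x9E3779B97F4A7C15ULL;

static uint64_t xorshift64(void) {   /* tiny PRNG, no memory traffic */
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

static int fitness(const uint64_t g[WORDS]) {  /* toy objective: count set bits */
    int f = 0;
    for (int i = 0; i < WORDS; i++)
        f += __builtin_popcountll(g[i]);
    return f;
}

int main(void) {
    uint64_t best[WORDS] = {0};
    int best_f = fitness(best);

    for (long iter = 0; iter < 100000000L; iter++) {
        /* Mutate: flip one random bit, keep the change only if it helps. */
        int w = (int)(xorshift64() % WORDS);
        int b = (int)(xorshift64() % 64);
        best[w] ^= 1ULL << b;
        int f = fitness(best);
        if (f >= best_f)
            best_f = f;              /* accept */
        else
            best[w] ^= 1ULL << b;    /* revert */
    }
    printf("best fitness: %d / %d\n", best_f, WORDS * 64);
    return 0;
}
```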
> A small tweak to this actually shows up on human timescales and so you can make meaningful iterations.
Please could you explain what you meant by this part? I'm trying and failing to understand it.
An L1-resident algorithm can outperform one that has to touch DRAM on every iteration by 100x or more. In terms of wall-clock time, that can be the difference between minutes and days.
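If you want to see the gap on your own machine, here is a crude pointer-chase demo. The buffer sizes and step count are arbitrary assumptions and the exact ratio varies by machine, but the DRAM-sized case is typically tens of times slower per step because every hop is a cache miss.

```c
/* Chase a random cycle of indices through a buffer that fits in L1
 * (32 KiB here), then through one that spills to DRAM (256 MiB). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint64_t s = 0x2545F4914F6CDD1DULL;
static uint64_t rng(void) {          /* small PRNG, avoids RAND_MAX issues */
    s ^= s << 13; s ^= s >> 7; s ^= s << 17;
    return s;
}

static double chase_ns(size_t n_elems, long steps) {
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) exit(1);
    /* Build a single random cycle (Sattolo's algorithm) so the
     * hardware prefetcher can't guess the next address. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)(rng() % i);
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile size_t p = 0;
    for (long k = 0; k < steps; k++) p = next[p];   /* serialized loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int main(void) {
    const long steps = 50L * 1000 * 1000;
    double small = chase_ns(4096, steps);              /* 32 KiB: L1-resident */
    double big   = chase_ns(32UL * 1024 * 1024, steps); /* 256 MiB: DRAM-bound */
    printf("L1-sized buffer:   %.1f ns/step\n", small / steps);
    printf("DRAM-sized buffer: %.1f ns/step\n", big / steps);
    return 0;
}
```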
Would you be willing to try a fleeting idea if it took 2 days to test? How about if we could bring that down to 15 minutes?
Ok, I think I get you now. That sentence was meant in the context of the previous one, about an algorithm whose working set never needs to leave L1.
I thought you were referring to some real-world (I mean physical-world) example of evolution finding efficiency in some manner.