Comment by schmidtleonard
7 months ago
Remember that GPUs have cache hierarchies, and matching block sizes to hit those caches optimally is a big win that you often don't get by default, simply because the number of important kernels, times the number of important GPUs, times the effort to properly tune one is more than people are willing to do for others for free in open source. Not to mention kernel fusion, and API boundaries that socially force suboptimal choices for the sake of clarity and simplicity.
It's a very impressive result: not magic, but also not cheating!
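To make the block-size point concrete, here is a toy CPU-side analogue of that tuning (a real GPU kernel tunes tile sizes against shared memory and L2 instead, but the search loop is the same idea); the candidate sizes are arbitrary:

```python
import time
import numpy as np

def blocked_matmul(a, b, block):
    """Tiled matrix multiply: process the matrices block-by-block so each
    tile of A and B gets reused while it is still hot in cache."""
    n, k = a.shape
    _, m = b.shape
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                c[i:i+block, j:j+block] += a[i:i+block, p:p+block] @ b[p:p+block, j:j+block]
    return c

# Brute-force "autotune": time each candidate tile size and keep the fastest.
n = 512
a, b = np.random.rand(n, n), np.random.rand(n, n)
timings = {}
for block in (32, 64, 128, 256):
    t0 = time.perf_counter()
    blocked_matmul(a, b, block)
    timings[block] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print(f"fastest tile size: {best} ({timings[best]:.3f}s)")
```

The winner depends on the cache sizes of whatever machine runs it, which is exactly why the "right" block size is rarely the default.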
100%. LLMs are extremely useful for doing obvious but repetitive optimizations that a human might miss.
What it essentially does is run a debugging/optimization loop: change one thing, evaluate, compare the results, and repeat.
Previously we needed a human in the loop to make the change. Of course we have automated hyperparameter tuning (and similar things), but that only works in a rigidly defined search space.
Will we see LLMs generating new improved LLM architectures, now fully incomprehensible to humans?
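The loop itself is tiny; the leverage is entirely in the proposer and the evaluator. A minimal sketch, where propose_edit is a hypothetical stand-in for the LLM call and evaluate for the test-plus-benchmark harness:

```python
import random

def optimize(source, evaluate, propose_edit, iterations=100):
    """Toy propose/evaluate/select loop: ask the model for one candidate
    edit, score it, and keep it only if the score improves."""
    best, best_score = source, evaluate(source)
    for _ in range(iterations):
        candidate = propose_edit(best)   # stands in for an LLM call
        score = evaluate(candidate)      # e.g. run tests, then benchmark
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy usage: "programs" are just numbers, the evaluator prefers values near 42,
# and the "LLM" proposes small random perturbations.
toy_eval = lambda x: -abs(x - 42)
toy_propose = lambda x: x + random.uniform(-5, 5)
print(optimize(0.0, toy_eval, toy_propose))
```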
If I understood correctly, isn't this software only as useful as the LLM powering it? It sounds very useful, but either I'm missing something or it essentially puts a "please optimize this code" prompt into a loop with a validator. Useful, but maybe not as revolutionary as the underlying LLM tech itself.
Edit: the white paper says this: "AlphaEvolve employs an ensemble of large language models. Specifically, we utilize a combination of Gemini 2.0 Flash and Gemini 2.0 Pro. This ensemble approach allows us to balance computational throughput with the quality of generated solutions. Gemini 2.0 Flash, with its lower latency, enables a higher rate of candidate generation, increasing the number of ideas explored per unit of time. Concurrently, Gemini 2.0 Pro, possessing greater capabilities, provides occasional, higher-quality suggestions that can significantly advance the evolutionary search and potentially lead to breakthroughs. This strategic mix optimizes the overall discovery process by maximizing the volume of evaluated ideas while retaining the potential for substantial improvements driven by the more powerful model."
So I stand by my earlier opinion. Furthermore, the paper doesn't present it as anything extraordinary, as some people here claim, but as an evolution of existing software, FunSearch.
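For what it's worth, the ensemble policy that quote describes can be sketched as little more than a weighted coin flip per generation; the model names and weights below are made up for illustration:

```python
import random

MODELS  = ["fast-model", "strong-model"]   # hypothetical Flash/Pro stand-ins
WEIGHTS = [0.8, 0.2]                       # mostly cheap, high-throughput generations

def pick_model():
    """Choose which model generates the next candidate edit."""
    return random.choices(MODELS, weights=WEIGHTS, k=1)[0]

print(pick_model())
```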
The “fully incomprehensible to humans” aspect of this potential future state interests me as a software person.
The last 50 years of software evolution have been driven by a need to scale human comprehension of larger and more integrated codebases. If we rely less and less on humans to understand our code, source code's forward-progress flywheel will slow down and bring us closer to (as you suggest) incomprehensibility.
Not only did we scale the breadth of codebases; the flywheel also built layers and layers of abstraction over time (have you seen the code sample in this article??), fostering a growing market of professional developers and their career progressions. If most code becomes incomprehensible, it'll be the code closer to “the bottom”: a thin wrapper of API on top of an expanding mass of throwaway whatever-language AlphaAlgo creates.
If we don’t wrangle this, it will destroy a profession and leave us with trillions of LoC that only people with GPUs can understand. Which may be another profession I suppose.
One can have obvious but repetitive optimizations with symbolic programming [1].
[1] https://arxiv.org/abs/1012.1802
It's strange that the AlphaEvolve authors don't compare their work to what is achievable with equality saturation. An implementation of equality saturation can handle interesting integrals with very simple rules [2].
[2] https://github.com/alt-romes/hegg/blob/master/test/Sym.hs#L3...
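To make that concrete, here is a naive Python sketch of the rewrite-rule idea: exhaustively apply algebraic rules to enumerate equivalent expressions, then keep the cheapest. Real equality saturation (as in hegg [2]) does this exploration compactly with an e-graph and much richer rule sets; the rules and cost function below are made up for illustration:

```python
# Expressions are nested tuples: ("*", ("+", "x", 0), 1) means (x + 0) * 1.
RULES = [
    (("+", "?a", 0), "?a"),   # a + 0 -> a
    (("*", "?a", 1), "?a"),   # a * 1 -> a
    (("*", "?a", 0), 0),      # a * 0 -> 0
]

def match(pattern, expr, env):
    """Return variable bindings if `pattern` matches `expr`, else None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        env = dict(env)
        env.setdefault(pattern, expr)
        return env if env[pattern] == expr else None
    if isinstance(pattern, tuple) and isinstance(expr, tuple) and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            env = match(p, e, env)
            if env is None:
                return None
        return env
    return env if pattern == expr else None

def subst(template, env):
    """Instantiate a rule's right-hand side with the matched bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return env[template]
    if isinstance(template, tuple):
        return tuple(subst(t, env) for t in template)
    return template

def rewrites(expr):
    """Yield every expression reachable by one rule application anywhere in the term."""
    for lhs, rhs in RULES:
        env = match(lhs, expr, {})
        if env is not None:
            yield subst(rhs, env)
    if isinstance(expr, tuple):
        for i, sub in enumerate(expr):
            for new_sub in rewrites(sub):
                yield expr[:i] + (new_sub,) + expr[i + 1:]

def simplify(expr, limit=1000):
    """Naive saturation: apply rules until no new forms appear, then return
    the smallest equivalent expression found."""
    seen, frontier = {expr}, [expr]
    while frontier and len(seen) < limit:
        new = []
        for e in frontier:
            for r in rewrites(e):
                if r not in seen:
                    seen.add(r)
                    new.append(r)
        frontier = new
    size = lambda e: 1 + sum(size(s) for s in e[1:]) if isinstance(e, tuple) else 1
    return min(seen, key=size)

print(simplify(("*", ("+", "x", 0), 1)))   # -> x
```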
Absolutely. I'm not arguing that the results are unreasonable to the point of illegitimacy; I'm just curious to see when they perform as well as reported, how well the presented solutions generalize to different test cases, and whether it's routing to different solutions based on certain criteria, etc.
Hey, do you have any suggestions for resources to learn more about this kind of custom optimisation? It sounds interesting, but I'm not sure where to start.
https://ppc.cs.aalto.fi/ covers some of this (it overlaps with the topics the person you responded to mentioned, though it doesn't cover all of them, and it includes some others).