
Comment by moritonal

7 months ago

For the people awaiting the singularity, lines like this read as if lifted straight from science fiction:

> By suggesting modifications in the standard language of chip designers, AlphaEvolve promotes a collaborative approach between AI and hardware engineers to accelerate the design of future specialized chips.

Here is the relevant bit from their whitepaper (https://storage.googleapis.com/deepmind-media/DeepMind.com/B...):

> AlphaEvolve was able to find a simple code rewrite (within an arithmetic unit within the matmul unit) that removed unnecessary bits, a change validated by TPU designers for correctness.

I speculate this could refer to the upper bits in the output of a MAC circuit being unused in a downstream connection (perhaps to an accumulation register). It could also involve unused bits in a specialized MAC circuit for a non-standard datatype.
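To make that speculation concrete, here is a toy Python sketch of the scenario. All widths here are hypothetical (the whitepaper does not say which unit or datatype was involved): an 8x8 multiplier produces a 16-bit product, but if the downstream accumulator only ever reads a narrower slice, the logic computing the upper bits is dead and can be truncated in the source RTL.

```python
# Hypothetical widths for illustration only; the actual TPU datapath is
# not described in the whitepaper.
PROD_BITS = 16   # full 8x8 multiplier output width (assumed)
USED_BITS = 12   # bits the downstream accumulator actually reads (assumed)

def mac_full(a, b, acc):
    """Full-width multiply-accumulate."""
    return (acc + a * b) & ((1 << PROD_BITS) - 1)

def mac_truncated(a, b, acc):
    """Same MAC with the product truncated to the consumed width --
    the kind of source-RTL rewrite the whitepaper describes."""
    return (acc + (a * b & ((1 << USED_BITS) - 1))) & ((1 << USED_BITS) - 1)

# If nothing downstream ever reads bits 12..15, the two versions agree
# on every bit the rest of the circuit can observe:
mask = (1 << USED_BITS) - 1
assert all(mac_full(a, b, 0) & mask == mac_truncated(a, b, 0)
           for a in range(256) for b in range(256))
```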

> While this specific improvement was also independently caught by downstream synthesis tools, AlphaEvolve’s contribution at the RTL stage demonstrates its capability to refine source RTL and provide optimizations early in the design flow.

As the authors admit, this bit-level optimization was performed automatically by the synthesis tool (the software-world equivalent is a compiler performing dead-code elimination). They seem to claim it is better to perform this bit-truncation explicitly in the source RTL rather than letting synthesis handle it. I find this dubious: synthesis guarantees that its optimizations do not change the semantics of the circuit, while a change to the source RTL can change the semantics (versus the original source RTL) and requires human intervention to check semantic equivalence. The exception is when an optimization relies on assumptions about the values seen within the circuit at runtime: synthesis will assume the most conservative situation, where all circuit inputs are arbitrary.
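That runtime-assumption caveat can be shown with a toy equivalence check in Python. The widths and the "optimization" below are made up for illustration; this is the kind of check a human (or a formal-equivalence tool) must run after hand-editing source RTL, not anything from the paper:

```python
def ref(a, b):
    """Reference behavior: full 8x8 multiply, 16-bit result."""
    return (a * b) & 0xFFFF

def optimized(a, b):
    """Hand-truncated variant, only safe if the operands are known to
    stay small at runtime (a hypothetical rewrite, not AlphaEvolve's)."""
    return (a * b) & 0x0FFF

# Under a runtime assumption (a, b < 64, so a*b < 4096) the rewrite is safe:
assert all(ref(a, b) == optimized(a, b)
           for a in range(64) for b in range(64))

# But synthesis must assume arbitrary inputs, and there the rewrite
# changes the circuit's semantics:
assert any(ref(a, b) != optimized(a, b)
           for a in range(256) for b in range(256))
```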

I do agree that this reveals a deficiency in existing synthesis flows: they cannot back-annotate the source RTL with the specific lines/bits that were stripped out of the final netlist, so humans can check whether synthesis did indeed perform an expected optimization.

> This early exploration demonstrates a novel approach where LLM-powered code evolution assists in hardware design, potentially reducing time to market.

I think they are vastly overselling what AlphaEvolve was able to achieve. That isn't to dismiss the potential utility of LLMs for RTL design or optimization.

This just means that it operates on the (debug text form of the) intermediate representation of a compiler.

Sure, but remember that this approach only works for exploring optimizations of a function that has a well-defined evaluation metric.

You can't write an evaluation function for general "intelligence"...

Honestly it's this line that did it for me:

> AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — *including training the large language models underlying AlphaEvolve itself*.

Singularity people have been talking for decades about AI improving itself better than humans could, and how that results in runaway compounding growth of superintelligence, and now it's here.

  • Most code optimizations end up looking somewhat asymptotic towards a non-zero minimum.

    If it takes you a week to find a 1% speedup, and the next 0.7% speedup takes you 2 weeks to find ... well, by using the 1% speedup the next one only takes you 13.86 days. This kind of small optimization doesn't lead to exponential gains.

    That doesn't mean it's not worthwhile - it's great to save power & money and reduce iteration time by a small amount. And it combines with other optimizations over time. But this is in no way an example of the kind of thing the singularity folks envisioned, regardless of whether their vision is realistic.

    • Exactly - the possible improvements may compound, but they converge asymptotically towards an upper limit, absent new insight that establishes a new upper limit.
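The arithmetic above can be checked, and the convergence point illustrated, in a few lines of Python. The 1% and 0.7% figures are the parent comment's; the 0.7x decay per successive win is my own assumption for illustration:

```python
# Applying the 1% speedup shortens the two-week search for the next win:
days = 14.0
speedup = 1.01
print(f"{days / speedup:.2f} days")   # ~13.86 days, as in the parent comment

# Compounding a long series of ever-smaller wins (1%, 0.7%, 0.49%, ...)
# converges instead of running away:
total = 1.0
gain = 0.01
for _ in range(100):
    total *= 1 + gain
    gain *= 0.7           # assumed decay: each win is 70% of the last
print(f"total speedup ~ {total:.3f}x")
```

The product converges to a finite total speedup of a few percent: compounding diminishing wins gives an asymptote, not an exponential.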

  • Long way to go to the singularity. We don't even know if it's possible.

    Basically, the singularity assumes that you can take information about the real-world "state", compress it into some form, and predict state changes faster than reality happens. For a subset of the world, this is definitely possible. But for reality as a whole, there seem to be plenty of processes that are computationally irreducible, so an AI would never be able to "stay ahead", so to speak. There is also computational irreversibility - for example, observing human behavior is seeing the output of a one-way hash of the neural processes in our brains, which hides a lot of detail and doesn't let you predict them accurately in all cases.

    Also, optimization algorithms are nothing new. Even before AI, you could run a genetic algorithm or PSO on code, and given enough compute it would optimize the algorithm, including itself. The hard part, which nobody has solved, is abstracting this to a low enough level that it's applicable across the multiple layers that correspond to any task.

    For example, let's say you have a model (or rather an algorithm) with only a single interface, the ability to send Ethernet packets, and it hasn't been trained on any real-world data at all. If you task it with building you a website that makes money, the same algorithm that iterates over figuring out how to send IP packets, then TCP packets, then HTTP requests should also be able to figure out what the modern world wide web looks like and what concepts like websites and money are, building its knowledge graph, searching over it, and interpolating on it to figure out how to solve the problem.

We are getting ever closer to the point where no one on the planet understands how any of this stuff really works. This will last us until a collapse. Then we are done for.

The singularity has always existed. It is located at the summit of Mount Stupid, where the Darwin Awards are kept. AI is really just pseudo-intelligence; an automated chairlift to peak overconfidence.

  • I love these confident claims! It sounds like you really know what you're talking about - it's either that or you're projecting. Could you elaborate? I for one find the level of intelligence quite real; I use AIs to do a lot of quite complex stuff for me nowadays. I have an agent that keeps my calendar, schedules appointments with people who want meetings with me, summarizes emails and adds those summaries to Notion, breaks them up into to-do lists, answers questions about libraries and APIs, and writes most of my code (although I do need to hold its hand, and it cannot improve by learning from me).