Comment by rkagerer

1 year ago

In a sense, Adrian Thompson kicked this off in the '90s when he applied an evolutionary algorithm to FPGA hardware. Using a "survival of the fittest" approach, he taught a board to distinguish between a 1 kHz and a 10 kHz tone.

The final generation of the circuit was more compact than anything a human engineer would ever come up with (reducible to a mere 37 logic gates), and exploited all kinds of physical nuances specific to the chip it evolved on - including feedback loops, EMI effects between unconnected logic units, and (if I recall correctly) transistors operating outside their saturation region.
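
For anyone who hasn't seen the setup: the loop is a fairly plain genetic algorithm, just with fitness measured on the physical chip instead of in simulation. A minimal sketch (the bitstream length, GA parameters, and the evaluate_on_fpga scoring stub are all illustrative stand-ins, not Thompson's actual values):

    import random

    BITSTREAM_LEN = 1800   # illustrative; the real configurable region differs
    POP_SIZE = 50
    MUTATION_RATE = 1.0 / BITSTREAM_LEN

    def evaluate_on_fpga(bits):
        # Stand-in: program the chip with `bits`, play the 1 kHz and 10 kHz
        # tones, and return how well the output voltage separates them.
        return random.random()  # replace with a hardware-in-the-loop measurement

    def mutate(bits):
        return [b ^ (random.random() < MUTATION_RATE) for b in bits]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(BITSTREAM_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(5000):
        ranked = sorted(population, key=evaluate_on_fpga, reverse=True)
        parents = ranked[:POP_SIZE // 2]   # "survival of the fittest"
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children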

Article: https://www.damninteresting.com/on-the-origin-of-circuits/

Paper: https://www.researchgate.net/publication/2737441_An_Evolved_...

Reddit: https://www.reddit.com/r/MachineLearning/comments/2t5ozk/wha...

Related. Others?

The origin of circuits (2007) - https://news.ycombinator.com/item?id=26998308 (re-upped, so it will get a random placement on HN's front page).

  • If you’re up for sharing, I’m curious to know approximately how many hours each week you spend working on HN. It seems like it would be an enormous amount of time, but I’m just guessing.

  • Did something funky happen to the timestamps in this thread? I could've sworn I was reading it last night (~12h ago)

    • I think dang did something manual to push it back to the frontpage, and that reset the timestamps on everyone’s existing comments…

      There is a comment here by me which says “2 hours ago”, I swear I wrote it longer ago than that - indeed, my threads page still says I wrote it 20 hours ago, so it is like part of the code knows when I really wrote it, another part now thinks I wrote it 18 hours later than I did…

      1 reply →

Fascinating paper. Thanks for the ref.

Operating transistors outside the linear region (the saturated "on") on a billion+ scale is something that we as engineers and physicists haven't quite figured out, and I am hoping that this changes in the future, especially with the advent of analog neuromorphic computing. The quadratic region (before the "on") is far more energy efficient, and the non-linearity could actually help with computing, not unlike the activation function in an NN.

Of course, modeling the nonlinear behavior is difficult. My prof would say that for every coefficient in SPICE's transistor models, someone dedicated an entire PhD to it (and there are a lot of these coefficients!).

I haven't been in touch with the field since I moved up the stack (numerical analysis/ML), but I would love to learn more if there has been recent progress.
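
For reference, the textbook long-channel approximations behind the "quadratic" vs. subthreshold distinction look roughly like this (a crude piecewise sketch with made-up parameter values, nothing like a real BSIM/SPICE model with its many fitted coefficients):

    from math import exp

    VTH = 0.4           # threshold voltage [V], illustrative
    K = 1e-3            # transconductance parameter [A/V^2], illustrative
    N, VT = 1.5, 0.026  # subthreshold slope factor and thermal voltage [V]
    I0 = 1e-7           # subthreshold scale current [A], illustrative

    def drain_current(vgs):
        if vgs >= VTH:
            # square-law ("quadratic") region above threshold
            return 0.5 * K * (vgs - VTH) ** 2
        # subthreshold region: current falls off exponentially below Vth
        return I0 * exp((vgs - VTH) / (N * VT))

    for vgs in (0.2, 0.3, 0.4, 0.6, 0.8):
        print(f"Vgs = {vgs:.1f} V -> Id ~ {drain_current(vgs):.2e} A")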

  • The machine learning model didn't discover something that humans didn't know about. It abused some behavior specific to the chip that could not be reproduced in production, or even on other chips or other configurations of the same chip.

    That is a common problem with fully free-form machine learning solutions: they can stumble upon something that technically works in their training set, but that any human who understood the full system would never actually use, due to the other problems associated with it.

    > The quadratic region (before the "on") is far more energy efficient

    Take a look at the structure of something like CMOS and you’ll see why running transistors in anything other than “on” or “off” is definitely not energy efficient. In fact, the transitions are where the energy usage largely goes. We try to get through that transition period as rapidly as possible because minimal current flows when the transistors reach the on or off state.

    There are other logic arrangements, but I don’t understand what you’re getting at by suggesting circuits would be more efficient. Are you referring to the reduced gate charge?

    • > Take a look at the structure of something like CMOS and you’ll see why running transistors in anything other than “on” or “off” is definitely not energy efficient. In fact, the transitions are where the energy usage largely goes. We try to get through that transition period as rapidly as possible because minimal current flows when the transistors reach the on or off state.

      Sounds like you might be thinking of power electronic circuits rather than CMOS. In a CMOS logic circuit, current does not flow from Vdd to ground as long as either the p-type or the n-type transistor is fully switched off. The circuit under discussion was operated in subthreshold mode, in which one transistor in a complementary pair is partially switched on and the other is fully switched off. So it still only uses power during transitions, and the energy consumed in each transition is lower than in the normal mode because less voltage is switched at the transistor gate.
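
      Back-of-the-envelope, the win is quadratic in the swing: the dynamic energy per node transition is roughly E = 1/2 * C * V^2, so halving the switched voltage cuts switching energy by about 4x. A tiny sketch with illustrative numbers (not taken from the paper):

        def switching_energy(c_load, vdd):
            # energy to charge/discharge a node capacitance once: E = 1/2 * C * V^2
            return 0.5 * c_load * vdd ** 2

        C = 1e-15  # 1 fF node capacitance, illustrative
        for vdd in (1.0, 0.5, 0.3):
            print(f"Vdd = {vdd} V -> {switching_energy(C, vdd) * 1e18:.1f} aJ per transition")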

      1 reply →

    • The previous poster was probably thinking about very low power analog circuits or extremely slow digital circuits (like those used in wrist watches), where the on-state of the MOS transistors is in the subthreshold conduction region (while the off state is the same off state as in any other CMOS circuits, ensuring a static power consumption determined only by leakage).

      Such circuits are useful for something powered by a battery that must have a lifetime measured in years, but they cannot operate at high speeds.

    • In other words, optimization algorithms in general are prone to overfitting. Fortunately, there are techniques to deal with that. The thing is, once you find a solution that generalizes better across different chips, it probably won't be as small as the solution that was found.

  • I'm having trouble understanding. Chips with very high transistor counts tend to use saturation/turn-off almost exclusively. Very little is done in the linear region because it burns a lot of power and it's less predictable.

  • > Operating transistors outside the linear region (the saturated "on")

    Do fuzz pedals count?

    To be fair, we know they work and basically how they work, but the sonic nuances can be very hard to predict from a schematic.

  • >Operating transistors outside the linear region (the saturated "on") on a billion+ scale

    The whole point of switching transistors is that we _only_ operate them in the fully saturated on or totally off IV-curve region?

    Subthreshold circuits are commercially available, just unpopular since all the tools are designed for regular circuits. And the overlap between people who understand semiconductors and people who can make computational tools is very limited, or it's just cheaper to throw people+process shrinks at the problem.

I really wish I still had the link, but there used to be a website that listed a bunch of cases in which machine learning (mostly reinforcement learning) was used to teach a computer how to play a video game, and it ended up using perverse strategies that no human would use - like exploiting weird glitches (https://www.youtube.com/watch?v=meE5aaRJ0Zs shows this with Q*bert).

Closest I've found to the old list I used to go to is this: https://heystacks.com/doc/186/specification-gaming-examples-...

  • In my thesis many years ago [0] I used EAs to build bicycle wheels. They were so annoyingly good at exploiting whatever idiosyncrasies existed in my wheel simulator. In the first iterations of my simulator, it managed to evolve wheels that would slowly oscillate due to floating point instability or something; when forces were applied they would increase and increase until the whole simulator exploded and the recorded forces were all over the place, which of course out-competed any other wheel in at least some objective dimension.

    After fixing those bugs, I mostly struggled with it taunting me. Like building a wheel with all the spokes going from the hub straight up to the rim. It would of course break down when rolling, but on the objective of "how much load can it handle on the bike" it again out-competed every other wheel, and thus was on the Pareto front of that objective and kept showing up through all my tests. Hated that guy, heh. I later changed it to test all wheels in at least 4 orientations; it would then still taunt me with wheels like (c) in this figure [1], exploiting that.

    [0]: https://news.ycombinator.com/item?id=10410813 [1]: https://imgur.com/a/LsONTGc

  • My favorite example was a game of pong with the goal of staying alive as long as possible. One ML algo just paused the game and left it like that.

    • My favorite was the ML learning how to make the lowest-impact landing in a flight simulator - it discovered that it could wrap the impact float value if the impact was high enough, so instead of figuring out the optimal landing, it started figuring out the optimal path to the highest-impact crashes.
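
      A toy illustration of that failure mode; here the recorded impact is assumed to be stored as a 16-bit integer (the details of the original simulator are unknown to me, but the idea is the same: a violent enough crash rolls the value over into what looks like a gentle landing):

        def recorded_impact(true_impact):
            # simulate 16-bit two's-complement wraparound in the sim's scoring code
            return ((int(true_impact) + 32768) % 65536) - 32768

        print(recorded_impact(20_000))   #  20000 -> reads as a hard crash
        print(recorded_impact(70_000))   #   4464 -> huge crash wraps to a "soft landing"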

      17 replies →

    • Is that the Learnfun/Playfun that tom7 made? That one paused just before losing at Tetris and left it like that, because any other input would make it lose.

      1 reply →

  • Make no mistake, most humans will exploit any glitches and bugs they can find for personal advantage in a game. It's just that machines can exploit timing bugs better.

    • Some people are able to do frame-perfect inputs semi-consistently, from what I understand. I don't understand how, as my own performance is around hitting a 100 ms window once every other time.

      2 replies →

  • There are a few very cool examples where someone recently used RL to solve Trackmania, and he ended up having to add all sorts of constraints/penalties to prevent extremely strange exploits/glitches that were discovered, IIRC… it's been a while since I watched.

    https://youtu.be/Dw3BZ6O_8LY?si=VUcJa_hfCxjZhhfR

    https://youtu.be/NUl6QikjR04?si=DpZ-iqVdqjzahkwy

    • Well, in the case of the latter, there was a vaguely known glitch for driving on the nose that allowed for better speeds than are possible on 4 wheels, but it would be completely uncontrollable for a human. He figured out how to break the problem down into steps that the NN could gradually learn piecewise, until he had cars racing around tracks while balancing on their nose.

      It turned out to have learned to keep the car spinning on its nose for stability, and timing inputs to upset the spinning balance at the right moment to touch the ground with the tire to shoot off in a desired direction.

      I think the overall lesson is that, to make useful machine learning, we must break our problems down into pieces small enough that an algorithm can truly "build up skills" and learn naturally, under the correct guidance.

  • For the model, the weird glitches are just another element of the game. As it can't reason, and has no theory of the world or even any real knowledge of what it is doing, the model doesn't have the prior assumptions a human would have about how the game is supposed to be played.

    If you think about it, even using the term "perverse" is a result of us anthropomorphizing any object in the universe that does anything we believe is in the realm of things humans do.

  • > using perverse strategies that no human would do

    Of course we do use perverse strategies and glitches in adversarial multiplayer all the time.

    Case in point: the chainsaw glitch, tumblebuffs, early hits, and perfect blocks in Elden Ring.

  • On YouTube, Code Bullet remakes games so that he can try different AI techniques to beat them.

I've referenced this paper many times here; it's easily in my top 10 of papers I've ever read. It's one of those ones that, if you go into it blind, you have several "Oh no f'king way" moments.

The interesting thing to me now is... that research is very much a product of the right time. The specific Xilinx FPGA he was using was incredibly simple by today's standards, and this is actually what allowed it to work so well. It was 5 V, and from what I remember, the binary bitstream to program it was either completely documented, or he was able to easily generate the bitstreams by studying the output of the Xilinx router - in that era Xilinx had a manual PnR tool where you could physically draw how the blocks connected by hand if you wanted. All the blocks were the same and laid out physically how you'd expect. And the important part is that you couldn't brick the chip with an invalid bitstream. So if a generation made something wonky, it still configured the chip and ran it, no harm.

Most, if not all, modern FPGAs just cannot be programmed like this anymore. Randomly mutating a bitstream would, at best, produce an invalid binary that the chip simply won't load - or, at worst, brick it.

I remember this paper being discussed in the book "The Science of Discworld" - a super interesting collaboration between a fiction author and some real-world scientists, in which the fictional characters discover our universe and its rules. I always thought there was some deep insight about the universe to be had within this paper. Now I think the unexpectedness instead says something about the nature of engineering and control, and about the human mechanisms for understanding these sorts of systems: almost by definition, human engineering relies on linearized approximations to characterize the effects being manipulated, so something that operates in modes far outside those models is basically inscrutable. I think that's kind of expected, but the results still provoke fascination about the solutions superhuman engineering methods might yet find with modern technical substrates.

  • Xe highly recommend the series! Xe keep going back to them for bedtime audio book listening. Chapters alternate between fact and fiction and the mix of intriguing narrative and drier but compelling academic talk help put xir otherwise overly busy mind to rest. In fact, xe bought softcover copies of two of them just last week.

    The science is no longer cutting edge (some are over twenty years old) but the deeper principles hold and Discworld makes for an excellent foil to our own Roundworld, just as Sir Pratchett intended.

    Indeed, the series says more about us as humans and our relationship to the universe than the universe itself and xe love that.

IIRC the flip-side was that it was hideously specific to a particular model and batch of hardware, because it relied on something that would otherwise be considered a manufacturing flaw.

  • Not even one batch. It was specific to that exact one chip it was evolved on. Trying to move it to another chip of the same model would produce unreliable results.

    There is actually a whole lot of variance between individual silicon chips; even two chips right next to each other on the wafer will perform slightly differently. They will all meet the spec on the datasheet, but datasheets always specify ranges, not exact values.

    • If I recall the original article, I believe it even went a step further. While it was running on the same chip it evolved on, if you unplugged the lamp that was in the closest outlet to the chip, the chip stopped working. It was really fascinating how environmentally specific the evolved circuit was.

      That said, it seems like it would be very doable to first evolve a chip with the functionality you need in a single environment, then slowly vary parameters to evolve it to be more robust.

      Or, vice versa, begin evolving the algorithm using a fitness function that is the average performance across 5 very different chips, to ensure some robustness is built in from the beginning.
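
      A rough sketch of that averaged-fitness idea (evaluate_on_chip is a hypothetical hardware-in-the-loop scorer, not anything from the paper):

        def robust_fitness(bitstream, chips, evaluate_on_chip):
            # Average the score over several physically different chips so the
            # evolved circuit can't lean on quirks of any single die.
            # (Using min() instead would optimize worst-case behavior.)
            scores = [evaluate_on_chip(bitstream, chip) for chip in chips]
            return sum(scores) / len(scores)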

      3 replies →

  • A long time ago, maybe in the Russian journal "Radio" around 198x, someone described how, if you got a certain transistor from a particular batch from a particular factory/date and connected it in some weird way, it would make a full FM radio (or something similarly complex), because they had gotten the yields wrong. No idea how they figured that out.

    But mistakes aside, what would it be like if chips from the factory could learn / fine-tune how they work (better), on the fly?

    • At my high school, we had an FM radio transmitter on the other side of the street. Pretty often you could hear one of the stations in the computer speakers in the library, so FM radio can be detected by simple analog circuits.

      1 reply →

I remember talking about this with my friend and fellow EE grad Connor a few years ago. The chip's design really feels like a biological approach to electrical engineering, in the way that all of the layers we humans like to neatly organize our concepts into just get totally upended and messed with.

  • Biology also uses tons of redundancy and error correction that the generative algorithm approach lacks.

    • Though, the algorithm might plausibly evolve it if it were trained in a more hostile environment.

Relying on nuances of the abstraction and undefined or variable characteristics sounds like a very very bad idea to me.

The one thing you generally want for circuits is reproducibility.

  • Yeah, and given that I don't know of any further attempts to do this, it looks like it remains just an intellectual curiosity.

I read the Damn Interesting post back when it came out, and seeing the title of this post immediately led me to think of Thompson's work as well.

Yup, was coming here to basically say the same thing. Amazing innovations happen when you let a computer just do arbitrary optimization/hill climbing.

Now, you can impose additional constraints on the problem if you want to keep it using transistors properly, or to not use EM side effects, etc.

This headline is mostly engagement bait: first, it is nothing new, and second, it is actually fully controllable.

“More compact than anything a human engineer would ever come up with” … sounds more like they built an artificial Steve Wozniak

The interesting thing about this project is that it shouldn't even be possible if the chip behaved as an abstract logical circuit, since then it would simply implement a finite automaton. You have to abuse the underlying physics to make the logic gates behave like something else.

So, the future is reliance on undefined but reproducible behavior

Not sure that's working out well for democracy

That's exactly what I thought of too when I saw the title.

Basically brute force + gradient descent.

Reminds me of disassembled executables, unintelligible to the untrained eye.

It's even more convoluted when re-interpreted into C.

Designs nobody would ever come up with, but equivalent - ones that even with compiler tricks we'd not have known about.

Thompson is who I immediately thought of. Thanks for digging up the actual cite.

And this is the kind of technology we use to decide if someone should get a loan, or if something is a human about to be run over by a car.

I think I'm going to simply climb up a tree and wait this one out.

What if it invented a new kind of human, or a different kind of running over?

Classic, thank you! I've been trying to find this recently. I first heard about this in my genetic algorithms class more than 15 years ago.