Comment by numpad0
1 day ago
It's just cross-modal. The list of components is a flat, linear list; the connections between components form a graph; placements are geometrically constrained; and the overall shape is both geometric and imposed from outside the board. So you can't just mechanically derive the board from a mere linear textual description of it.
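Roughly what that cross-modal mix looks like as data, as a minimal sketch (all names made up for illustration):

```python
# A toy model of why PCB design data is cross-modal: the same board needs
# a linear list, a graph, and 2D geometry at the same time.
from dataclasses import dataclass

@dataclass
class Component:
    ref: str            # e.g. "U1", "R3" -- a flat, linear bill of materials
    footprint: str

# Connectivity is a graph: nets map to sets of (component, pin) endpoints.
netlist = {
    "VCC_3V3": {("U1", 4), ("C2", 1)},
    "SPI_CLK": {("U1", 12), ("U2", 3)},
}

# Placement is geometric: positions and rotations on the board.
placement = {
    "U1": (12.5, 8.0, 90.0),   # x (mm), y (mm), rotation (deg)
    "C2": (14.0, 8.0, 0.0),
}

# The board outline is geometry imposed from outside (enclosure, connectors),
# not derivable from the netlist or from the text prompt that produced it.
board_outline = [(0, 0), (60, 0), (60, 40), (0, 40)]
```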
A lot of automagic "AGI achieved" LLM projects have this same problem: the assumption that a brief, literal prompt will fully constrain the end result as long as it is well thought out. That's just not how it works, not for reality and not for animal brains.
You need a LOT of context about what the components are and how they're being used in order to route them. An extreme case is an FPGA, where a GPIO pin might be a DAC output or one half of a SERDES diff pair.
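A rough sketch of what that missing context looks like (hypothetical pin names and rules; real part constraints come from the datasheet):

```python
# The schematic symbol says "GPIO"; the routing rules depend on what the
# pin is actually used as. Same ball, very different constraints.
pin_capabilities = {
    "A7": {"gpio", "dac_out"},
    "B3": {"gpio", "serdes_p"},   # must pair with B4 as a length-matched diff pair
    "B4": {"gpio", "serdes_n"},
}

def routing_rules(pin: str, role: str) -> dict:
    """Return the constraints that apply once you know the pin's actual role."""
    if role not in pin_capabilities[pin]:
        raise ValueError(f"{pin} cannot be used as {role}")
    if role.startswith("serdes"):
        return {"impedance_ohm": 100, "diff_pair": True, "max_skew_mm": 0.1}
    if role == "dac_out":
        return {"analog": True, "keep_away_from": "switching_nets"}
    return {"diff_pair": False}
```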
It doesn't even have to be that extreme: there is no way the port placement of a Mac Mini can be mathematically derived from a plain-English prompt, and yet that's what they're trying to do. The reality is that not everything happens, or can be done, in literal language. I guess it will take a few more years before everyone accepts that.
There's nothing new in EE under the sun; there hasn't been for 40 years, really. EEs min/max a bunch of mathematical equations. There are a lot of them, but it's not nearly as difficult as people think it is. They end up being design constraints, which can be coded, measured, and fed back into the AI.
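A minimal sketch of that "code it, measure it, feed it back" loop, assuming a hypothetical board object and generator (none of these names are a real tool's API):

```python
# Design rules become checks; check results become text the model can react to.
def check_constraints(board) -> list[str]:
    violations = []
    for trace in board.traces:                     # hypothetical board model
        if trace.width_mm < 0.15:
            violations.append(f"{trace.net}: width {trace.width_mm} mm < 0.15 mm minimum")
        if trace.clearance_mm < 0.2:
            violations.append(f"{trace.net}: clearance {trace.clearance_mm} mm < 0.2 mm minimum")
    return violations

def refine_with_model(board, generate_candidate, max_rounds: int = 10):
    """Measure violations, feed them back to the generator, try again."""
    for _ in range(max_rounds):
        violations = check_constraints(board)
        if not violations:
            return board
        board = generate_candidate(board, feedback="\n".join(violations))
    return board
```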
It hasn't even been three years since GitHub Copilot was released to developers, and now we're all complaining about "vibe-coding".