Comment by Lerc
1 year ago
>The behavior could change from one manufacturing run to another. The behavior could disappear altogether in a future revision of the chip.
That's the overfitting they were referring to. Relying on behaviour specific to an individual chip is the overfit. Running on multiple chips at learning time reduces the benefit of exploiting an improvement that is specific to one chip.
You are correct that simulation is the better solution, but you have to do more than just limit the design to the operating range of the components: you also have to introduce variances comparable to the specified production tolerances. If the simulator assumed that two similar components behaved absolutely identically, then within-tolerance manufacturing variation could be magnified.
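The idea of injecting production-tolerance variance into the simulator can be sketched roughly as follows. This is a toy illustration, not a real circuit simulator: `simulate` is a stand-in scoring function, and the 5% tolerance, component values, and worst-case aggregation are all illustrative assumptions.

```python
import random

def sample_component(nominal, tolerance):
    """Draw one instance's component value within +/- tolerance
    (e.g. tolerance=0.05 means a 5% production spread)."""
    return nominal * random.uniform(1 - tolerance, 1 + tolerance)

def simulate(design, components):
    """Placeholder for a real circuit simulator. This toy score
    simply degrades as the instance's components drift away from
    the values the design was tuned for."""
    return -sum(abs(c - d) for c, d in zip(components, design))

def robust_fitness(design, nominals, tolerance=0.05, runs=100):
    """Score a candidate design against many randomized chip
    instances and keep the worst case, so the learning process
    cannot overfit to one particular chip's quirks."""
    scores = []
    for _ in range(runs):
        instance = [sample_component(n, tolerance) for n in nominals]
        scores.append(simulate(design, instance))
    return min(scores)  # worst case over sampled manufacturing variation

# A design tuned exactly to the nominal values still loses fitness
# once per-instance variation is sampled.
nominals = [1.0, 2.2, 4.7]
print(robust_fitness(nominals, nominals))
```

If the simulator instead treated every component as identical to its nominal value, `robust_fitness` would collapse to a single deterministic score and the learner would be free to exploit that exact parameter set.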
If you simply buy multiple chips at once and train on them, you may still overfit, because they are all likely from the same wafer. If you went to the effort of buying chips from multiple sources, they might still all be the same hardware revision. And even if you got every existing hardware revision, there is no guarantee the code will keep working on hardware revisions that have not come out yet.
There are also problems with chip aging, related circuitry (filtering capacitors age too, so the power supply gets worse over time), operating temperature, faster degradation under unusual conditions...
As long as all you look at is inputs and outputs, it is impossible not to overfit. For a robust system, you need to look at the official, published spec, because that's what the manufacturer guarantees and tests for - and AI cannot do this.
> For a robust system, you need to look at the official, published spec, because that's what the manufacturer guarantees and tests for - and AI cannot do this.
Why not? All you have to do is run it in a simulator.