Comment by bsder

6 days ago

> The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s era fortran. Starting fresh is probably preferable

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, each of which is a unique set of linear differential equations).
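
To make "difficult to get right" concrete: the heart of a SPICE-style DC solve is a Newton iteration over the nonlinear nodal equations. A minimal runnable sketch for a single node, a 5 V source through a 1 kΩ resistor into a diode (all values invented for illustration, not ngspice code):

    /* Newton-Raphson on the KCL residual of one node: resistor + diode.
       Real simulators do this over a large sparse system every timestep,
       with a different model-evaluation routine for each device type. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double Vs = 5.0, R = 1e3;          /* source and resistor */
        const double Is = 1e-14, Vt = 0.02585;   /* diode saturation current, thermal voltage */
        double v = 0.6;                          /* initial guess at the node */

        for (int iter = 0; iter < 50; iter++) {
            double f  = (v - Vs) / R + Is * (exp(v / Vt) - 1.0);  /* KCL residual */
            double df = 1.0 / R + (Is / Vt) * exp(v / Vt);        /* 1x1 Jacobian */
            double dv = -f / df;
            v += dv;
            if (fabs(dv) < 1e-12) {
                printf("converged: v = %.6f V after %d iterations\n", v, iter + 1);
                break;
            }
        }
        return 0;
    }

Even this toy shows the trouble: the exponential makes the raw iteration fragile (real solvers add junction-voltage limiting), and things only get harder once capacitors with wildly different time constants make the system stiff.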

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.

> and generally resistant to parallelization (each device can have its own model, each of which is a unique set of linear differential equations)

Solving sets of differential equations is something that's parallelizable, though.

See, for example, how there are physics engines running on GPUs. That's mechanics rather than electric circuits, but it's differential equations all the same.

  • Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand; see the note below). Non-linear ones range from those that can basically be approximated by a linear solution with corrections to those that need relaxation methods (which are obviously not parallelisable).

    Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.
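
    (Concretely, the "standard solution" for a constant-coefficient linear system x'(t) = A x(t) is the matrix exponential x(t) = exp(At) x(0), which is why the linear case can more or less be written down by hand.)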

    • To be specific: a linear solver can be written in a week (I have done it).

      A serious non-linear solver that handles legacy SPICE models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, you take that to a higher level still.

      These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.

      Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth, which is what makes them so hard to parallelize effectively.
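
      A minimal 1-D sketch of why (all constants illustrative, nothing tuned): each cell update is a couple of loads, a store, and two flops, so the loop saturates memory bandwidth long before it saturates the arithmetic units.

          /* 1-D FDTD (Yee) update loop: arithmetic intensity is so low
             that memory bandwidth, not compute, sets the speed limit. */
          #include <stdio.h>

          #define N 1000000

          static double ez[N], hy[N];

          int main(void) {
              const double c = 0.5;               /* Courant factor, illustrative */
              for (int step = 0; step < 100; step++) {
                  ez[N / 2] += 1.0;               /* crude source injection */
                  for (int i = 1; i < N; i++)     /* E-field update */
                      ez[i] += c * (hy[i] - hy[i - 1]);
                  for (int i = 0; i < N - 1; i++) /* H-field update */
                      hy[i] += c * (ez[i + 1] - ez[i]);
              }
              printf("ez[N/2] = %g\n", ez[N / 2]);
              return 0;
          }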

    • The type of people who need SPICE are dead serious about accuracy; sometimes even a 1 ppm error is not tolerable. So an optimization from a game engine is definitely not suitable for engineering simulation.

As others have mentioned, it's not actually that performant. The matrix solve is about as fast as a single-threaded solution can be, but the problem is parallelizable. There are a number of GPU implementations, and I have even heard of offloading the matrix solve to an FPGA, though without unified memory much of the gain is lost to moving data back and forth.

Even if you avoid most of the numerical code initially, the interface in the original SPICE core is a mess of string handling built around a custom shell experience. There are tricks like setting the upper bit of every byte to 1 while inside quotes so that the custom shell's history matching skips over quoted text. Very elegant for the time, but now it means that if you want nodes with non-ASCII names, you're either keeping a mapping outside or using UTF-7.
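
A toy version of that trick (hypothetical code, not the actual ngspice source):

    /* Set the high bit of every byte inside double quotes so a later
       history-matching pass can skip "quoted" bytes cheaply. This only
       works when the input is 7-bit ASCII, which is exactly the problem. */
    #include <stdio.h>

    static void mark_quoted(unsigned char *s) {
        int in_quote = 0;
        for (; *s; s++) {
            if (*s == '"') { in_quote = !in_quote; continue; }
            if (in_quote)
                *s |= 0x80;   /* tag byte as "inside quotes" */
        }
    }

    static void unmark(unsigned char *s) {
        for (; *s; s++)
            *s &= 0x7F;       /* strip the tag before display */
    }

    int main(void) {
        unsigned char line[] = "plot \"v(out)\" tran";
        mark_quoted(line);
        /* a history matcher would skip any byte with the high bit set */
        unmark(line);
        printf("%s\n", (char *)line);
        return 0;
    }

Any UTF-8 byte above 0x7F already has that bit set, so the marker and the data collide; hence the external mapping or UTF-7.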

Another great example is the expression parsing. There was a long-standing bug where the expression parser leaked ~160 bytes for every step of an output expression, for every timestep. So, for example, if you had "($2 * 4) + 1" as an expression (five steps) and ran a simulation for 10,000 timesteps, you'd leak 5 × 160 × 10,000 = 8M bytes.
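
The pattern behind that kind of leak looks roughly like this (invented names, not the actual ngspice parser, and the per-node allocation here is smaller than 160 bytes):

    /* An evaluator that heap-allocates a scratch result at every operator
       node on every call and never frees it, so the leak grows with
       (nodes per expression) x (evaluations per run). */
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *left, *right;
        char op;          /* 0 for a leaf */
        double value;
    };

    static double eval(const struct node *n) {
        if (n->op == 0)
            return n->value;
        double a = eval(n->left), b = eval(n->right);
        double *scratch = malloc(sizeof *scratch);   /* never freed: leaked */
        *scratch = (n->op == '*') ? a * b : a + b;
        return *scratch;   /* the value escapes, the pointer does not */
    }

    int main(void) {
        /* ($2 * 4) + 1, with $2 standing in for an output vector value */
        struct node v2  = {0, 0, 0, 3.0};
        struct node c4  = {0, 0, 0, 4.0};
        struct node c1  = {0, 0, 0, 1.0};
        struct node mul = {&v2, &c4, '*', 0};
        struct node add = {&mul, &c1, '+', 0};

        for (int step = 0; step < 10000; step++)   /* one eval per timestep */
            eval(&add);                            /* two blocks leaked each time */
        printf("result: %g\n", eval(&add));
        return 0;
    }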

> That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Code hyper-optimized for '70s-era Fortran is not going to be all that optimized on modern CPUs.

I bet that just the compiler optimizations LLVM could do on clean code would make it faster.

And correctness matters too: I'd guess there aren't that many hardcore electrical engineers/physicists/mathematicians who can make sure the results it produces are correct and sound, and who can debug weird issues coming from numerical stability.

The sort of people who can do this are very rare, and it's not likely they will just randomly decide to donate their time to rewrite the codebase.

> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.

That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.