Comment by jsd1982
10 days ago
I've built a VST3 plug-in that simulates a Mesa Boogie Mark IIC+ preamp purely from the circuit.
The approach doesn't seem popular for professional plug-ins, likely because it wasn't viable in real time until modern CPUs became fast enough. Interestingly, performance scales with the frequency content of the input. That seems to be a consequence of using an iterative solver on the circuit's system of equations and seeding each sample with the previous sample's state vector as the initial guess: the faster the signal changes between samples, the farther that guess is from the new solution and the more iterations the solver needs.
On my MacBook M3 it requires between 50% and 70% of a single core to produce a 2x oversampled output at 48000 Hz. This can be scaled back by relaxing the solution tolerance bounds, which brings it down to around 25% with minimal quality loss.
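
To make the warm-start idea concrete, here's a rough sketch of the per-sample Newton iteration. This is not the plug-in's actual code; it uses a trivial one-diode clipper as a stand-in for the real system of circuit equations, and the component values, tolerance, and iteration cap are made up for illustration.

    // Per-sample Newton-Raphson with a warm start from the previous sample's
    // solution. Stand-in circuit: series resistor into a diode to ground,
    // solving f(v) = v + R*Is*(exp(v/Vt) - 1) - vIn = 0 for the diode voltage v.
    #include <cmath>
    #include <cstdio>

    struct DiodeClipper {
        // Made-up component values for illustration only.
        double R  = 2200.0;    // series resistance (ohms)
        double Is = 1e-14;     // diode saturation current (A)
        double Vt = 0.02585;   // thermal voltage (V)

        double vPrev = 0.0;    // previous sample's solution, reused as the guess

        double process(double vIn, double tol = 1e-6, int maxIter = 50) {
            double v = vPrev;                         // warm start
            for (int i = 0; i < maxIter; ++i) {
                double e  = std::exp(v / Vt);
                double f  = v + R * Is * (e - 1.0) - vIn;   // residual
                double fp = 1.0 + (R * Is / Vt) * e;        // df/dv
                double dv = f / fp;
                v -= dv;                              // Newton step
                if (std::fabs(dv) < tol)              // convergence test
                    break;
            }
            vPrev = v;                                // next sample's guess
            return v;
        }
    };

    int main() {
        const double pi = 3.14159265358979323846;
        DiodeClipper clip;
        // A slowly varying input converges in very few iterations per sample
        // because the previous solution is already close; higher-frequency
        // content moves farther between samples and needs more iterations.
        for (int n = 0; n < 8; ++n) {
            double vIn = 0.5 * std::sin(2.0 * pi * 1000.0 * n / 96000.0);
            std::printf("%2d: diode voltage = %f\n", n, clip.process(vIn));
        }
        return 0;
    }

The tol parameter is the kind of knob I mean by "tolerance bounds": loosening it lets the loop exit earlier at the cost of a slightly less accurate solution.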
How do the performance and accuracy compare to modeling the same amp using NAM or a similar AI tool?
Not sure; I don't have any of those to compare against. The response feels very natural from my own playing.
Here's a little announcement video I put together a week ago for an earlier version:
https://m.youtube.com/watch?v=xEy34cuOPaY