Show HN: I used AI to recreate a $4000 piece of audio hardware as a plugin

1 month ago

Hi Hacker News,

This is definitely out of my comfort zone. I've never programmed DSP before, but I was able to use Claude Code to help me build this with CMajor.

I just wanted to show you guys because I'm super proud of it. It's a 100% faithful recreation based on the schematics, patents, and ROMs that were found online.

So please watch the video and tell me what you think.

https://youtu.be/auOlZXI1VxA

I think this is relevant because I've been a programmer for 25 years and AI scares the shit out of me.

I'm not a programmer anymore. I'm something else now. I don't know what it is, but it's multi-disciplinary, and it doesn't involve writing code myself, for better or worse!

Thanks!

I used to do that exact job 10 years ago (without AI, obviously). I imagine that career is very different now.

There was something exciting about sleuthing out how those old machines worked: we used a black box approach, sending in test samples, recording the output, and comparing against the digital algorithm’s output. Trial and error, slowly building a sense of what sort of filter or harmonics could bend a waveform one way or another.
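
In today's terms, the heart of that loop would be a few lines of analysis code (a rough sketch; the capture files and the digital_algo function are stand-ins for whatever is under test):

    import numpy as np
    from scipy.io import wavfile

    # the test signal we sent through the hardware, and what came back
    rate, test_in = wavfile.read("sweep_in.wav")       # e.g. a log sine sweep
    _, hw_out = wavfile.read("hardware_out.wav")       # recorded from the unit

    sw_out = digital_algo(test_in.astype(np.float64))  # candidate algorithm

    # compare magnitude spectra: where do the two outputs diverge?
    n = min(len(hw_out), len(sw_out))
    H = np.abs(np.fft.rfft(hw_out[:n].astype(np.float64)))
    S = np.abs(np.fft.rfft(sw_out[:n]))
    err_db = 20 * np.log10((np.abs(H - S) + 1e-12) / (H + 1e-12))
    freqs = np.fft.rfftfreq(n, d=1.0 / rate)
    print(f"worst deviation: {err_db.max():.1f} dB near {freqs[err_db.argmax()]:.0f} Hz")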

I feel like some of this is going to be lost to prompting, the same way hand-tool woodworking has been lost to power tools.

  • It will be the future for sure: software as a tool for everyone.

    While there is something lost in prompting, people will always seek out first principles so they can understand what they are commanding and controlling, especially as old machines become new machines with capabilities that weren't even imaginable before because of the old software complexity wall.

    • It's exactly as you say: software as a tool for everyone. And it's hard for programmers like me to accept that, because I've spent so much time, read so many books, and worked so hard perfecting my craft.

      But smart programmers will realize the world doesn't care about any of that at all.

      7 replies →

  • I wonder if we could have released the *stressor in a few months, then...

    • I’d love to see someone try.

      Though using AI to build the devtools we used for signal analysis would have been helpful.

I was hoping that the video was a walkthrough of your process - do you think you might share that at some point?

> I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!

Yes, I agree. I think the role of software developer is going to evolve into much more of an administrative, managerial role, dealing more with the organisation you're in than with actually typing code. Honestly, I think it was probably always heading in this direction, but it's definitely quite a step change. I wrote about it a little incoherently on my blog just this morning: https://redfloatplane.lol/blog/11-2025-the-year-i-didnt-writ...

  • As someone who works at a place where we do a lot of code analysis and also research AI's effect on code quality: if you do not so much as look at your code anymore, I do not believe you are creating maintainable, quality software. Maybe you don't need to, or care to, but it's definitely not sustainable in long-term product companies.

    AI is a force multiplier - it makes bad worse, and it _can_ make good better. You need even more engineering discipline than before to make sure it's the latter and not the former. Even with chained code-quality MCPs and a whole bunch of instructions in AGENTS.md, there's often a need to intervene and course-adjust, either because AI can ignore AGENTS.md, or because passing code-quality checks doesn't mean the architecture is solid.

    That being said, I do agree our job is changing from merely writing code to more of a managerial role, like you've said. But there's a new limit: your ability to review the output. And you most definitely should review the output if you care about long-term sustainable, quality software.

    • 6 months ago I agreed with your statement,

      but AI being solely a force multiplier is not accurate: it is an intelligence multiplier. There are significantly better ways now to apply skill and taste with less worry about technical debt. AI coding agents have gotten to the point that they remove virtually ALL effort barriers, even to paying off technical debt.

      While it is still important to pay attention to the direction your code is being generated in, the old fears and caution we attributed to previous iterations of AI codegen are largely being eroded, and this trend will continue to the point where our "specialty" will no longer matter.

      I'm already seeing small businesses that laid off their teams, with the business owner generating the code themselves. Defending the thinning moat of not only software but virtually all white-collar jobs is getting tougher.

    • > if you care about long-term sustainable, quality software

      If software becomes cheaper to make, it amortizes at a higher rate, i.e., it becomes less valuable at a faster clip. This means more ephemeral software with a shorter shelf life. What exactly is wrong with a world where software is borderline disposable?

      I’ve been using Photoshop since the 90s, and without having watched the features expand over the years, I don’t think I would find the tool usable; it certainly isn’t for someone without a lot of experience.

      This being said, short-lived, highly targeted, less feature-full software for image creation and manipulation, catered to the individual and specific to an immediate task, seems advantageous.

      Dynamism applied not to the code but to the products themselves.

      Or something like that.

      3 replies →

    • Yes, I didn't do a great job of managing my language in that post (I blame flu-brain). In the case where _someone_ is going to be reading the code I output, I do review it and act as the pilot-not-flying rather than as a passenger. For personal code (as opposed to code for a client), which is the majority of what I've written since Opus 4.5 was released, that hasn't been the case.

      I'll update the post to reflect the reality, thanks for calling it out.

      I completely agree with your comment. I think the ability to review code, architecture, and abstractions matters more than the actual writing of the code. In fact, this has really always been the case; it's just clearer now that everyone has a lackey to do the typing for them.

How can you say it's a 100% faithful recreation if you've never programmed DSP before?

  • Indeed, the same questions came up a few days ago when somebody shared a "generated" NES emulator. We need these questions answered when sharing, otherwise we can't compare.

    • At some point the LLM ingested a few open-source NES emulators and many articles on their architecture, so I question how much LLM creativity is involved in these types of examples. Probably the same goes for DSPs.

      2 replies →

    • I’m not claiming a 100% faithful physical recreation in the strict scientific sense.

      If you look at my other comment in this thread, my project is about designing proprioceptive touch sensors (robot skin) using a soft-body simulator largely built with the help of an AI. At this stage, absolute physical accuracy isn’t really the point. By design, the system already includes a neural model in the loop (via EIT), so the notion of "accuracy" is ultimately evaluated through that learned representation rather than against raw physical equations alone.

      What I need instead is a model that is faithful to my constraints: very cheap, easily accessible materials, with properties that are usually considered undesirable for sensing: instability, high hysteresis, low gauge factor. My bet is that these constraints can be compensated for by a more circular system design, where the geometry of the sensor is optimized to work with them.

      Bridging the gap to reality is intentionally simple: 3D-print whatever geometry the simulator converges to, run the same strain/stress tests on the physical samples, and use that data to fine-tune the sensor model.

      Since everything is ultimately interpreted through a neural network, some physical imprecision upstream may actually be acceptable, or even beneficial, if it makes the eventual transfer and fine-tuning on real-world data easier.
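
      In sketch form (scikit-learn here; V_sim/V_real are placeholder arrays of electrode voltages, touch_sim/touch_real the corresponding targets, and warm-starting is just one simple way to do that fine-tuning):

          from sklearn.neural_network import MLPRegressor

          # pretrain on abundant simulator data: electrode voltages -> touch state
          model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, warm_start=True)
          model.fit(V_sim, touch_sim)

          # fine-tune: warm_start resumes from the learned weights instead of reinitializing
          model.set_params(max_iter=100)
          model.fit(V_real, touch_real)  # a handful of 3D-printed physical samples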

      4 replies →

  • I had the hardware for both units and used them extensively, so I'm 100% familiar with how they sound.

    And I'm not doing it based on my ears. I know the algorithm, have the exact coefficients, and there was no guesswork except for the potentiometer curves and parts of the room algorithm that I'm still working out, which is a completely separate component of the reverb.

    But when I put it up for sale, I'll make sure to go into detail about all that so people who buy it know what they're getting.

  • Standard AI response. Similar to "production-ready", "according to industry standards", or "common practices" used to justify an action or indicate it is done, without even compiling or running the code, let alone understanding the output. An AI can't hear and, even worse, can't relate what it hears. Ask it to create a diode ladder filter, and it will boast that it created a "physically correct analog representation" while outputting clean and pure signals...

    • For context, I'm working on a proper SPICE component-level Diode Ladder.

      I tried this for laughs with Gemini 3 Pro. It spit out the same ZDF implementation that is on countless GitHub repos, originating from the 2nd Pirkle FX book (2019).
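
      For anyone curious, that ubiquitous ZDF building block is the trapezoidal (TPT) one-pole; from memory, not Pirkle's exact code:

          import math

          class ZDFOnePole:
              """Zero-delay-feedback (TPT) one-pole lowpass."""
              def __init__(self, sample_rate: float, cutoff_hz: float):
                  g = math.tan(math.pi * cutoff_hz / sample_rate)  # prewarped gain
                  self.G = g / (1.0 + g)
                  self.s = 0.0  # trapezoidal integrator state

              def process(self, x: float) -> float:
                  v = (x - self.s) * self.G  # resolves the implicit feedback loop
                  y = v + self.s
                  self.s = y + v
                  return y

      Roughly speaking, four of those stages inside a feedback loop is the skeleton of most ladder-filter clones.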

      1 reply →

  • Maybe the OP has the hardware and can compare the sound both subjectively and objectively? Does it have to be a 100% exact copy to be called the same? (Individual electronic components are never identical, btw.)

    • The OP didn't clarify. But if there's a claim of 100% faithful recreation, I'd expect something to back it up, like time- and frequency-domain comparisons of input and output with different test signals. Or at least something. But there isn't anything.

      The video claims: "It utilizes the actual DSP characteristics of the original to bring that specific sound back to life." The author admits they have never programmed DSP. So how are they verifying this claim?
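
      Even a crude, sample-aligned null test would be a start (a sketch; the file names are invented):

          import numpy as np
          from scipy.io import wavfile

          rate, hw = wavfile.read("hardware_take.wav")  # same input through both paths
          _, plug = wavfile.read("plugin_render.wav")

          n = min(len(hw), len(plug))
          hw = hw[:n].astype(np.float64)
          plug = plug[:n].astype(np.float64)

          # residual energy after subtraction, relative to the hardware take;
          # a faithful clone should null deeply (strongly negative dB)
          residual_db = 10 * np.log10(np.mean((hw - plug) ** 2) / np.mean(hw ** 2))
          print(f"null-test residual: {residual_db:.1f} dB")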

      2 replies →

Very nice work. I’m curious: what kinds of projects are you guys currently working on that genuinely push you out of your comfort zone?

I had a small epiphany a couple of weeks ago while thinking about robot skin design: using conductive 3D-printed structures whose electrical properties change under strain, combined with electrical impulses, a handful of electrodes, a machine-learning model to interpret the measurements, and computational design to optimize the printed geometry.

While digging into the literature, I realized that what I was trying to do already has a name: proprioception via electrical impedance tomography. It turns out the field is very active right now.

https://www.cam.ac.uk/stories/robotic-skin

That realization led me to build a Bergström–Boyce nonlinear viscoelastic parallel rheological simulator using Taichi. This is far outside my comfort zone. I’m just a regular programmer with no formal background in physics (apart from some past exposure to Newton-Raphson).

Interestingly, my main contribution hasn’t been the math. It’s been providing basic, common-sense guidance to my LLM. For example, I had to explicitly tell it which parameters were fixed by experimental data and which ones were meant to be inferred. In another case, the agent assumed that all the red curves in the paper I'm working with referred to the same sample, when they actually correspond to different conducting NinjaFlex specimens under strain.

Correcting those kinds of assumptions, rather than fixing equations, was what allowed me to reproduce the results I was seeking. I now have an analytical, physics-grounded model that fits the published data. Mullins effect: modeled. Next up: creep.
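
Concretely, "fixed vs. inferred" just means pinning the known constants in the fitting code instead of letting the optimizer roam. A toy relaxation fit for illustration (not my actual Bergström–Boyce setup, and the CSV name is made up):

    import numpy as np
    from scipy.optimize import curve_fit

    # toy model: sigma(t) = E_inf*eps + (E0 - E_inf)*eps*exp(-t/tau)
    EPS = 0.10     # applied strain: fixed by the experiment, NOT a fit parameter
    E_INF = 2.1e6  # long-term modulus (Pa), read off the published plateau

    def stress(t, E0, tau):
        # only the instantaneous modulus E0 and relaxation time tau are inferred
        return E_INF * EPS + (E0 - E_INF) * EPS * np.exp(-t / tau)

    t_data, sigma_data = np.loadtxt("ninjaflex_relaxation.csv",
                                    delimiter=",", unpack=True)
    (E0_fit, tau_fit), _ = curve_fit(stress, t_data, sigma_data, p0=(5e6, 10.0))
    print(f"E0 = {E0_fit:.3g} Pa, tau = {tau_fit:.3g} s")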

We’ll see how far this goes. I’ll probably never produce anything publishable, patentable, or industrial-grade. But I might end up building a very cheap (and hopefully not that inaccurate), printable proprioceptive sensor, with a structure optimized so it can be interpreted by much smaller neural networks than those used in the Cambridge paper.

If that works, the attempt will have been worth it.

Isn't that like the Ursa Major Stargate 323 Reverb? Greybox Audio released code for this about a year ago: https://github.com/greyboxaudio/SG-323

  • Thanks for mentioning this project. I have been looking for a good reverb plugin for Linux for a while now, and this sounds great.

    • There might be a plugin based on Freeverb, which is also a good-sounding one. I have it as a logue unit, so I can't recommend a plugin immediately. At least I know Greybox's is based on actual device comparison, as he owns one and has been doing this for 5 years sans AI.
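
      (For the curious: Freeverb's core is eight parallel lowpass-feedback combs feeding four series allpasses. One comb, sketched from memory of Jezar's public-domain code:)

          class LowpassFeedbackComb:
              """Delay line with a damped feedback path (one of Freeverb's 8 combs)."""
              def __init__(self, delay_samples: int, feedback: float = 0.84, damp: float = 0.2):
                  self.buf = [0.0] * delay_samples
                  self.idx = 0
                  self.feedback = feedback
                  self.damp = damp
                  self.lp = 0.0  # one-pole lowpass state inside the feedback loop

              def process(self, x: float) -> float:
                  y = self.buf[self.idx]
                  # damping darkens the tail, like high-frequency absorption in a room
                  self.lp = y * (1.0 - self.damp) + self.lp * self.damp
                  self.buf[self.idx] = x + self.lp * self.feedback
                  self.idx = (self.idx + 1) % len(self.buf)
                  return y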

This is fantastic. I’m currently building a combustion engine simulator doing exactly what you did. In fact, I found a number of research papers, had Claude implement the included algorithms, and then incorporated them into the project.

What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.

  • Right on, that's awesome! I think I'm doing more of what you did rather than the other way around. Looks like you're pretty established. How long did it take to build your YouTube channel into what it is? What's that process been like?

Awesome. In 2025 I made a few apps for my small business that I had spent hours trawling the web looking for, and I have little coding skill.

Sometimes it feels like I'm living in a different world, reading the scepticism on here about AI.

I'm sure there are enterprise cases where it doesn't make sense, but for your everyday business owner it's amazing what can be done.

Maybe it's a failure of imagination, but I can't imagine a world where this doesn't impact enterprise in short order.

  • With all due respect you are living in a different world. Not in a bad way, it’s just you haven’t experienced what maintenance on a large complicated code base is like.

  • No, you're absolutely right. One of the things I'm starting to see (and I wrote another Hacker News post about this) is that more people are coming out talking about all the mistakes AI is making, even as it gets better. Then you've got people like Karpathy talking about how drastically the landscape is shifting.

    I've been doing this for 25 years, and I can tell you that the AI is a better coder than me, but I know how to use it. I review the code that it puts out, and it's better. I'm assuming the developers who are having a hard time with it are just not as experienced with it.

    If you think your job is going to stay "programmer", I just don't see it. I think you need to start providing value and using coding as just a means to do that, rather than coding being valuable in itself. It's just not as valuable anymore.

I don’t think we were ever supposed to be programmers. A lot of us are scared because we assumed that knowing every detail of a system or language, and being able to conjure a system with code, was the point of the profession. But it was always about building things, or engineering, just with different tools. If we get to the point where we can ask AI to 3D-print a spaceship and also build JARVIS into it for navigation, then your job will become something else, like figuring out how to build brain-computer interfaces as we get on our way to becoming cyborgs (or whatever) for FTL journeys. Building interfaces will not be something we do anymore; UIs will just be conjured on the fly, contextually, by the AI.

Our challenge will always be to keep track of all the foundational knowledge so we can rebuild it all if it comes crashing down (if AI or some other event tries to end us).

You should feel excited about it and level up to the next thing where you will be needed, which is building reliable, heterogeneous, self-healing systems, often without a contract between them.

This will mean you can conjure up an entire tax management system, a financial system, a government management system, quickly and have them all talk to each other so people can just go about their lives.

A dam is built, and you immediately have a system that can operate it and all of its equipment.

This may give manufacturers freedom to innovate without worrying about breaking things. Just install it and let the AI learn it, tell you if it needs to calibrate the new equipment, or adjust the existing system to better integrate it, take better advantage of it, etc.

There is so much to do in that direction and many others (I mean healthcare, etc.; why not eat big pharma’s lunch?) that we should be excited and not afraid. Of course, current AI is nowhere near this, and maybe what enables it will take an entirely different shape, but the fact that we’re all putting effort into getting there instead of worrying about Angular vs. React is what I love the most.

I'm not in the domain, though I did dabble with a DAW and tinker with a PGB-1 and its open-source firmware, but how far would you say CMajor helped? I feel like picking the right tool alone, be it framework, paradigm, etc., can make or break a project.

Consequently, to better understand how special this is (especially since I don't see a link to the code itself), I'd appreciate hearing how one goes from e.g. https://cmajor.dev/docs/GettingStarted#creating-your-first-p... to a working DSP.

On your "Scares the shit out of me" comment.

Use AI like a CNC machinist uses a mill. You're still in the loop, but break it into manageable "passes" with testing touchpoints. These touchpoints allow you to understand what's going on. Nothing wrong with letting AI oneshot something, but it's more fun and less ennui to jump in and look around and exercise some control here and there. And, on larger systems, this is basically required. (for now, perhaps).

This is how I do it now: https://jodavaho.io/posts/ai-useage-2025.html

  • Exactly this! I am a retired EE just messing around with AI in my homelab datacenter, and that has been my approach as well. It's an amazing force multiplier: I can finally create more or less what I want in software, based on first principles and basic systems engineering approaches, just by guiding the AI. I have used Go, Ansible, Terraform, and TypeScript, languages and tools I never had time to learn, and now I can create working tools and solutions for whatever my need is at the moment.

    The other day my STT subscription app became too laggy, so I asked Claude to spin up an endpoint on one of my GPU boxes, create a proxy server, intercept transcript-cleanup calls, create traces in Langfuse, set up a prompt eval framework, etc. We make a plan, iterate on it, I usually get Codex or Gemini in on the call as well, and in a couple of hours I have a good enough solution for my personal needs. This would probably have been a weekend-or-more project before.

    The skepticism here does remind me a bit of when I learned how to use hand tools for woodworking. Ultimately it was nice to be able to make mortises by hand with a chisel, but damn if using Festool Dominos is not that much more productive.

Great achievement!

Regarding your own titling: you are now some type of "platform operator/manager" of these agents :-))

Did you recreate the UI only, or also the internal circuits? Does it produce similar distortion?

  • I recreated the UI and the internal circuits, but it's a hundred percent DSP. The SST206 is a recreation of the SST282 (in DSP); he expanded the bandwidth from 7 kHz to 22 kHz, so it doesn't produce distortion, but it can get dark like the original. The SST206 isn't grungy like the original, so it lacks some of that character; it makes up for it in delay time.

We are all glorified QA testers with a software architect title now. Sure, we set the structure of what we want, but the AI does everything else, and most of our time is now spent testing and complaining to the AI.

Pretty soon AI will do the QA portion as well. It will generate any piece of software, even games, for a cool $200/month from a vendor of choice: Microsoft (OpenAI) or Google.

Companies will stop paying for SaaS or complex ERP software; they will just generate their own, which only the AI knows how to maintain, run, and add features to.

It's ironic that software developers are the most enthusiastic about automating their jobs out of existence. No union, no laws that interfere with free market forces.

  • Maintenance still has a cost and will surely exist in some fashion?

    But if that is going to be the case, I want to be the best of the best at understanding it all so that I’m the first hired and last fired lol