
Comment by Lerc

5 hours ago

>Our current approach with AI code is "draft a design in 15mins" and have AI implement it. That contrasts with the thoughtful approach a human would take with other human engineers. Plan something, pitch the design, get some feedback, take some time thinking through pros and cons. Begin implementing, pivot, realizations, improvements, design morphs.

Perhaps that is the distinction between reports of success with AI and reports of abject failure. Your description of "Our current approach" is nothing like how I have been working with AI.

When I was writing some code to do complex DMA chaining, the first step with the AI was to write an emulator function that produced the desired result in software from the given parameters. Then came a suite of tests using memory-to-memory operations that would produce verifiable output. Only then did I start building the version that wrote to the hardware registers, ensuring that the hardware produced the same memory-to-memory results as the emulator. When discrepancies occurred, I checked the test case, the emulator, and the hardware, with the stipulation that the hardware was the ground truth for behaviour and the test case should represent the desired result.
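The workflow above is essentially differential testing: a pure-software emulator serves as the reference, a test suite of memory-to-memory transfers produces verifiable output, and the hardware-backed path must match it. A minimal sketch of that shape, with all names and the descriptor format invented for illustration (the real hardware path would program DMA registers rather than call the emulator):

```python
def emulate_dma_chain(src, descriptors):
    """Software model: apply each (src_off, length, dst_off) descriptor
    in order, copying from the source buffer into the output buffer,
    the way the hardware chain is expected to behave."""
    dest = bytearray(len(src))
    for src_off, length, dst_off in descriptors:
        dest[dst_off:dst_off + length] = src[src_off:src_off + length]
    return bytes(dest)

def run_hw_dma_chain(src, descriptors):
    """Stand-in for the driver that writes the hardware registers and
    kicks off the real transfer. In this sketch it just defers to the
    emulator so the example is runnable."""
    return emulate_dma_chain(src, descriptors)

def check(src, descriptors):
    """Run both paths on the same memory-to-memory case and compare.
    On a mismatch, the hardware is ground truth: suspect the test case
    and the emulator first, not the silicon."""
    expected = emulate_dma_chain(src, descriptors)
    actual = run_hw_dma_chain(src, descriptors)
    assert actual == expected, (expected, actual)
    return actual

# Example case: two descriptors that swap the halves of an 8-byte buffer.
check(b"abcdefgh", [(0, 4, 4), (4, 4, 0)])  # -> b"efghabcd"
```

The point of the structure is that every discrepancy becomes a three-way question (test case, emulator, or driver?) with a fixed tie-breaker, rather than a debugging session against opaque hardware.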

I occasionally ask LLMs to one-shot full complex tasks, but when I do so it is more as a test to see how far they get. I'm not looking to use the result; I'm just curious what it might be. The amount of progress they make before getting lost is advancing at quite a rate.

It's like seeing an Atari 2600 and expecting it to be a Mac. People want to fly to the moon with Atari 2600-level hardware. You can use hardware at that level to fly to the moon, and flying to the moon is an impressive achievement enabled by the hardware, but to do so you have to wrangle a vast array of limitations.

They are no panacea, but they are not nothing. They have been, and will remain, somewhere in between for some time. Nevertheless, they are getting better and better.