Comment by corndoge
9 hours ago
Yep, the real strength of AI is less in replacing engineering skills and more in slashing all the time we spend not using those skills, doing low-level research and data-correlation tasks instead. Which isn't to say those tasks aren't valuable in their own way, but in terms of raw output...
I long for the day when they will supervise CI/CD systems.
Trying to fix syntax errors in string interpolation on a 5-minute-delay loop is hell.
Just create a skill for it -> I call mine `babysit`. It spins up a subagent that polls CI every x minutes and auto-fixes until it's green. I just move on to the next task while it runs in the background.
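The shape of a `babysit`-style skill is basically a poll-and-fix loop. A minimal sketch, assuming hypothetical `get_status` and `attempt_fix` callables standing in for whatever the subagent actually runs (a CI status query and an LLM fix step):

```python
import time

def babysit(get_status, attempt_fix, interval=60, max_polls=20):
    """Poll CI; on a red status, trigger a fix attempt; stop when green.

    get_status / attempt_fix are placeholders for the real calls the
    subagent would make (e.g. querying the pipeline, pushing a fix).
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "green":
            return "green"
        if status == "red":
            attempt_fix()        # e.g. hand the failing log to the agent
        time.sleep(interval)     # wait out the CI delay before re-polling
    return "timeout"             # give up rather than loop forever

# Simulated run: CI fails twice, then passes after two fix attempts.
statuses = iter(["red", "red", "green"])
fixes = []
result = babysit(lambda: next(statuses), lambda: fixes.append("fix"),
                 interval=0)
print(result, len(fixes))  # → green 2
```

The `max_polls` cap is the important part in practice; without it, a fix loop that never converges just burns tokens in the background.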
I do this with our AI PR review checks. We have AI review every PR and every commit to a PR... which can cause long-running commit<>fix loops.
So my agent just listens for green checks and no PR comments and loops until those conditions are met.
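The exit condition here is just a conjunction the agent re-evaluates each poll: all checks green and no outstanding PR comments. A small sketch (the input shapes are assumptions; in practice they would come from something like the GitHub CLI or API):

```python
def conditions_met(check_conclusions, review_comments):
    """True once the agent can stop looping: every CI check succeeded
    and the reviewer (human or AI) left no outstanding comments."""
    # No checks reported yet counts as not done, not as success.
    all_green = bool(check_conclusions) and all(
        c == "success" for c in check_conclusions)
    return all_green and not review_comments

print(conditions_met(["success", "success"], []))        # → True
print(conditions_met(["success", "failure"], []))        # → False
print(conditions_met(["success"], ["nit: rename var"]))  # → False
```

Treating "no checks yet" as not-done matters: right after a push, the checks list is briefly empty, and a naive `all([])` would read that as green.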
It is possible. I tell the agent to use a CLI app, add a timer, and check the status once in a while. Especially useful when something has a long wait. Also, if it can run some validators / the same tools locally, it is much faster.
It might tend to deviate and waste time, so it needs guiding once in a while; you have to check what it is spewing out and point it in the right direction.
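Running the validators locally is just shelling out to the same command CI would run and reading the exit code. A sketch, with stand-in commands (the real call would be the project's actual linter or test runner, e.g. `make test`):

```python
import subprocess
import sys

def local_check(cmd):
    """Run a validator locally and report pass/fail immediately,
    instead of waiting on the remote pipeline."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0  # exit code 0 means the check passed

# Stand-in commands using the Python interpreter itself:
ok = local_check([sys.executable, "-c", "pass"])
bad = local_check([sys.executable, "-c", "raise SystemExit(1)"])
print(ok, bad)  # → True False
```

`capture_output=True` also gives the agent `result.stdout`/`result.stderr` to feed back into a fix attempt when the check fails.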
I treat the low-level tasks as building blocks. You need a grasp of what is possible with them, but you do not need to remember the exact byte order and syntax. I think the idea is that you should structure your workflow in a deterministic way and just use Claude/an LLM as the interface. It is much easier and more enjoyable to work in a high-level language, where you give pointers to building blocks, give directions, and say a hard no when you see things deviating.
If I had to write the code myself, it would take around 8 hours of constant writing to get around 1k LoC. For tricky FUSE-level stuff, I might need to spend 3 weeks for 10 LoC. Very easy to burn out and build up pain.