
Comment by matt3210

2 days ago

Can you show some examples? I feel like there would be streams or YouTube Let's Plays on this if it was working well.

I would like to see it as well. It seems to me that everybody is only selling shovels, but nobody has seen gold yet. :)

The real secret to agent productivity is letting go of your understanding of the code and trusting the AI to generate the proper thing. Very pro-agent devs like ghuntley will all say this.

And it makes sense. For most coding problems, the challenge isn’t writing code. Once you know what to write, typing the code is a drop in the bucket. AI is still very useful, but if you really wanna go fast you have to give up on your understanding. I’ve yet to see this work well outside of blog posts, tweets, boardroom discussions, etc.

  • > The real secret to agent productivity is letting go of your understanding of the code and trusting the AI to generate the proper thing

    The few times I've done that, the agent eventually faced a problem/bug it couldn't solve and I had to go and read the entire codebase myself.

    Then I found several subtle bugs (like writing private keys to disk despite an explicit instruction not to), and eventually ended up refactoring most of it.

    It does have value for coming up with boilerplate code that I then tweak.

  • That's just irresponsible advice. There is so little actual evidence of this technology being able to produce high-quality, maintainable code that asking us to trust it blindly is borderline snake-oil peddling.

  • I don’t see how I would feel comfortable pushing the current output of LLMs into high-stakes production (think SLAs, SRE).

    Understanding the code in these situations is more important than the code/feature existing.

    • You can use an agent while still understanding the code it generates in detail. In high stakes areas, I go through it line by line and symbol by symbol. And I rarely accept the first attempt. It’s not very different from continually refining your own code until it meets the bar for robustness.

      Agents make mistakes which need to be corrected, but they also point out edge cases you haven’t thought of.


    • I agree and am the same. Using them to enhance my knowledge, as well as autocomplete on steroids, is the sweet spot. It's much easier to review code if I'm “writing” it line by line.

      I think the reality is that a lot of code out there doesn’t need to be good, so many people benefit from agents, etc.

  • > The real secret to agent productivity is letting go of your understanding of the code

    This is negligence; it's your job to understand the system you're building.

  • Not to burst your bubble, but I've seen agents expose Stripe credentials by hardcoding them as plain text into a React frontend app (sketch below). So no, kids, do not "let go" of code understanding, unless you want to appear as the next story along the lines of "AI dropped my production database".
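
    (A rough, hypothetical sketch of the failure mode described above; the key value and env var name are made up for illustration. Anything compiled into a React bundle is readable by every visitor, so a hardcoded secret key is effectively published.)

        // A React bundle ships to every visitor, so anything hardcoded in
        // source is effectively public.
        import { loadStripe } from "@stripe/stripe-js";

        // BAD: a Stripe secret key (sk_...) pasted into frontend code can be
        // read straight out of the deployed JS bundle.
        const STRIPE_SECRET_KEY = "sk_live_example_do_not_do_this";

        // OK: only the publishable key (pk_...) belongs in the browser,
        // ideally injected from a build-time env var rather than inlined.
        // Privileged calls (charges, refunds) go through your own backend,
        // which holds the secret key and talks to Stripe server-side.
        const stripePromise = loadStripe(
          process.env.REACT_APP_STRIPE_PUBLISHABLE_KEY ?? ""
        );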

A lot of that would be people working on proprietary code, I guess. And most of the people I know who are doing this are building stuff, not streaming or making videos. But I'm sure there must be content out there—none of this is a secret. There are probably engineers working on open source stuff with these techniques who are sharing it somewhere.