Comment by xwowsersx

5 days ago

I think one of the reasons "coding with AI" conversations feel so unproductive, or at least vague, to me is that people aren't talking about the same thing. For some, it means "vibe coding" ... tossing quick prompts into something like Cursor, banging out snippets, and hoping it runs. For others, it's using AI like a rubber duck: explaining problems, asking clarifying questions, maybe pasting in a few snippets. And then there's the more involved mode, where you're having a sustained back-and-forth with multiple iterations and refinements. Without recognizing those distinctions, the debate tends to talk past itself.

For me, anything remotely resembling a "superpower" with AI starts with doing a lot of heavy lifting upfront. I spend significant time preparing the right context, feeding it to the model with care, and asking very targeted questions. I'll bounce ideas back and forth until we've landed on a clear approach. Then I'll tell the model exactly how I want the code structured, and use it to extend that pattern into new functionality. In that mode, I'm still the one driving the design and owning the understanding...AI just accelerates the repetitive work.

In the end, I think the most productive mindset is to treat your prompt as the main artifact of value, the same way source code is the real asset and a compiled binary is just a byproduct. A prompt that works reliably requires a high degree of rigor and precision -- the kind of thinking we should be doing anyway, even without AI. Measure twice, cut once.

If you start lazy, yes...AI will only make you lazier. If you start with discipline and clarity, it can amplify you. And those are traits you want to have when you're doing software development anyway, even if you're not using AI.

Just my experience and my 2c.

Have you quantified all of this work in a way that demonstrates you save time vs just writing the code yourself?

  • Just yesterday I gave gemini code a git worktree of the system I'm building at work. (Corp approved yadda yadda).

    Can't remember exactly, but the prompt was something like "evaluate the codebase and suggest any improvements. specifically on the <nameofsystem> system"

    Then I tabbed out and did other stuff

    Came back a bit later, checked out its ramblings. It misunderstood the whole system completely and tried to add a recursive system that wasn't even close to what it was supposed to be.

    BUT it had noticed an error message that just reported the index where parsing failed on some user input, like "error at index (10)", which is completely useless for humans. But that's what the parser library gives us, so it's been there for a while.

    It suggested a function that takes the input, inserts a marker at the index given by the error message, and shows clearly which bit of the input was wrong (something like the sketch at the end of this comment).

    Could I have done this myself? Yes.

    Would I have bothered? No, I have actual features to add at this point.

    Was it useful? Definitely. There was maybe 5 minutes of active work on my part and I got a nice improvement out of it.

    And this wasn't the only instance.

    Even the misunderstanding could've been avoided if I'd given the agent better documentation on what everything does and where things live.
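
    For the curious, here's a minimal sketch of the kind of helper it proposed. The function name and the assumption that the parser's index is 0-based are mine, not from the agent's actual output:

    ```python
    def mark_error(user_input: str, index: int) -> str:
        """Render a parse error by pointing a caret at the failing index.

        Turns an opaque message like "error at index (10)" into something
        a human can act on. Assumes `index` is 0-based.
        """
        # Clamp so a bogus index from the parser can't break the formatter.
        index = max(0, min(index, len(user_input)))
        return f"{user_input}\n{' ' * index}^-- parse error here"


    print(mark_error("1,2,3,4,5,x,7", 10))
    # 1,2,3,4,5,x,7
    #           ^-- parse error here
    ```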