Comment by xandrius
9 hours ago
Then you're using it more towards vibe coding than AI-assisted coding: I use AI to write the stuff the way I want it to be written. I give it information about how to structure files, coding style and the logic flow.
Then I spend time reading each file change and giving feedback on things I'd do differently. It vastly saves me time, and the result is very close to, or even better than, what I would have written.
If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.
AI assisted coding makes you dumber full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain. AI will do that for you. Is what it produced correct? Well who knows? I didn't actually think about it. As current gen seniors brains atrophy over the next few years the scarier thing is that juniors won't even be learning the fundamentals because it is too easy to let AI handle it.
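The "is what it produced correct? who knows?" worry above has a cheap mechanical answer: treat an AI-generated regex as untrusted input and pin it down with assertions before using it. A minimal sketch, using a hypothetical ISO-date pattern as the stand-in for whatever the AI hands back:

```python
import re

# Hypothetical example: an AI-suggested pattern for ISO dates (YYYY-MM-DD).
# The point is not this particular pattern but checking it instead of trusting it.
iso_date = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# Inputs the pattern should accept...
assert iso_date.match("2024-01-31")
# ...and inputs it should reject.
assert not iso_date.match("2024-1-31")    # single-digit month
assert not iso_date.match("31-01-2024")   # wrong field order
# A known blind spot worth documenting: the pattern validates shape,
# not calendar validity, so out-of-range dates still pass.
assert iso_date.match("2024-13-99")
```

Writing the rejection cases is where the brain-engagement happens: it forces you to state what the pattern must *not* match, which is exactly the thinking the comment says gets skipped.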
Strongly disagree. If the complexity of your work is the software development itself, then your work is not very complex to begin with.
It has always been extremely annoying to fight with people who mistake the ability of building or engaging with complicated systems (like your regex) with competency.
I work in building AI for a very complex application, and I used to be in the top 0.1% of Python programmers (by one metric) at my previous FAANG job, and Claude has completely removed any barriers I have between thinking and achieving. I have achieved internal SOTA for my company, alone, in 1 week, doing something that previously would have taken me months of work. Did I have to check that the AI did everything correctly? Sure. But I did that after saving months of implementation time so it was very worth it.
We're now in the age of being ideas-bound instead of implementation-bound.
What was the metric?
I agree. In the beginning, when I was starting out, I let the AI do all of the work and merely verified that it did what I wanted, but then I started running into token limits. In the first two weeks I was honestly just looking forward to the limit refreshing. The low effort made it feel like writing code without the agent would be a waste of my time.
By week three the overall structure of the code base was done, but the actual implementation was lacking. Whenever I ran out of tokens I just started programming by hand again. As you keep doing this, the code base becomes ever more familiar, until you reach a point where you tear down the AI scaffolding in the places where it is lacking and keep it where it makes no difference.
I agree that being further along the Vibe end of the spectrum is the issue. Some of the other ways I use Claude don't have the same problems.
> If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.
The problem is I can explain it. But it's rote and not malleable. I didn't do the work to prove it to myself. Its primary form is on the page, not in my head, as it were.
I'm on the same path as you are it seems. I used to be able to explain every single variable name in a PR. I took a lot of pride in the structure of the code and the tests I wrote had strategy and tactics.
I still wrote bugs. I'd bet that my bugs/LoC has remained static if not decreased with AI usage.
What I do see is more bugs, because the LoC denominator has increased.
What I align myself towards is that becoming senior was never about knowing the entire standard library, it was about knowing when to use the standard library. I spent a decade building Taste by butting my head into walls. This new AI thing just requires more Taste. When to point Claude towards a bug report and tell it to auto-merge a PR and when to walk through code-gen function by function.
> I can explain it. But it's rote and not malleable.
The AI can help with that too. Ask it "How would one think about this issue, to prove that what was done here is correct?" and it will come up with something to help you ground that understanding intuitively.
It's a spectrum and we don't have clear notches on the ruler letting us know when we're confidently steering the model and when we've wandered into vibe coding. For me, this position is easy to take when I am feeling well and am not feeling pressured to produce in a fixed (and likely short) time frame.
It also doesn't help that Claude ends every recommendation with "Would you like me to go ahead and do that for you?" Eventually people get tired and it's all too easy to just nod and say "yes".
That is indeed a very annoying part of many AI models. I wish I could turn it off.