
Comment by DavidPiper

9 hours ago

I've just started a new role as a senior SWE after 5 months off. I've been using Claude a bit in my time off; it works really well. But now that I've started using it professionally, I keep running into a specific problem: I have nothing to hold onto in my own mind.

How this plays out:

I use Claude to write some moderately complex code and raise a PR. Someone asks me to change something. I look at the review and think, yeah, that makes sense, I missed that and Claude missed that. The code works, but it's not quite right. I'll make some changes.

Except I can't.

For me, it turns out having decisions made for you and fed to you is not the same as making the decisions and moving the code from your brain to your hands yourself. Certainly every decision made was fine: I reviewed Claude's output, got it to ask questions, answered them, and it got everything right. I reviewed its code before I raised the PR. Everything looked fine within the bounds of my knowledge, and this review was simply something I didn't know about.

But I didn't make any of those decisions. And when I have to come back to the code to make updates - perhaps tomorrow - I have nothing to grab onto in my mind. Nothing is in my own mental cache. I know what decisions were made, but I merely checked them, I didn't decide them. I know where the code was written, but I merely verified it, I didn't write it.

And so I suffer an immediate and extreme slow-down, basically re-doing all of Claude's work in my mind to reach a point where I can make manual changes correctly.

But wait, I could just use Claude for this! For now I don't, because I've seen where that leads: just a few moments ago. Using Claude is exactly what made things significantly slower whenever I need to use my own knowledge and skills.

I'm still figuring out whether this problem is transient (because this is a brand new system that I don't have years of experience with), or whether it will actually be a hard blocker to me using Claude long-term. Assuming I want to be at my new workplace for many years and be successful, it will cost me a lot in time and knowledge to NOT build the castle in the sky myself.

Then you're using it closer to vibe coding than AI-assisted coding. I use AI to write the code the way I want it written: I give it information about how to structure files, the coding style, and the logic flow.

Then I spend time reading each file change and giving feedback on things I'd do differently. It saves me a vast amount of time, and the result is very close to, or even better than, what I would have written myself.

If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.

  • AI-assisted coding makes you dumber, full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain; AI will do that for you. Is what it produced correct? Well, who knows? I didn't actually think about it. As current-gen seniors' brains atrophy over the next few years, the scarier thing is that juniors won't even be learning the fundamentals, because it's too easy to let AI handle it.

    • Strongly disagree. If the complexity of your work is the software development itself, then your work was not very complex to begin with.

      It has always been extremely annoying to fight with people who mistake the ability of building or engaging with complicated systems (like your regex) with competency.

      I work in building AI for a very complex application, and I used to be in the top 0.1% of Python programmers (by one metric) at my previous FAANG job, and Claude has completely removed any barriers I have between thinking and achieving. I have achieved internal SOTA for my company, alone, in 1 week, doing something that previously would have taken me months of work. Did I have to check that the AI did everything correctly? Sure. But I did that after saving months of implementation time so it was very worth it.

      We're now in the age of being ideas-bound instead of implementation-bound.


    • I agree. In the beginning, I let the AI do all of the work and merely verified that it did what I wanted, but then I started running into token limits. For the first two weeks I was honestly just waiting for the limit to refresh; the low effort made it feel like writing code without the agent would be a waste of my time.

      By week three the overall structure of the code base was done, but the actual implementation was lacking. Whenever I ran out of tokens, I just started programming by hand again. As you keep doing this, the code base becomes ever more familiar, until you reach a point where you tear down the AI scaffolding where it is lacking and keep it where it makes no difference.

  • I agree that being further along the Vibe end of the spectrum is the issue. Some of the other ways I use Claude don't have the same problems.

    > If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.

    The problem is I can explain it. But it's rote and not malleable. I didn't do the work to prove it to myself. Its primary form is on the page, not in my head, as it were.

    • I'm on the same path as you, it seems. I used to be able to explain every single variable name in a PR. I took a lot of pride in the structure of the code, and the tests I wrote had strategy and tactics.

      I still wrote bugs. I'd bet that my bugs/LoC has remained static if not decreased with AI usage.

      What I do see is more bugs, because the LoC denominator has increased.

      What I align myself towards is that becoming senior was never about knowing the entire standard library, it was about knowing when to use the standard library. I spent a decade building Taste by butting my head into walls. This new AI thing just requires more Taste. When to point Claude towards a bug report and tell it to auto-merge a PR and when to walk through code-gen function by function.

    • > I can explain it. But it's rote and not malleable.

      The AI can help with that too. Ask it "How would one think about this issue, to prove that what was done here is correct?" and it will come up with something to help you ground that understanding intuitively.

  • It's a spectrum and we don't have clear notches on the ruler letting us know when we're confidently steering the model and when we've wandered into vibe coding. For me, this position is easy to take when I am feeling well and am not feeling pressured to produce in a fixed (and likely short) time frame.

    It also doesn't help that Claude ends every recommendation with "Would you like me to go ahead and do that for you?" Eventually people get tired, and it's all too easy to just nod and say "yes".

    • That is indeed a very annoying part of many AI models. I wish I could turn it off.

For me it seems more or less similar to reviewing others' changes to a codebase. In any large organization codebase, most of the changes won't be our own.

This is my primary personal concern. I think it could be a silent psychological landmine going off way too late.

In a living codebase you spend long stretches learning how it works. It's like reading a book that doesn't match your taste, but you eventually need to understand and edit it, so you push through. That process is extremely valuable: you get familiar with the codebase, you map it out in your head, you imagine big red alerts on the problematic stuff. Over time you become more and more efficient at editing and refactoring the code.

The short-term state of AI is pretty much as you outlined. You get a high-level bug or task, you rephrase it into proper technical instructions, and you let a coding agent fill in the code. Yell a few times. Fix problems by hand.

But you are already "detached" from the codebase; each time the agent proves too stupid, you have to learn it the hard way. You are less efficient, at least in this phase, and your overall understanding of the codebase will degrade over time. Once serious data corruption hits the company, it will take weeks to figure out.

I think this psychological detachment could play out really badly for the whole industry. If we get stuck in this weird phase for too long, the whole tech talent pool might implode. (Is anyone working on plumbing LLMs?)

  • I believe the detachment is exacerbated by the fact that others are simultaneously modifying the codebase at speeds that don't allow you to keep up. Depending on how codebase boundaries and ownership are defined, this directly impacts your ability to reason about the whole system and therefore to influence its direction.

Ask Claude to explain the code in depth for you. It's a language model, it's great at taking in obscure code and writing up explanations of how it works in plain English.

You can do this during the change phase itself, of course. Just ask "How would one plan this change to the codebase? Could you explain in depth why?" If you're expected to be thoroughly familiar with that code, it makes no sense to skip that step.

  • This is like asking Claude to explain some aspect of physics to you. It'll 'feel' like you understand, but in order to really understand you have to work those annoying problems.

    Same with anything. You can read about how to meditate, cook, sew, whatever. But if you only read about something, your mental model is hollow and purely conceptual, having never had to interact with actual reality. Your brain has to work through the problems.

    • > ...in order to really understand you have to work those annoying problems.

      GP says that they have to come back tomorrow and edit the code to fix something. That's a verification step: if you can do that (even with some effort), you understand why the AI did what it did. This is not some completely new domain where your point would clearly apply; it's just a codebase that GP is supposed to be familiar with already!

  • By working this way you're proactively de-skilling yourself. Do it long enough and you're replaceable by anyone who can type a prompt.
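
One commenter's regex example upthread is a useful concrete case: whether a human or an AI wrote the pattern, the cheapest way to actually engage your brain is to pin the pattern down with test cases you chose yourself. A minimal sketch in Python; the ISO-date pattern and the cases are illustrative assumptions, not taken from any real PR:

```python
import re

# Illustrative AI-suggested-style pattern for ISO dates (YYYY-MM-DD).
# The point is not the pattern itself, but that it only counts as
# "reviewed" once checked against cases the reviewer deliberately picked.
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Cases chosen to probe edges an unexamined regex often gets wrong.
assert ISO_DATE.match("2024-02-29")      # well-formed date: accepted
assert not ISO_DATE.match("2024-13-01")  # month out of range: rejected
assert not ISO_DATE.match("2024-1-01")   # missing zero padding: rejected
assert not ISO_DATE.match("20240101")    # no separators: rejected
```

Writing the cases, rather than the pattern, is the part that leaves something in your own mental cache.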