
Comment by heartbreak

13 hours ago

It remains unclear to me why my ability to read and review code (the majority of my job for years now) will atrophy if I continue doing it while writing even less code than I was before.

If my ability to write code somehow atrophies because I stop doing it, does that matter if I continue with the architecture and strategy around coding?

The act of writing code by hand seems to be on a trajectory of irrelevance, so as long as I maintain my ability to reason about code (both by continuing to read it and instruct tools to write it), what’s the issue?

Edit to add: the vast majority of the code I’ve worked on in my career was not written by me. A significant portion of it was not written by someone still employed by my employer. I think that’s true for a lot of us, and we all made it work. And we made it work without modern coding assistants helping out. I think we’ll be fine.

"so as long as I maintain my ability to reason about code…what’s the issue?"

It seems like that is the open question. The article suggests that people don't maintain this ability:

"The AI group scored 17% lower on conceptual understanding, debugging, and code reading. The largest gap was in debugging, the exact skill you need to catch what AI gets wrong. One hour of passive AI-assisted work produced measurable skill erosion."

From my own (anecdotal) experience, I am seeing a lot more cases of what I call developer bullshit, where developers can't even talk coherently about the work they are vibe-coding. Management doesn't notice, since it's all techno-babble to them and sounds fancy, but other developers do.

  • This used to be the most embarrassing thing that could happen: a team member asks during a PR why you did something a certain way, and you can't provide an answer. That seems to be becoming the norm now.

    • It also used to be an indicator that potentially someone was outsourcing their work overseas.

      Edit: I once had a case where, about once a month, another developer would ask me about workplace setup. I mentioned it to someone and was told they were perhaps the English speaker of the group. Upon further investigation, that seemed to be the case.

  • What happens when more and more people cannot explain their PRs? They already use AI to generate the "explanation" as well and then ping you. Ask them questions, and they delegate to AI again and copy-paste whatever it answers.

  • The problem is that that is an incorrect interpretation of the study. The entire task of that study was specifically to learn a brand-new asynchronous library the participants had no prior experience with. On average, those who used AI failed to learn how to use, explain, and debug that async library as well as those who hadn't used AI, but that doesn't mean they lost pre-existing skills. It's literally in the study title: "skill formation", not skill practice, maintenance, or deterioration.

    I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI both provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who asked the AI questions about the code after it was generated. That seems to indicate that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing code bases, you're not going to see this problem.

    I'm so extremely tired of people like you engaging in this moral panic by completely misinterpreting these studies.

    • Point taken. Still, isn’t learning a new library, language, or platform a fundamental part of being a software developer? Haven’t we all complained at some point about companies hiring React developers, because we all know the real skill is the ability to pick up new things? And to be clear, this isn’t moral panic; it’s a concern that we may end up in a future where people don’t know how systems work anymore and we depend on two or three companies and their data-center moats to maintain any technology.


  • Per your last paragraph, I also think we are in an awkward middle period where developers are embarrassed to admit how much code is vibes with very little review before they submit.

    The embarrassment is understandable. It feels wrong, because in many ways it is wrong.

    The only way I’ve made this feel any better is by using it on a non-critical internal tool. I can confidently say, “I didn’t write any of this code, because it’s a quality-of-life tool that only lives on developer machines and is not required at any point in our workflow.”

    I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real.

    Will we need those senior engineers if AI keeps getting better? I don’t know. Maybe one day AI systems will simply be trusted to untangle complex architectural problems.

    If it weren’t for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might wish I had been born earlier.

  > The act of writing code by hand seems to be on a trajectory of irrelevance

It does not. English (or any human language) is an awful language to write specifications in, because it is not as precise as code. Each time you "compile" your prompt into a program, LLMs spit out something a little bit different. How is that a good thing?
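To make the nondeterminism concrete, here is a toy sketch of temperature sampling. The scores and function are made up for illustration; this is not any real model's API, just the standard argmax-vs-softmax-sampling distinction:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from unnormalized scores (a toy model of decoding).

    temperature == 0 -> greedy argmax: the same prompt always yields the same token.
    temperature > 0  -> softmax sampling: the same prompt can yield different tokens.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    weights = [math.exp(score / temperature) for score in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy scores for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

# Greedy decoding picks the same token on every run...
greedy = {sample_token(logits, 0, random.Random(seed)) for seed in range(50)}
# ...while sampling at a nonzero temperature does not.
sampled = {sample_token(logits, 1.5, random.Random(seed)) for seed in range(50)}
```

Hosted models typically sample at a nonzero temperature (and even at temperature 0 are not guaranteed bit-identical across runs), which is where the "compiles differently each time" complaint comes from.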

  > so as long as I maintain my ability to reason about code (both by continuing to read it and instruct tools to write it), what’s the issue?

The post mentions this. You need to write code yourself to keep your review skill sharp (knowing what's good and what's bad). Why is it that if you want to learn something, you're better off getting a pen and paper and writing notes by hand, like in those ancient times? You'd think that in 2026 you could grab an iPad, watch some videos, and become an expert? No. You need to get your hands dirty. By writing some damn code.

  • > Each time you "compile" your prompt into a program, LLMs spit up something a little bit different. How is it a good thing?

    Because that’s not how it works. How can we have a discussion about this topic if we don’t have a mutual understanding of how the tools even work?

    The code is not replaced by English prompts. The code still exists.

    • Yes, it exists, but are you going to edit it by hand if you didn't write it in the first place, or would you rather throw another prompt at it to update it? People tend to do the latter, thinking of the code as some generated artifact, like object files.

    • > The code is not replaced by English prompts. The code still exists

      If you can guarantee that it does what you say it does, then all is okay. The core issue since the advent of ChatGPT has always been reliability: whether the end result, the code, actually addresses the change request.

      It turned out that you need to be an expert programmer to vet the code as well as supervise its evolution, whatever the tool used to write it.

Even before AI, I’ve witnessed at Google plenty of L6 and L7 software engineers atrophy. They stop writing code, start reviewing code, until they find that their code reviews catch fewer issues than a junior engineer’s reviews. They have become accustomed to thinking only at a high-level, and when met with low-level details they can’t tell good from bad any more. Their coding skills, both reading and writing, have atrophied.

  • Do they also stop providing value to Google as a result?

    I don’t get paid to write code, and you probably don’t either.

    • > Do they also stop providing value to Google as a result?

      In the context of a software engineer, yes obviously?

      > I don’t get paid to write code, and you probably don’t either.

      I feel like you're rejecting the premise of the argument. You're talking about becoming a manager, as if that track were somehow relevant to software engineers. I used to be a nurse; I'm not anymore. My skills have definitely atrophied, and I would now be a shitty and dangerous nurse. How does that apply to my skills at software engineering? When you stop being a software engineer, it's expected that your skills at interacting with code will fall away. But the article you're arguing against isn't written for nurses, and it equally isn't written for engineering managers.

How do we maintain best practices when the compiler outputs a different result for the same spec at any given time? How do we obtain reproducible builds? Do we pin to a specific version of our compiler (i.e., a snapshot of the model; is this possible anywhere except locally right now?) and rigorously test changes after any updates in our "toolchain"? How do we have control over our "toolchain" (again, apart from locally), especially when that "toolchain" can, for all its users simultaneously, fold to political pressure from state regimes? And if the code generated by LLMs is the build artifact, why is it now okay to check the build artifact into source control?
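One partial, hedged answer to the pinning question might look like the sketch below: treat the model snapshot and every generation parameter as part of the toolchain, and record a fingerprint next to the generated code. The model name, `seed`, and parameter names here are illustrative assumptions, not any particular provider's API, and a hosted model still can't be made bit-for-bit reproducible:

```python
import hashlib
import json

# Pin every knob that affects generation, the way you'd pin a compiler version.
# All names and values here are illustrative, not a real provider's API.
PINNED_CODEGEN_CONFIG = {
    "model": "example-model-2024-01-01",  # a dated snapshot, not a floating alias
    "temperature": 0.0,                   # greedy decoding, where the API honors it
    "seed": 42,                           # best-effort reproducibility, if supported
    "max_tokens": 2048,
}

def config_fingerprint(config: dict) -> str:
    """Stable fingerprint to record alongside generated code, so a later
    regeneration can be traced back to the exact 'toolchain' that produced it."""
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]
```

Checking such a fingerprint into source control next to the artifact at least makes "which compiler built this?" answerable after the fact, even if it cannot make the build reproducible.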

There may come a day when we, as an industry, decide that simply doing it by hand is more expedient when it comes to resolving urgent production issues. We may not know the pain we are causing ourselves until well into the future when it has become too much to bear without a visit to the proverbial doctor.