Comment by jb3689
3 days ago
100% agree. I am interested in seeing how this will change how I work. I'm finding that I'm now more concerned with how I can keep the AI busy and how I can keep the quality of its outputs high. I believe it has a lot to do with how my projects are structured and documented. There are also some mundane issues (e.g. structuring projects so that merge conflicts don't become bottlenecks).
I expect that in a year my relationship with AI will be more like that of a TL, working mostly at the requirements and task-definition layer and managing the work of several agents across parallel workstreams. I expect new development toolchains to start reflecting this too, with less emphasis on IDEs and more on efficient task and project management.
I think the "missed growth" of junior devs is overblown though. Did the widespread adoption of higher-level languages really hurt the careers of developers who missed out on the days when we had to do explicit memory management? We're just shifting the skillset and removing unnecessary overhead. We could argue endlessly about technical depth being important, but in my experience it has never been truly necessary for career success. We'll mitigate these issues the same way we do with higher-level languages: by focusing first on the properties and invariants of the solutions, outside-in.
An important skill for software developers is the ability to reason about what the effects of their code will be over all possible conditions and inputs, as opposed to trial and error limited to specific inputs, or (as is the case with non-deterministic LLMs) limited to single executions. This skill is independent of whether you are coding in assembly or using higher-level languages and tooling. Using LLMs exactly doesn’t train that skill, because the effective unpredictability of their results largely prevents any but the vaguest logical reasoning about the connection between the prompt and the specific output.
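To make that concrete, here is a minimal, made-up sketch (the function and inputs are invented for illustration): a single run on one input can look fine, while reasoning about the whole input domain is what actually exposes the failure mode.

```python
# Hypothetical example: trial-and-error on one input vs. reasoning over all inputs.

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# A single trial run looks fine:
print(average([2, 4, 6]))  # 4.0

# Reasoning over all possible inputs reveals what that one execution hides:
#   average([])  ->  ZeroDivisionError, because the empty list was never exercised.
# The fix comes from thinking about the input domain, not from re-running the
# same example:

def safe_average(values):
    """Mean that is defined for every input, including the empty list."""
    if not values:
        return 0.0  # or raise ValueError, depending on the contract you choose
    return sum(values) / len(values)

print(safe_average([]))  # 0.0
```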
> Using LLMs exactly doesn’t train that skill
I actually think this is one skill LLMs _do_ train, albeit for an entirely different reason. Claude is fairly bad at considering edge cases in my experience, so I generally have to prompt for them specifically.
Even for entirely “vibe-coded” apps that I could theoretically have created without knowing any programming syntax, I was successful only because I knew about the possible edge cases.