Comment by augusteo
5 hours ago
The limiting factor at work isn't writing code anymore. It's deciding what to build and catching when things go sideways.
We've been running agent workflows for a while now. The pattern that works: treat agents like junior team members. Clear scope, explicit success criteria, checkpoints to review output. The skills that matter are the same ones that make someone a good manager of people.
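That pattern (clear scope, explicit success criteria, a checkpoint to review output) can be sketched in a few lines. This is a toy illustration, not any particular framework's API; `run_agent` is a stand-in stub for whatever agent you actually invoke.

```python
# Delegation pattern sketch: scope the task, define success criteria up front,
# and checkpoint the output before accepting it.

def run_agent(task: str) -> str:
    # Stub: a real implementation would call Claude Code, an API, etc.
    return "def slugify(s):\n    return s.lower().replace(' ', '-')"

def delegate(task: str, success_criteria: list) -> str:
    """Run the agent, then checkpoint: every criterion must pass."""
    output = run_agent(task)
    failures = [name for name, check in success_criteria if not check(output)]
    if failures:
        raise ValueError(f"Checkpoint failed: {failures}")
    return output

criteria = [
    ("returns code", lambda out: "def " in out),
    ("no TODOs left", lambda out: "TODO" not in out),
]

result = delegate("Write a slugify helper", criteria)
```

The point is that the criteria are written down before the agent runs - the same discipline you'd apply when handing work to a junior team member.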
pglevy is right that many managers aren't good at this. But that's always been true. The difference now is that the feedback loop is faster. Bad delegation to an agent fails in minutes, not weeks. You learn quickly whether your instructions were clear.
The uncomfortable part: if your value was being the person who could grind through tedious work, that's no longer a moat. Orchestration and judgment are what's left.
I've been saying it this whole time; it's not the engineers who need to be concerned with being replaced - it's anyone involved in the busywork cycle. This includes those who do busywork (grinding through tedium) and those who create it (MBAs, without apologies to the author).
Here's the thing - that feedback loop isn't a magic lamp. Actually understanding why an agent is failing (when it does) takes knowledge of the problem space. Actually guiding that feedback loop so it optimally handles tasks - segmenting work and composing agentic cores to focus on the right things with the right priority of decision making - that's something you need to be curious about the internals for. Engineering, basically.
One thing I've seen in using these models to write code is that they're myopic and shortsighted - they do whatever it takes to fix the problem right in front of them when asked. This causes a cascading failure mode where the code becomes a patchwork of one-off fixes and hardcoded solutions for problems that not only recur, they compound and get exponentially worse. You'd only know this if you could spot it when the model says something like "I see the problem, this server configuration is blocking port 80 and that's blocking my test probes. Let me open that port in the firewall".
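That failure mode (the agent "fixing" a failing test by opening a firewall port) is easier to catch if proposed actions pass through a scope check before they run. A toy sketch; the patterns below are illustrative assumptions, not a real allowlist:

```python
# Flag agent-proposed shell commands that reach outside the scope of a
# code-fixing task (firewall changes, blanket permissions, destructive cleanup).

import re

OUT_OF_SCOPE = [
    r"\bufw\b", r"\biptables\b",   # firewall changes
    r"\bchmod\s+777\b",            # blanket permission "fixes"
    r"rm\s+-rf\s+/",               # destructive cleanup
]

def review_action(command: str) -> bool:
    """Return True if the proposed command looks in scope for a code task."""
    return not any(re.search(p, command) for p in OUT_OF_SCOPE)

ok = review_action("pytest tests/test_probe.py")
blocked = review_action("sudo ufw allow 80")
```

It won't catch everything, but it turns "you'd only know if you could spot it" into something a reviewer gets pinged about.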
> it's not the engineers who need to be concerned with being replaced - it's anyone involved in the busywork cycle
This assumes there aren't "engineers" involved in the busywork cycle, which I'm not sure is accurate.
Depends on how you define it, I suppose. It's also not black and white - everyone has some amount of busywork, I'm sure.
This is verifiably false.
You still need to do most of the grunt work, verifying and organizing the code; it's just that you're not editing the code directly. Speed of typing out code was hardly the bottleneck.
The bottleneck is visualizing it and then coming up with a way to figure out bugs or add features.
I've tried a bunch of agents, none of them can reasonably conduct a good architectural change in a medium size codebase.
>medium size codebase
There's a paper that came out in the latter half of last year about this. I wish I'd kept its name/publisher around, but the synopsis is: once you reach a certain amount of complexity in the task you're trying to achieve, you run out of context to process it. Compressing the context still loses enough detail that the AI has to reconstitute those details on the next run, again running out of context.
Currently at least, the task has to be breakable into smaller chunks to work properly.
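The constraint above can be made concrete with a toy chunker: if the whole task won't fit in the context window, group the pieces so each batch stays under a token budget. The word-count token estimate is a crude assumption; real tokenizers differ.

```python
# Greedily group work items into chunks that each fit a context budget.

def estimate_tokens(text: str) -> int:
    # Crude proxy: one word ~ one token. Real tokenizers count differently.
    return len(text.split())

def chunk_by_budget(items: list, budget: int) -> list:
    """Group items in order so each chunk stays under the token budget."""
    chunks, current, used = [], [], 0
    for item in items:
        cost = estimate_tokens(item)
        if current and used + cost > budget:
            chunks.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        chunks.append(current)
    return chunks

files = ["a " * 30, "b " * 30, "c " * 30, "d " * 30]  # four 30-token "files"
batches = chunk_by_budget(files, budget=70)
```

Of course, this only works when the task actually decomposes - which is exactly the limitation the paper apparently describes.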
> The skills that matter are the same ones that make someone a good manager of people.
I disagree. Part of being a good manager of (junior) people is teaching them soft skills in addition to technical skills -- how to ask for help, how to do their own research, how to build their own skills autonomously, how to think about requirements creatively, etc.
Clear specifications and validating output are only part of good people management, but they're 100% of good agent management.
It’s teaching them in the first place. You can’t teach an LLM. Writing a heap of AGENTS.md is not teaching. LLMs take it as information, but they don’t learn from it in any non-superficial sense.
With https://code.claude.com/docs/en/skills you kinda can teach it new things. And also, I have little doubt Anthropic reads these, and future AIs might get trained on the most popular recommendations.
Yes, it's a crutch. But maybe the whole approach - NNs that can code, and we don't really know why - is too.
>You can’t teach an LLM.
Actually, you can. Training data is teaching, and so is the way you describe the task, goals, checkpoints, etc.
> The limiting factor at work isn't writing code anymore
Where are y'all working that "writing code" was ever the slow part of the process?
> The uncomfortable part: if your value was being the person who could grind through tedious work, that's no longer a moat. Orchestration and judgment are what's left.
What kind of work do you think people who deal with LLMs everyday are doing? LLMs could maybe take something 60% of the way there. The remaining 40% is horrible tedious work that someone needs to grind through.
Automating part of the grind means that the remaining grind is more fun. You get more payoffs for less work.
Removing all of the fun bits (here is an IDE, let's make something) and some of the grind- but leaving only grind behind- is a worse QoL, at least for me.
> The limiting factor at work isn't writing code anymore
Was it ever? If you don't care about correctness and just want the vibes, then hiring idiots for pennies and telling them to write unlimited code was always an option. Way before "AI" even existed.
And I mean pennies literally. Hell, people will do it for free. Just explain upfront that you only care that the code technically works.
>then hiring idiots for pennies and telling them to write unlimited code was always an option.
OMG, I see you also deal with ______ Bank.
What I have seen in enterprise organizations is enough to turn a man pale and send him to an early grave.
> deciding what to build and catching when things go sideways
I feel like this was always true. Business still moves at the speed of high-level decisions.
> The uncomfortable part: if your value was being the person who could grind through tedious work, that's no longer a moat.
Even when junior devs were copy-pasting from stackoverflow over a decade ago they still had to be accountable for what they did. AI is ultimately a search tool, not a solution builder. We will continue to need junior devs. All devs regardless of experience level still have to push back when requirements are missing or poorly defined. How is picking up this slack and needing to constantly follow up and hold people's hands not "grinding through tedious work"?
AI didn't change anything other than how you find code. I guess it's nice that less technical people can now find it using their plain English ramblings instead of needing to know better keywords? AI has arguably made these search results worse, the need for good docs and examples even more important, and we've all seen how vibecoding goes off the rails.
The best code is still the least you can get away with. The skill devs get paid for has always been making the best choices for the use case, and that's way harder than just "writing code".
>> The limiting factor at work isn't writing code anymore. It's deciding what to build and catching when things go sideways.
Actually I disagree. I've been experimenting with AI a lot, and the limiting factor is marketing. You can build things as fast as you want, but without a reliable and repeatable (and at least somewhat automated) marketing system, you won't get far. This is especially because all marketing channels are flooded with user-generated content (UGC) that is generated by AI.
Recently, I came across Erich Fromm's distinction between "being mode" and "having mode" (AI really explained it best; I'd paste it here but it's somewhat long). In contrast with the parent post, you're looking at it from the "having mode" - how to sell the "product" to someone.
But you can also think what would you want to build (for yourself or someone you know), that would otherwise take a team of people. Coding what used to be a professional app can now be a short hobby project.
I've played with Claude Code Pro only a short while, but I already believe the mode of software production will change to be more accessible to individuals (pro or amateur). It will be similar to the death of music labels.
>and the limiting factor is marketing.
Depends if you're talking about new client acquisition or expansion of existing products in order to assure your client doesn't leave.
The issue I see with this, at least in enterprise, is while we may fix some smaller plates of spaghetti, we're busy building massive tangled pasta apps that do even more.
"writing code" was never the limiting factor and if it was you shouldn't be a developer
Translation: Assimilate or die.
Patently shocked to find this on profile:
> I lead AI & Engineering at Boon AI (Startup building AI for Construction).