Comment by Niko901ch
12 hours ago
AI coding tools are making this problem worse in a subtle way. When an agent can generate a "scalable event-driven architecture" in 5 minutes, the build cost of complexity drops to near zero. But the maintenance cost doesn't.
So now you get Engineer B's output even faster, with even more impressive-sounding abstractions, and the promotion packet writes itself in minutes too. Meanwhile the actual cost - debugging, onboarding, incident response at 3am - stays exactly the same or gets worse, because now nobody fully understands what was generated.
The real test for simplicity has always been: can the next person who touches this code understand it without asking you? AI-generated complexity fails that test spectacularly.
> now nobody fully understands what was generated
To be fair, a lot of the on call people being pulled in at 3am before LLMs existed didn't understand the systems they were supporting very well, either. This will definitely make it worse, though.
I think part of charting a safe career path now involves evaluating how strong any given org's culture of understanding the code and stack is. I definitely do not ever want to be in a position again where no one in the whole place knows how something works while the higher-ups are having a meltdown because something critical broke.
People on call will use AI as well. As long as the first AI left enough documentation and implemented traceability, the diagnosing AI should have an easier time proposing a fix. Ideally, AI would prepare the PR or rollback plan. In a utopia, AI would execute it and recover the system until a human wakes up.
Or at least there is something to chat with about the issue at 3am.
> In a utopia, AI would execute it and recover the system until a human wakes up.
in that utopia, the on-call guy doesn't have a job
This. I've been a sysadmin for a quarter of a century and have professionally written next to no software. I've debugged every system I've had to support at some point though. It's a very different skill set.
True, but I think the implication (as I read it) is that AI may be providing more complex solutions than were needed for the problem and perhaps more complex than a human engineer would have provided.
It's MUCH worse now, not just because of the massive amount of code generated with little or no supervision, but also because of the speed at which these systems grow in function.
I think we'll see a decline of software as a product for this reason. If your job is to solve a problem, and you use AI to generate a tool that solves that problem, or you use money to buy a tool that solves that problem, well then it's still your job to solve that problem regardless of which tool you use.
But given how poorly bought software tends to fit the use case of the person it was bought for... eventually generate-something-custom will start making more and more sense.
If you end up generating something that nobody understands, then when you quit and get a new job, somebody else will probably use your project as context for generating something that suits the way they want to solve that problem. Time will have passed, so the needs will have changed, and they'll end up with something different. They'll also only partially understand it, but the gaps will be in different places this time around. Overall I think it'll be an improvement, because there will be less distance (both in time and along the social graph) between the software's user and its creator, who most of the time will be the same person.
On the other hand, AI coding tools make it relatively easy to set and apply policies that can help with this sort of thing.
I like to have something like the following in AGENTS.md:
## Guiding Principles
- Optimise for long-term maintainability
- KISS
- YAGNI
I wrote something similar in a Claude Code instructions.md: "minimize cyclomatic complexity". What happened next? It generated an 8-line wrapper function called only once from a different file. So, I told it to inline that logic in the caller. The result? One. Line. Of. Code.
So, I asked it to modify its instructions.md file to not repeat that mistake. The result was the new line "Avoid single-use wrapper functions; inline logic at the call site unless reused"
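For illustration only (the names and logic below are hypothetical, not from the actual codebase), the before/after looked roughly like this:

```python
# Hypothetical reconstruction of the pattern, not the real code.

def grant_access(user: dict) -> None:
    print(f"access granted to {user['name']}")

# What the model generated first: a wrapper with exactly one caller,
# originally sitting in a separate module.
def is_adult(user: dict) -> bool:
    """Return True if the user is old enough."""
    return user.get("age", 0) >= 18

def handle_request_with_wrapper(user: dict) -> None:
    if is_adult(user):  # the only call site anywhere
        grant_access(user)

# What it collapsed to after being told to inline: one line of real logic.
def handle_request_inlined(user: dict) -> None:
    if user.get("age", 0) >= 18:
        grant_access(user)

if __name__ == "__main__":
    handle_request_with_wrapper({"name": "alice", "age": 30})
    handle_request_inlined({"name": "alice", "age": 30})
```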
instructions.md is the new intern.
It reminds me a lot of people who take Code Complete too seriously. "Common sense" is not an objective or universal statement unfortunately - plus, speaking for myself, what I consider "common sense" can change on the daily, which is why I can't be trusted adding features to my own codebase long term <_<.
Not sure if you're kidding or not, but to write great maintainable code, you need a lot of understanding that an LLM just doesn't have: history, business context, company culture, etc. Also, I doubt its training data has a lot of good examples of great maintainable code to pull from.
Neither do most humans writing such code. I have seen LLMs generate better code than 90% of the coders I've seen in the last 20 years.
He isn't kidding. I have a directive to write the shortest, least complicated, readable business code and it makes a huge difference
"Be sure to remember software is a sociotechnical system and dont fall prey to the Mechanistic myth"
Maybe a better way to handle "minimize cyclomatic complexity" would be to set an agent in a loop of code metrics, refactor, test and repeat.
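A rough sketch of what that loop could look like, assuming radon for the complexity metric and pytest for the tests; `ask_agent_to_refactor` is a placeholder for however you actually drive your agent (CLI, API, whatever):

```python
import json
import subprocess

MAX_COMPLEXITY = 10   # arbitrary threshold; tune for your codebase
MAX_ROUNDS = 5

def complexity_offenders(path: str) -> list[dict]:
    """Run `radon cc` and return the blocks above the complexity threshold."""
    out = subprocess.run(
        ["radon", "cc", "--json", path],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return [
        {"file": filename, "name": block["name"], "complexity": block["complexity"]}
        for filename, blocks in report.items()
        for block in blocks
        if block["complexity"] > MAX_COMPLEXITY
    ]

def tests_pass() -> bool:
    return subprocess.run(["pytest", "-q"]).returncode == 0

def ask_agent_to_refactor(offenders: list[dict]) -> None:
    """Placeholder: feed the offending functions back to your coding agent."""
    raise NotImplementedError

for _ in range(MAX_ROUNDS):
    offenders = complexity_offenders("src/")
    if not offenders:
        break  # metric satisfied, stop looping
    ask_agent_to_refactor(offenders)
    if not tests_pass():
        print("refactor broke the tests; stop and review by hand")
        break
```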
This is something I keep thinking about while coding with AI, and the same goes for introducing library dependencies for the simplest problems. It's not about how quickly I can get there, but about how I can keep it simple to maintain, not only for myself but for the next AI agent.
Biggest problem is that the next person is me 6 months later :) But even when it's not a next-person problem, it's a question of how much of the design I can keep in my mind at a given time. Ironically, AI has the exact same problem, aka the context window.
Well, in 6 months it's you and a 6-months-smarter LLM.
There's also the operational cost of running whatever is churned out. I wouldn't exactly blame that on AIs, but a large contingent of developers optimize for popular tech stacks and not ease of operations. I don't think that will change just because they start using AI. In my experience the AI won't tell you that you're massively overbuilding something, or that if we did this in C and used PostgreSQL we'd be able to run this on an old Pentium III with 4GB of RAM. If you want Kubernetes and Elasticsearch, you'll get exactly that.
> When an agent can generate a "scalable event-driven architecture" in 5 minutes
Currently they can't. Anyone with a basic understanding of software engineering will find numerous issues with the result of such a prompt within minutes.
But the product manager who asked for it won't realize that.
Unless you hire good TPMs!
Was this comment written by an LLM? It has a lot of the tell-tale signals, and pangram gives it a 100% chance of being written by AI.
It's a bad time to be an altruistic perfectionist, tell you what.
Avoid hands-on tech/team lead positions like hell.
that second line is so underrated
It's not even about perfectionism. Code's value lies in processing data. Bad code does that wrong, and if you have strange code on top of that, you can't correct course. Happy paths are usually the low-hanging fruit. What makes developing software hard is thinking about all the failure and edge cases.
That's the kind of thinking that got us into this mess... scab
AI generators only generate that if you tell them to though - as a developer (especially senior) it's your job to know what you want and tell the AI coding tools that.
The "coding benchmarks" should be based heavily biased on what dependencies they use etc. Reduced characters etc.
The flip side of this is that languages whose major selling point is maintainability just had their value increase dramatically.
Simplicity is a driver for better abstractions. But now with AI, will we even develop new abstractions?
I was in charge of cleaning up a slop codebase by someone who has barely even heard of 'coding' before. Let's just say, it was abstract.
There are different kinds of abstraction. There's abstraction like Jackson Pollock, and there's abstraction like what Dijkstra was suggesting: elegance. Which personally made the article a very weird read for me.
tbh codebases like that predate AI code generators. I had one job where my predecessor was not a very good developer by modern standards, but he was productive... a dangerous combination.
What is the maintenance cost when Opus 6 or whatever is available?
Agree, but I wouldn't say it's subtle; the slop builds up quickly.