I wonder, at the end of this, if it's still worth the risk?
A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
Maybe it's the kind of work I'm doing, or maybe I just suck, but the code to me is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
I second this. This* is the matter against which we form understanding. This here is the work at hand, our own notes, discussions we have with people, the silent walk where our brain kinda processes errors and ideas .. it's always been like this since I was a kid, playing with construction toys. I never ever wanted somebody to play while I wait to evaluate if it fits my desires. Desires that often come from playing.
Outsourcing this to an LLM is similar to an airplane stall .. I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I have no more incentive to think, create, or solve anything.
Still blows my mind how different people approach some fields. I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
I'll push back against this a little bit. I find any type of deliberative thinking to be a forcing function. I've recently been experimenting with writing very detailed specifications and prompts for an LLM to process. I find that as I go through the details, thoughts will occur to me. Things I hadn't thought about in the design will come to me. This is very much the same phenomenon as when I was writing the code by hand. I don't think this is a binary either/or. There are many ways to have a forcing function.
My think/create/solve focus is on making my agentic coding environment produce high quality code with the least cost. Seems like a technical challenge worth playing with.
It probably helps that I have 40 years of experience with producing code the old ways, including using punch cards in middle school and learning BASIC on a computer with no persistent storage when I was ten.
I think I've done enough time in the trenches and deserve to play with coding agents without shame.
Actually for me it was the opposite: before, I wasn't able to play around and experiment in my free time that much, because I didn't have enough energy left after my day job to actualize the thoughts and ideas I had.
Now, since the bottleneck of moving the fingers to write code has gone down, I actually started to enjoy doing side projects. The mental stress from writing code has gone down drastically with Claude Code, and I feel the urge to create more nowadays!
Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.
I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while they are sipping a latte in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite a useful help for some of the trivial but time-consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your Android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm halfway there after 2-3 hours.
I think we really need to have a serious think about what "good quality" means in the age of coding agents. A lot of the effort we put into maintaining quality has to do with maintainability, readability, etc. But is it relevant if the code isn't for humans? What is good for a human is not necessarily what is good for an AI (not to say there is no overlap). I think there are clearly measurable things we can agree still apply around bugs, security, etc., but I think there are also going to be some things we need to just let go of.
> I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while they are sipping a latte in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if you want good quality.
Also, we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar while a machine does your work? Use all your time to make money for me, or you're fired."
AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
I still do this, but when I'm reviewing what's been written and/or testing what's been built.
How I see it is we've reverted back to a heavier, spec-driven approach; however, the turnaround time is so fast with agents that it can still feel very iterative, simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests when applicable) as the real work now. I front-load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature, or on the overall approach to a feature, as I discover (with the agent) that I'm just not happy with the gotchas that come to light.
AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.
> but the code to me is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
This is so on point. The spec-as-code people try again and again, but reality always punches holes in their specs.
A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed the drawing is, you can't drive it, and it hides 90% of the complexity.
To me the value of LLMs is not so much in the code they write. They're usually too verbose, and they start building weird things when you don't constantly micromanage them.
But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.
I love using LLMs as rubber ducks as well - what does this piece of code do? How would you do X with Y? etc.
The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.
Historically, software engineering has been seen as "assembly line" work by a lot of people (see all the efforts over the years to outsource it through spec handoffs and waterfall) but has been implemented in practice as design-as-you-build (nobody anticipates all the questions or edge cases in advance, software specs are often an order of magnitude simpler than the actual number of branches in the code).
For mission-critical applications I wonder if making "writing the actual code" so much cheaper means that it would make more sense to do more formal design up front instead, when you no longer have a human directly in the loop during the writing of the code to think about those nasty pops-up-on-the-fly decisions.
> software specs are often an order of magnitude simpler than the actual number of branches in the code
Love this! Be it design specs or a mock from the designer. So many unaccounted for decisions. Good devs will solve many on their own, uplevel when needed, and provide options.
And absolutely it means more design up front. And without human in the direct loop, maybe people won’t skimp on this!
I also second this. I find that I write better code by hand, although I work on niche applications; it's not really standard CRUD or React apps. I use LLMs in the same way I used to use Stack Overflow; if I go much further than that to automate my work, I spend more time on cleanup than if I just write the code myself.
Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions… long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.
> I use LLMs in the same way I used to use Stack Overflow; if I go much further than that to automate my work, I spend more time on cleanup than if I just write the code myself.
yea, same here.
i've asked an ai to plan and set up some larger, non-straightforward changes/features/refactorings, but it usually devolves into burning tokens and me clicking the 'allow' button and re-clarifying over and over while it keeps trying to confirm the build works etc...
when i'm stuck though, or when i'm curious about some solution, it usually opens the way to finish the work, similar to stack overflow
Using AI or writing your own code isn't an xor thing. You can still write the code but have a coding assistant or something an alt/cmd-tab away. I enjoy writing code; it relaxes me, so that's what I do. But when I need to look something up, or I'm not clear on the syntax for some particular operation, instead of tabbing to a browser and google.com I tab to the agent and ask it to take a look. For me, this is especially helpful for CSS and UI because I really suck at and dislike that part of development.
I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.
In 1987 when I first started coding, I would either write my first attempt in BASIC and see it was too slow and rewrite parts in assembly or I would know that I had to write what I wanted from the get go in assembly because the functionality wasn’t exposed at all in BASIC (using the second 64K of memory or using double hires graphics).
This past week, I spent a couple of days modifying a web solution written by someone else + converting it from a Terraform-based deployment to CloudFormation using Codex - without looking at the code, as someone who hasn’t done front-end development in a decade - I verified the functionality.
More relevantly but related, I spent a couple of hours thinking through an architecture - cloud + an Amazon managed service + infrastructure as code + actual coding, diagramming it, labeling it, and thinking about the breakdown and phases to get it done. I put all of the requirements - that I would have done anyway - into a markdown file and told Claude and Codex to mark off items as I tested each item and summarize what it did.
Looking at the amount of work, between modifying the web front end and the new work, it would have taken two weeks with another developer helping me before AI based coding. It took me three or four days by myself.
The real kicker though is that while it worked as expected for a couple of hundred documents, it was brought completely to its knees when I threw 20x as many documents into the system. Before LLMs, this would have made me look completely incompetent, telling the customer I had now wasted two weeks’ worth of time and 2 other resources.
Now, I just went back to the literal drawing board, rearchitected it, did all of the things with code that the managed services abstracted away with a few tweaks, created a new markdown file, and was done in a day. That rework would have taken me a week by itself. I knew the theory behind what the managed service was doing. But in practice I had never done it.
It’s been over a decade since I was responsible for a delivery that I could do by myself without delegating to other people, or one that was simple enough that I wouldn’t start with a design document for my own benefit. Now, within the past year, I can take on larger projects by myself without the coordination/“Mythical Man-Month” overhead.
I can also in a moment of exasperation say to Codex “what you did was an over complicated stupid mess, rethink your implementation from first principles” without getting reported to HR.
There is also a lot of nice-to-have gold plating that I will do now, knowing that it will be a lot faster.
That's because many developers are used to working like this.
With AI, the correct approach is to think more like a software architect.
Learning to plan things out in your head upfront, without having to figure things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.
To some this comes naturally, for others it is very hard.
I don't think any complex plan should be planned in your head. But drawing diagrams, sketching components, listing pros and cons, 100%. Not jumping directly into coding might look more like jumping into spec writing or a PoC.
I think what GP is referring to is technical semantics and accidental complexity. You can’t plan for those.
The same kind of planning you’re describing can and does happen sans LLM, usually on the sofa or in front of a whiteboard. Or by reading some research materials. No good programmer rushes to coding without a clear objective.
But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.
Some people like to lay the bricks, some people like to draw the blueprints. I don’t think there is anything wrong with not subscribing to this onslaught of AI tooling; doing the hard work is rewarding. Whether AI will become the standard way code is written in the future is still to be determined, and I think there is a real chance that is where it goes, but it shouldn’t hinder your love for doing what you do.
100%. To me the real question is whether all the bother getting the agents to not waste time nets out to real gains, or perceived gains (while possibly even losing efficiency).
It's not at all clear to me which is true given the level of hype and antipathy out there. I'm just going to watch and wait, and experiment cautiously, till it's more clearcut.
I think of it differently: I’ve been coding so long that ironing out the details and working through the specification with AI comes extremely naturally. It’s like how I would talk to a colleague and iterate on their work.
However, the quality of the code produced by LLMs needs to be carefully managed to assure it’s of a high standard. That’s why I formalized a system of checks and balances for my agentic coding that contains architectural guidelines as well as language-specific taste advice.
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
If you need that, don't use AI for it. Is it that you don't enjoy coding, or that you think it's tangential to your thinking process? Maybe while you focus on the code, have an agent build a testing pipeline, or deal with other parts of the system that are not very ergonomic or need some cleanup.
this is the right answer, but many companies now mandate using ai (burn x tokens, y percent of code), so people are bound to use it where it might not fit
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
Two principles I have held for many years which I believe are relevant both to your sentiment and this thread are reproduced below. Hopefully they help.
First:
When making software, remember that it is a snapshot of
your understanding of the problem. It states to all,
including your future-self, your approach, clarity, and
appropriateness of the solution for the problem at hand.
Choose your statements wisely.
And:
Code answers what it does, how it does it, when it is used,
and who uses it. What it cannot answer is why it exists.
Comments accomplish this. If a developer cannot be bothered
with answering why the code exists, why bother to work with
them?
To your first point - so are my many markdown files that I tell Codex/Claude to keep updated while I’m doing my work, including keeping them updated with why I told them to do certain things. They have detailed documentation of my initial design goals and decisions that I wrote myself.
Actually those same markdown files answer the second question.
> If a developer cannot be bothered with answering why the code exists, why bother to work with them?
Most people can't answer why they themselves exist, or justify why they are taking up resources rather than eating a bullet and relinquishing their body-matter.
According to the philosophy herein, they are therefore worthless and not worth interacting with, right?
I liken it to manual versus automated industrial production. I think manual coding will always have its place just like how there are even still people who craft things by manual labor, whether it’s woodworkers only using manual tools or blacksmiths who still manually stoke coke fires that produce very unique and custom products; vs the highly automated production lines we have that produce acceptable forms of something efficiently, and many of them so many people can have them.
This is exactly the issue I’m facing especially when working with AI-generated codebases.
Coding is significantly faster but my understanding of the system takes a lot longer because I’m having to merge my mental model with what was produced.
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
I completely agree but my thought went to how we are supposed to estimate work just like that. Or worse, planning poker where I'm supposed to estimate work someone else does.
i go back and forth on this. when i'm working on something where the hard part is the actual algorithm, say custom scheduling logic or a non-trivial state machine, i need my hands in the code because the implementation is the thinking. but for anything where the complexity is in integration rather than logic, wiring up OAuth flows, writing CRUD endpoints, setting up CI pipelines, agents save me hours and the output is usually fine after one review pass. the "code as thought" argument is real but it applies to maybe 20% of what most of us ship day to day. the other 80% is plumbing where the bottleneck is knowing what to build, not how.
I sometimes wonder if the economics of AI coding agents only work if you totally ignore all the positive externalities that come with writing code.
Is the entire AI bubble just the result of taking performance metrics like "lines of code written per day" to their logical extreme?
Software quality and productivity have always been notoriously difficult to measure. That problem never really got solved in a way that allowed non technical management to make really good decisions from the spreadsheet level of abstraction... but those are the same people driving adoption of all these AI tools.
Engineers sometimes do their jobs in spite of poor incentives, but we are eliminating that as an economic inefficiency.
I dunno. On the one hand, I keep hearing anecdata, including hackernews comments, friends, and coworkers, suggesting that AI-assisted coding is a literal game changer in terms of productivity, and if you call yourself a professional you'd better damn well lock the fuck in and learn the tools. At the extreme end this takes the form of, you're not a real engineer unless you use AI because real engineering is about using the optimal means to solve problems within time, scale, and budget constraints, and writing code by hand is now objectively suboptimal.
On the other hand, every time the matter is seriously empirically studied, it turns out that overall:
* productivity gains are very modest, if not negative
* there are considerable drawbacks, including most notably the brainrot effect
Furthermore, AI spend is NOT delivering the promised returns to the extent that we are now seeing reversals in the fortunes of AI stocks, up to and including freakin' NVIDIA, as customers cool on what's being offered.
So I'm supposed to be an empiricist about this, and yet I'm supposed to switch on the word of a "cool story bro" about how some guy built an app or added a feature the other day that he totally swears would have taken him weeks otherwise?
I'm like you. I use code as a part of my thought process for how to solve a problem. It's a notation for thought, much like mathematical or musical notation, not just an end product. "Programs must be written for people to read, and only incidentally for machines to execute." I've actually come to love documenting what I intend to do as I do it, esp. in the form of literate programming. It's like context engineering the intelligence I've got upstairs. Helps the old ADHD brain stay locked in on what needs to be done and why. Org-mode has been extremely helpful in general for collecting my scatterbrained thoughts. But when I want to experiment or prove out a new technique, I lean on working directly with code an awful lot.
I was just thinking this the other day after I did a coding screen and didn't do well. I know the script for the interviewee is that you're not supposed to write any code until you talk through the whole thing, but I think I would have done better if I could have just written a bunch of throwaway code to iterate on.
Are there still people under the impression that the correct way to use Stack Overflow all these years was to copy & paste without analyzing what the code did and making it fit for purpose?
If I have to say, we're just waiting for the AI concern caucus to get tired of performing for each other and justifying each other's inaction in other facets of their lives.
The post touches very briefly on linting in point 7. For me, setting up a large number of static code analysis checks has had the highest impact on code quality.
My hierarchy of static analysis looks like this (the hierarchy below is TypeScript-focused but in principle translatable to other languages):
9. Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)
10. Security check (semgrep)
I stitch all the above into a single `pnpm check` command and defined an agent rule to run this before marking a task as complete.
Finally, I make sure `pnpm check` is run as part of a pre-commit hook to make sure that the agent has indeed addressed all the issues.
This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.
(Edit: added mention of pre-commit hook which I missed mention of in initial comment)
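To make the stitching concrete, here is a minimal sketch of what such a check runner could look like (the tool list and flags are illustrative, not my exact setup; `pnpm check` just points at this script, e.g. via `"check": "tsx scripts/check.ts"`):

```typescript
// scripts/check.ts - run every static analysis layer in sequence and fail fast.
// A sketch only: swap in whatever tools your own stack actually uses.
import { execSync } from "node:child_process";

const steps = [
  "tsc --noEmit",                                   // type check
  "eslint . --max-warnings 0",                      // lint, warnings count as failures
  "prettier --check .",                             // formatting
  "depcruise --config .dependency-cruiser.cjs src", // dependency rules
  "semgrep scan --error",                           // security rules
];

for (const step of steps) {
  console.log(`\n> ${step}`);
  // execSync throws on a non-zero exit code, which aborts the loop and makes
  // `pnpm check` (and the pre-commit hook that wraps it) fail as a whole.
  execSync(step, { stdio: "inherit" });
}
```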
this is close to what i've landed on too. the pre-commit hook is non-negotiable. i've had Claude Code report "all checks pass" when there were 14 failing eslint rules. beyond the static analysis though, i keep hitting a harder problem: code that passes every lint rule, compiles clean, and greens the test suite but implements a subtly wrong interpretation of the spec. like an API handler that returns 200 with an empty array instead of 404, technically valid but semantically wrong. evaluating behavioural correctness against intent, not just syntax or type safety, is the gap nobody's really cracked yet. property-based testing helps but it still requires you to formalize the invariants upfront, which is often the hard part.
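to make that concrete, roughly what formalizing one invariant with fast-check looks like (jest/vitest-style test; the handler name and shape are made up):

```typescript
// a sketch with fast-check: encode the intent "unknown ids must 404, never an
// empty 200" as a property, so a type-correct but semantically wrong handler fails.
import fc from "fast-check";
import { getOrdersHandler } from "./handlers"; // hypothetical handler under test

test("unknown customer ids return 404, not an empty 200", async () => {
  await fc.assert(
    fc.asyncProperty(fc.uuid(), async (unknownId) => {
      const res = await getOrdersHandler({ customerId: unknownId });
      // the spec says: absence of the customer is an error, not an empty result
      return res.status === 404;
    })
  );
});
```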
Not a catch-all to fix issues, but I agree with linting. Being very strict with linters has become very cheap with coding agents, and it keeps you up to date with code standards and keeps code style homogeneous, which is very nice when you are reviewing professional code, regardless of who wrote it.
It’s also tricky otherwise if you have to occasionally review lazily written manual code mixed with syntactically formal/clean but functionally incorrect AI code.
I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.
These kinda things aren’t really the issues I run into. Lack of clarity of thought, overly verbose code, needlessly defensive programming - the stuff that really rots a codebase. Honestly, some of the above rules you have I’d want the LLM to ignore at times if we’re going for maximum maintainability.
The real value that AI provides is the speed at which it works, and its almost human-like ability to “get it” and reasonably handle ambiguity. Almost like tasking a fellow engineer. That’s the value.
By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.
A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.
I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.
It’s a solid post overall and even for people with a lot of experience there’s some good ideas in here. “Identify and mark functions that have a high security risk, such as authentication, authorization” is one such good idea - I take more time when the code is in these areas but an explicit marking system is a great suggestion. In addition to immediate review benefits, it means that future updates will have that context.
“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.
Brain surgery is probably a bad example... or maybe a good one, but for different reasons?
Brain surgery is highly technical AND highly vibe based.
You need both in extremely high quantities. Every brain is different, so the super detailed technical anatomies that we have is never enough, and the surgeon needs constant feedback (and insanely long/deep focus).
Religiously, routinely refactor. After almost every feature I do a feature level code analysis and refactoring, and every few features - codebase wide code analysis and refactoring.
I am quite happy with the resulting code - much less shameful than most things I've created in 40 years of being passionate about coding.
This. Historically there's been a lot of resistance to the idea of refactoring or refining features. The classic "It works, just ship it" mentality that leaves mountains of tech debt in its wake.
And there _was_ a good reason to resist refactoring. It takes time and effort! After "finishing" something, the timeline, the mental and physical energy, the institutional support, is all dried up. Just ship it and move on.
But LLMs change the equation. There's no reason to leave sloppy sub-optimal code around. If you see something, say something. Wholesale refactoring your PR is likely faster than running your test suite. Literally no excuses for bad code anymore.
You'd think it didn't need to be said but, given we have a tool to make coding vastly more efficient, some people use that tool to improve quality rather than just pump out more quantity.
We are becoming spec writers, wearing the PM/lead hats.
1) Do a gap and needs assessment.
2) Build business requirements.
3) Define scope of work to advance fulfillment.
4) Create functional and non-functional specs.
5) Divide-conquer-refine loop.
This is the main thing I have learned too. I've been building an internal tool for myself to annotate lines in each commit diff as good (green) / needs refactor (yellow) / needs rewrite (red) and it has helped me keep track of this kind of tech debt. Basically does what you could do with "TODO refactor" comments all over, but is more comprehensive and doesn't litter your source code. Plan to open source it once I've dog-fooded it a little more
I can't help but keep finding it ridiculous how everyone now discovers basic best practices (linting, documentation, small incremental changes) that have been known for ages. It's not needed because of AI, you should have been doing it like this before as well.
Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.
But there’s more time to do some of these other things if the actual coding time is trending toward zero.
And the importance of it can go up with AI systems because they do actually use the documentation you write as part of their context! Direct visible value can lead people to finally take more seriously things that previously felt like luxuries they didn’t have time for.
Again if you’ve been a developer for more than 10 minutes, you’ve had the discouraging experience of pain-stakingly writing very good documentation only for it to be ignored by the next guy. This isn’t how LLMs work. They read your docs.
> Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.
>
> But there’s more time to do some of these other things if the actual coding time is trending toward zero.
I think you'll find even less time - as "AI" drives the target time to ship toward zero.
Remember having to write detailed specs before coding? Then folks realized it was faster and easier to skip the specs and write the code? So now are we back to where we were?
One of the problems with writing detailed specs is that it assumes you understand the problem, but often the problem is not understood; you learn to understand it through coding and testing.
Skip specs, and you often end up writing the wrong program - at substantial cost.
The main difference now is the parrots have reduced the cost of the wrong program to near zero, thereby eliminating much of the perceived value of a spec.
We’re not "thinking with portals" about these things enough yet. Typically we’d want a detailed spec beforehand, as coding is expensive and time-consuming, so we want to make sure we’re coding the right thing. With AI though, coding is cheap. So let the AI skip the spec and write the code badly. Then have it review the solution, build understanding, design a spec for a better solution, and have it write it again. Rinse and repeat as many times as you need.
It’s also nothing new, as it’s basically Joe Armstrong's programming method. It’s just not prohibitively expensive for the first time in history.
Spec-driven development is the only reliable way to work with AI. That's my current understanding. I spend more time refining the spec and bouncing ideas off of AI/the team than before, which is good because there can't be any incorrect assumptions or hidden variables; otherwise AI will create suboptimal code. We should have been doing this much earlier in the process, even without AI, but now it's more necessary than ever. If you keep asking AI to make small changes as you learn about the business domain of your project, it will create a mess, in my experience. It's better to start from scratch and ask it to reimplement, once you finally understand all the requirements.
Sentiments like this make me wonder if perhaps the dream of the 90s was just ahead of its time. Things like UML, 4GLs, Rational were all being hyped. We were told that the future was a world where people could express the requirements & shape of the system, and the machines would do the rest.
Clearly that didn't happen, and then agile took over from the more waterfall/specs based approaches, and the rest was history.
But now we're entering a world where the state of the art is expressing your requirements & shape of the system. Perhaps this is just part of a broader pendulum swing, or perhaps the 1990s hopes & dreams finally caught up with technology.
I think PG said something about sitting down and hacking being how you understand the problem, and it’s right. You can write UML after you’ve got your head round it, but the feedback loop when hacking is essential.
Yes and no I'd say.
It's still the case that only by iterating and testing things with the AI do you get closer to an actually good solution.
So a big up-front spec will also not work so well.
The only exception is maybe if you already have a very clear understanding and existing tests (like what they did with the Claudes building the Rust C compiler to compile the Linux kernel).
Ah yes, to have AI write code for you, you simply just need to, let's see ..
"Document the requirements, specifications, constraints, and architecture of your project in detail. Document your coding standards, best practices, and design patterns. Use flowcharts, UML diagrams, and other visual aids to communicate complex structures and workflows. Write pseudocode for complex algorithms and logic to guide the AI. Develop efficient debug systems for the AI to use. Build a system that collects logs from all nodes in a distributed system and provides abstracted information. Use a system that allows you to mark how thoroughly each function has been reviewed. Write property based high level specification tests yourself. Use strict linting and formatting rules to ensure code quality and consistency. Utilize path specific coding agent prompts. Provide as much high level information as practical, such as coding standards, best practices, design patterns, and specific requirements for the project. Identify and mark functions that have a high security risk, such as authentication, authorization, and data handling. Make sure that the AI is instructed to change the review state of these functions as soon as it changes a single character in the function. Developers must make sure that the status of these functions is always correct. Explore different solutions to a problem with experiments and prototypes with minimal specifications. Break down complex tasks into smaller, manageable tasks for the AI. You have to check each component or module for its adherence to the specifications and requirements."
And just like that, easy peasy, nothing to it.
As a supreme irony, the story currently on the front page directly under this one ('You are here'), makes the claim "The cost of turning written business logic into code has dropped to zero. Or, at best, near-zero." in the very first sentence.
Too bad that software developers are carrying water for those who hate them and mock them as obsolete in 6-12 months, while they are eating caviar (probably evading sanctions) and clinking champagne glasses in Davos:
The enthusiasm so many devs show for it is also quite bizarre, saying things like "AI makes me so much more productive," with the implication that they will be its primary beneficiaries, and that it won't result in a massive reduction in demand, compensation, and status for developers, adversely affecting them. Even more bizarre when you realize these devs aren't the ones optimizing some popular video codec or writing avionics software for a fighter jet, but instead gluing together NPM packages--probably the first or second rung on the software "innovator's dilemma" ladder of disruption.
The funny thing is, when I got a lead position in my job, I used to do really detailed ticket descriptions, going into technical considerations and possible cross-domain problems. I did it for the juniors - and, to be honest, for myself, since I knew that if I took that ticket, from that moment to the moment I put some code down, I could just forget stuff.
This was pushed back on hard by management because it "took too much time to create a ticket". I fought it for some months, but in the end I stopped, and also really lost the ability and patience to do that. Juniors suffered, implementation took more time. Time passed.
Now, I am supposed to do the exact same thing, but even better and for yesterday.
I'm still writing code. I'm doing it to solve a problem; there's more to writing code than typing. Recently AI massively simplified "getting started", and all of the tips here are applicable to working well on a team.
My recent experience: I'm porting an app to Mac. It's been in my backlog for ~2 years. With Claude I had a functional prototype in under a day, getting the major behavior implemented. I spent the next two weeks refactoring the original app to share as much logic as possible. The first two days were lots of fun. The refactoring was also something I wanted, to flesh out unit tests; still enjoyable.
The worst part was debugging the bugs introduced into my code from 5 years ago. My functions had naming issues that described the behavior wrong, confusing Claude, and that I needed to re-understand to add new features.
Parts of coding are frustrating. Using AI is frustrating for different reasons.
The most frustrating part was rebasing with git to create a sensible history (which I've had to do without AI in the past), reviewing the sheer volume of changes (14k lines), and then deciding "do I want my name on this", which involved cleaning up all the linter warnings I'd imposed on myself.
I’m finding it to be the opposite. I used to love writing everything by hand but now Claude is giving me the ability to focus more on architecture. I like just sitting down with my coffee and thinking about the next part of my project, how I’d like it to be written and Claude just fills it in for me. It makes mistakes at times but it also finds a lot of mine that I hadn’t even realized were in my code base.
Yep, I get that some people love the act of literally typing "x = 2;" but to me coding is first and foremost problem solving. I have a problem (either truly mine or someone else's), I come up with a solution in my head and slowly implement it.
Before I also had to code it and then make sure it had no issues.
Now I can skip the coding and just have something spit out an implementation, and then evaluate whether I believe it's a good implementation of my solution or not.
Of course, you need the skill to know good from bad but for medium to senior devs, AI is incredibly useful to get rid of the mundane task of actually writing code, while focusing on problem solving with critical review of magically generated code.
A good bit of scaffolding and babysitting allows you to let the model run much faster and more efficiently. Building your tool faster. I don't code to code, I code to build something I want.
Also, there is no "compiler" or "type checker" for your SPEC. If you get something wrong in some paragraph somewhere, or contradict something in your spec X paragraphs later, you have to use the Mark-1 Eyeball to detect and fix this.
You have just transformed your job from developer to manual spec maintainer - a clerk who has to painstakingly check everything.
Define data structures manually, ask the AI to implement specific state changes. So JSON, C .h files, or other source files of function signatures, and put prompts in there. I've never tried the Agents.md monolithic definition file approach.
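As a rough illustration of the idea (TypeScript here instead of a C header, and every name is invented):

```typescript
// types/playback.ts - hand-written data structures; the prompts for the agent
// sit next to the signatures they constrain. Everything here is a made-up example.
export interface Track {
  id: string;
  durationMs: number;
}

export interface PlaybackState {
  queue: Track[];
  position: number;   // index into queue
  elapsedMs: number;  // time into the current track
}

// AI: implement as a pure function; never mutate the input state, return a new object.
// AI: clamp elapsedMs to the current track's durationMs before advancing position.
export declare function advancePlayback(state: PlaybackState, deltaMs: number): PlaybackState;
```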
Also I demand it stick to a limited set of processing patterns. Usually dynamic, recursive programming techniques and functions. They just make the most sense to my head and using one style I can spot check faster.
I also demand it avoid making up abstractions and stick to mathematical semantics. Unique namespaces are not relevant to software in the AI era. It's all about using unique vectors as keys to values.
Stick to one behavior or type/object definition per file.
Only allow dependencies that are designed as libraries to begin with. There is a ton of documentation to implement a Vulkan pipeline so just do that. Don't import an entire engine like libgodot.
And for my own agent framework I added observation of my local system telemetry via common Linux files and commands. This data feeds back in to be used to generate right-sized sched_ext schedules and leverage bpf for event driven responses.
Am currently experimenting with generation of small models of my own data. A single path of images for example not the entire Pictures directory. Each small model is spun akin to a Docker container.
LLMs are monolithic (massive) zip files of the entire web. No one is really asking for that. And anyone who needs it already has access to the web itself.
small agents.md files are worth it, at least for holding some basic information (look at build.md to read how to build, the file structure looks like so), rather than have whatever burn double the amount of tokens searching for whatever anyways.
What is the ratio of Markdown to Code with these agents? How readable is the Markdown after you've finished using it to develop your plan? How much time does it take to review code so closely vs. writing it yourself in the first place?
The forcing function doesn't disappear - it shifts. When you read and critique AI-generated code carefully, you get a similar cognitive workout: Why did it structure this that way? What edge case did it miss? How does this fit the broader architecture?
The danger is treating the output as a black box. If you skip the review step and just accept whatever it produces, yes, you'll lose proficiency and accumulate debt. But if you stay engaged with the code, reading it as critically as you would a junior dev's PR, you maintain your understanding while moving faster.
The technical debt concern is valid but it's a process problem, not an inherent flaw. We solved "juniors write bad code" with code review, linting, and CI. We can solve "LLMs write inconsistent code" with the same tools - hannofcart's 10-layer static analysis stack is a good example. The LLM lies about passing checks? Pre-commit hook catches it.
A pre-commit hook is definitely necessary. One thing I’ve seen a lot with Opus recently is it lying that a new linter warning or error was there before it made a change. They’ve learned from us too well!
The first rule is an anti-pattern. I think describing your architecture, or ANY kind of documentation for your AI, is an anti-pattern that blows up the context window, leading to worse results and actually more deviation.
The controlling system is not giving it more words at the start. Agentic coding needs to work in a loop with dedicated context.
You need to think about how you can give as much intent as possible with as few words as possible.
You can build a tremendous number of custom lint rules that the AI never needs to read, unless it violates one of them.
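For example, one small custom rule can encode a repo convention the agent never has to read in prose; it only costs context when it actually fires (paths and message are invented):

```typescript
// eslint-rules/no-legacy-fetch.ts - a sketch of a custom rule; it fires only when
// the convention is violated, so it costs zero context until the agent misses it.
import type { Rule } from "eslint";

const rule: Rule.RuleModule = {
  meta: {
    type: "problem",
    messages: {
      legacy: "Import the shared client from 'src/http/client' instead of the legacy 'utils/fetch' helper.",
    },
    schema: [],
  },
  create(context) {
    return {
      ImportDeclaration(node) {
        // Flag any import that still points at the deprecated module path.
        if (typeof node.source.value === "string" && node.source.value.includes("utils/fetch")) {
          context.report({ node, messageId: "legacy" });
        }
      },
    };
  },
};

export default rule;
```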
Every pattern in your repo gets repeated; the repo will always win over documentation, and when your repo is well structured you don’t need to repeat this to the AI.
It’s like development always has been: watch what has gone wrong and make sure that whole type of error can’t happen again.
it really does seem like this... also new devs are like that too: "i just copied this pattern use over here and there whats wrong?" is something i've heard over and over lol
i think languages that allow expression of "this is deprecated, use x instead" will be useful for that too
AI gets a lot of big projects right if you give it all the tools to verify its own implementation; if you can build a proper system to verify the solution, it works astonishingly well. Even Opus 4.6's judgement seems to be wrong most of the time on projects of my scale without the validation layers.
In general, I prefer to do the top-level design and the big abstractions myself. I care a lot about cohesion and coupling; I like to give a lot of thought to my interfaces.
And in practice, I am happy enough that the LLM helps me to eliminate some toil, but I think you need to know when it is time to fold your cards, and leave the game. I prefer to fix small bugs in the generated code myself, than asking the agent, as it tends to go too far when fixing its own code.
The best thing about this is that AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now by some of the smartest coders in the world and the next gen AI will incorporate all of this, making them ironically unnecessary.
Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
Ironically, I use the time saved using agents to read technical books ferociously.
Coding agents made me really get something back from the money I pay for my O'Reilly subscription.
So, coding agents are making me a better engineer by giving me time to dive deeper into books instead of having to just read enough to do something that works under time pressure.
1. Keep things small and review everything AI written, or
2. Keep things bloated and let AI do whatever it wants within the designated interface.
Initially I drew this line for API services / UI components, but it later expanded to other domains. E.g., for my hobby Rust project I try to keep traits single-responsibility, never overlapping, easy to understand, etc., but I never look at AI-generated impls as long as they pass some sensible tests and conform to the traits.
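In TypeScript terms the same line would look something like this sketch (invented names): the interface is the hand-written part I review, and the class behind it is the "impl" I only judge through tests.

```typescript
// Hand-written contract: small, single-responsibility, the thing I actually review.
export interface RateLimiter {
  /** Returns true if the call identified by `key` is allowed right now. */
  allow(key: string): boolean;
}

// Agent-written "impl": only checked through the contract above and its tests.
export class SlidingWindowLimiter implements RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private readonly limit: number, private readonly windowMs: number) {}

  allow(key: string): boolean {
    const now = Date.now();
    // Keep only the hits that are still inside the sliding window.
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    const allowed = recent.length < this.limit;
    if (allowed) recent.push(now);
    this.hits.set(key, recent);
    return allowed;
  }
}
```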
I find rust generally easier to reason about, but can't stand writing it.
The compiler works well with LLMs; there's plenty of good tooling and LSPs.
I usually write the function signatures / module APIs. If I'm happy with the shape of the code, and the compiler is happy that it compiles, then the errors, if any, are usually logical ones I should catch in reviews.
So I focus on function, the compiler focuses on correctness, and the LLM just does the actual writing.
I found an easier way that Works For Me (TM). I describe the problem to the LLM and ask it to solve it step by step, but strictly in Ask mode, not Agent mode. Then I copy or even type the lines into the code. If I wouldn't write the line myself, it doesn't go in, and I iterate some more.
I do allow it to write the tests (lots of typing there), but I break them manually to see how they fail. And I do think about what the tests should cover before asking LLM to tell me (it does come up with some great ideas, but it also doesn't cover all the aspects I find important).
Great tool, but it is very easy to be led astray if you are not careful.
even if you check and redo after paste, you need to check for gotchas. I wish I had a nickel for every time the llm gave me a solution with a hidden limitation. assume that it violates all your unspoken assumptions, and adheres only to what you nailed down in your prompt
You still need to know the hard parts: precisely what you want to build, all domain/business knowledge questions solved, but this tool automates the rest of the coding and documentation and testing.
It's going to be a wild future for software development...
Same. Small units of work, iterate on it till it's right, commit it, push it, then do the next increment of work. It's how I've always worked, except now I sometimes let someone else figure out the exact API calls (I'm still learning React, but Claude helps get the basics in place for me). If the AI just keeps screwing up, I'll grab the wheel and do it myself. It sometimes helps me get things going, but it hasn't been a huge increase in productivity; then again, I'm not paying the bill, so whatever.
so is the 10-20% in velocity worth the money and the process-complexity added? I'm assuming you're measuring your own velocity, not your team's, since that includes time to review and deploy etc.
Every engineering org should be pleading with devs not to let AI write tests. They're awful, and sometimes they literally don't even assert on the code that was generated, and instead assert on code defined inside the tests themselves.
Every engineering org should be pleading devs to not let AI write code, period. They continue to routinely get stuff wrong and can't be trusted any further than you can throw them.
I use it for scaffolding and often correct it to the layout I prefer. Then I use it to check my code, and then scaffold in some more modules. I then connect them together.
As long as you review the code and correct it, it is no different than using Stack Overflow. A Stack Overflow that reads your code and helps stitch together the context.
"Stack Overflow that reads your codebase" — perfect. But Stack Overflow is
stateless. Agent sessions aren't.
One session's scaffold assumes one pattern. Second session scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.
Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.
Yes this is strange. There's nothing of substance here that hasn't been repeated many times before. BUT...I did click it out of interest. So maybe it just came at an opportune lull in these types of posts and during the inflection point of 4.6 and 5.3 release. Complete guess
Hi i5heu. Given that you seem to use AI tools for generating images and audio versions of your posts, I hope it is not too rude to ask: how much of the post was drafted, written or edited with AI?
The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have written this text by myself, except for like 2 or 3 sentences which I iterated on with an LLM to nail down flow and readability. I would interpret that as completely written by me.
> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as I describe things here. Hence the blog post, to share what I have found works best.
> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I pondered the idea and also wrote up a few anecdotal experiences, but I deleted them again because I think it is hard to nail down the right balance, and it is also highly dependent on the project, which renders examples a bit useless.
And over the last few days working on the blog post, I also kind of came to like its short and lean nature.
I might make a few more blog posts that expand on a few of these points.
First article about writing code with AI that I can get behind 100%. Stuff I already do, stuff I've thought about doing, and ideas I've never thought of doing ("Mark code review levels" especially is a _great_ idea).
> Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.
I've always advocated for using a linter and consistent formatting. But now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore I feel like linting does not matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of very solidly documented test cases and, much like compiled code, how the underlying implementation code looks or is organized won't really matter
That's the opposite of my experience. I've never read and re-read code more than I do today. The new hires generate 50x more code than they used to, and you _have_ to check it, or have compounding production issues (been there, done that). And the errors can now be anywhere; before, you more or less knew what the person writing the code was thinking and could understand why some errors were made. LLM errors could hide _anywhere_, so you have to check it all.
Isn't that a losing proposition? Or do you get 50 times the value out of it too? In my experience the more verbose the code is, the less thought out it is. Lots of changes? Cool, now polish some more and come back when it's below 100 lines change, excluding tests and docs. I don't dare touch it before.
That sounds like the advice of someone who doesn't actually write high-quality code. Perhaps a better title would be "how to get something better than pure slop when letting a chatbot code for you" - and then it's not bad advice I suppose. I would still avoid such code if I can help it at all.
This take is pretty uncharitable. I write high quality code, but also there's a bunch of code that could be useful, but that I don't write because it's not worth the effort. AI unlocks a lot of value in that way. And if there's one thing my 25 years as a software engineer has taught me is that while code quality and especially system architecture matter a lot, being super precious about every line of code really does not.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
Man, you are really missing out on the biggest revolution of my life.
This is the opinion of someone who has not tried to use Claude Code, in a brand new project with full permissions enabled, and with a model from the last 3 months.
People have been saying "the models from (recent timeframe) are so much better than the old ones, they solve all the problems" for years now. Since GPT-4 if not earlier. Every single time, those goalposts have shifted as soon as the next model came out. With such an abysmal track record, it's not reasonable to expect people to believe that this time the tool actually has become good and that it's not just hype.
The article did not provide a constructive suggestion on how to write quality code, either. Nor even empirical proof in the form of quality code written by LLMs/agents via the application of those principles.
> workers who opposed the use of certain types of automated machinery due to concerns relating to worker pay and output quality... Luddites were not opposed to the use of machines per se (many were skilled operators in the textile industry); they attacked manufacturers who were trying to circumvent standard labor practices of the time.
> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
people seem to have an inability to predict second and third order effects
the first order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts
but the second order effect is: unless you're in the top 10%, you will now lose your job, permanently
and the third order effect is the economy collapses as it is built on consumer spending
Actually for me it was the opposite: before I wasn't able to play around and experiment in my free time that much, because I didn't have enough energy left to actualize the thoughts and ideas I have since I have a day job.
Now, since the bottleneck of moving the fingers to write code has gone down, I actually started to enjoy doing side projects. The mental stress from writing code has gone down drastically with Claude Code, and I feel the urge to create more nowadays!
I wonder over the long term how programmers are going to maintain the proficiency to read and edit the code that the LLM produces.
>I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
+100 for this.
> I see people at work who are drooling about being able to have code made for them
These people just drool at being able to have work done for them to begin with. Are you sure it is just "code"?
> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
In my circles I see some overlap with the people who are like: "Done! Let's move on" and don't worry about production bugs, etc. "We'll fix it later".
I've always stressed out about introducing bugs and want to avoid firefighting (even in orgs where that's the way to get noticed).
Leaning too much on coding tools and agents feels too sketchy to someone like me right now (maybe always, tbh).
Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.
That's also how I feel.
I think you have every right to doubt those telling us that they run 5 agents to generate a new SAAS-product while they are sipping latté in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if want good quality.
Yet I think coding agents can be quite a useful help for some of the trivial, but time consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude for bringing a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm half-way there in 2-3 hours.
I think we really need to have a serious think about what "good quality" means in the age of coding agents. A lot of the effort we put into maintaining quality has to do with maintainability, readability etc. But is it relevant if the code isn't for humans? What is good for a human is not necessarily what is good for an AI (not to say there is no overlap). I think there are clearly measurable things we can agree still apply around bugs, security etc, but I think there are also going to be some things we need to just let go of.
> I think you have every right to doubt those telling us that they run 5 agents to generate a new SAAS-product while they are sipping latté in a bar. To work like that I believe you'll have to let go of really digging into the code, which in my experience is needed if want good quality.
Also we live in a capitalist society. The boss will soon ask: "Why the fuck am I paying you to sip a latte in a bar? While a machine does your work? Use all your time to make money for me, or you're fired."
AI just means more output will be expected of you, and they'll keep pushing you to work as hard as you can.
I still do this, but when I'm reviewing what's been written and / or testing what's been built.
How I see it is we've reverted back to a heavier spec-type approach, but the turnaround time with agents is so fast that it can still feel very iterative, simply because the cost of bailing on an approach is so minimal. I treat the spec (and tests when applicable) as the real work now. I front-load as much as I can into the spec, but I also iterate constantly. I often completely bail on a feature, or on the overall approach to a feature, as I discover (with the agent) that I'm just not happy with the gotchas that come to light.
AI agents to me are a tool. An accelerator. I think there are people who've figured out a more vibey approach that works for them, but for now at least, my approach is to review and think about everything we're producing, which forms my thoughts as we go.
>but the code to me is a forcing mechanism into ironing out the details, and I don't get that when I'm writing a specification.
This is so on point. The spec-as-code people try again and again. But reality always punches holes in their spec.
A spec that wasn't exercised in code is like a drawing of a car: no matter how detailed that drawing is, you can't drive it, and it hides 90% of the complexity.
To me the value of LLMs is not so much in the code they write. They're usually too verbose and start building weird things when you don't constantly micromanage them.
But you can ask very broad questions, iteratively refine the answer, critique what you don't like. They're good as a sounding board.
I love using LLMs as rubber ducks as well - what does this piece of code do? How would you do X with Y? etc.
The problem is that this spec-driven philosophy (or hype, or mirage...) would lead to code being entirely deprecated, at least according to its proponents. They say that using LLMs as advisors is already outdated, we should be doing fully agentic coding and just nudge the LLM etc. since we're losing out on 'productivity'.
Historically software engineering has been seen as "assembly line" work by a lot of people (see all the efforts to outsource it through spec handoffs and waterfall through the years) but been implemented in practice as design-as-you-build (nobody anticipates all the questions or edge cases in advance, software specs are often an order of magnitude simpler than the actual number of branches in the code).
For mission-critical applications I wonder if making "writing the actual code" so much cheaper means that it would make more sense to do more formal design up front instead, when you no longer have a human directly in the loop during the writing of the code to think about those nasty pops-up-on-the-fly decisions.
> software specs are often an order of magnitude simpler than the actual number of branches in the code
Love this! Be it design specs or a mock from the designer. So many unaccounted for decisions. Good devs will solve many on their own, uplevel when needed, and provide options.
And absolutely it means more design up front. And without human in the direct loop, maybe people won’t skimp on this!
I also second this. I find that I write better by hand, although I work on niche applications - it's not really standard CRUD or React apps. I use LLMs in the same way I used to use Stack Overflow; if I go much further than that in automating my work, I spend more time on cleanup compared to if I just write the code myself.
Sometimes the AI does weird stuff too. I wrote a texture projection for a nonstandard geometric primitive; the projection used some math that was valid only for local regions... long story. Claude kept wanting to rewrite the function to what it thought was correct (it was not), even when I directed it to unrelated tasks. Super annoying. I ended up wrapping the function in comments telling it to f#=% off before it would leave it alone.
yea, same here.
I've asked an AI to plan and set up some larger, non-straightforward changes/features/refactorings, but it usually devolves into burning tokens and me clicking the 'allow' button and re-clarifying over and over while it keeps trying to confirm the build works, etc...
When I'm stuck though, or when I'm curious about some solution, it usually opens the way to finish the work, similar to Stack Overflow.
Exactly. 30 years ago a mathematician I knew said to me: "The one thing that you can say for programming is that it forces you to be precise."
We vibe around a lot in our heads and that's great. But it's really refreshing, every so often, to be where the rubber meets the road.
Using AI or writing your own code isn't an xor thing. You can still write the code but have a coding assistant an alt/cmd-tab away. I enjoy writing code, it relaxes me, so that's what I do, but when I need to look something up, or I'm not clear on the syntax for some particular operation, instead of tabbing to a browser and google.com I tab to the agent and ask it to take a look. For me, this is especially helpful for CSS and UI because I really suck at and dislike that part of development.
I also use these things to just plan out an approach. You can use plan mode for yourself to get an idea of the steps required and then ask the agent to write it to a file. Pull up the file and then go do it yourself.
In 1987 when I first started coding, I would either write my first attempt in BASIC and see it was too slow and rewrite parts in assembly or I would know that I had to write what I wanted from the get go in assembly because the functionality wasn’t exposed at all in BASIC (using the second 64K of memory or using double hires graphics).
This past week, I spent a couple of days modifying a web solution written by someone else and converting it from a Terraform-based deployment to CloudFormation using Codex - without looking at the code, as someone who hasn't done front end development in a decade - and I verified the functionality.
More relevantly but related, I spent a couple of hours thinking through an architecture - cloud + an Amazon managed service + infrastructure as code + actual coding - diagramming it, labeling it, and thinking about the breakdown and phases to get it done. I put all of the requirements - which I would have written anyway - into a markdown file and told Claude and Codex to mark off items as I tested each one and summarize what they did.
Looking at the amount of work, between modifying the web front end and the new work, it would have taken two weeks with another developer helping me before AI based coding. It took me three or four days by myself.
The real kicker though is that while it worked as expected for a couple of hundred documents, it fell completely to its knees when I threw 20x as many documents into the system. Before LLMs, this would have made me look completely incompetent, telling the customer I had now wasted two weeks' worth of time plus 2 other resources.
Now, I just went back to the literal drawing board, rearchitected it, did all of the things in code that the managed service had abstracted away (with a few tweaks), created a new markdown file, and was done in a day. That rework would have taken me a week by itself. I knew the theory behind what the managed service was doing. But in practice I had never done it.
It's been over a decade since I was responsible for a delivery that I could do by myself without delegating to other people, or that was simple enough that I wouldn't start with a design document for my own benefit. Now, within the past year, I can take on larger projects by myself without the coordination/"Mythical Man-Month" overhead.
I can also in a moment of exasperation say to Codex “what you did was an over complicated stupid mess, rethink your implementation from first principles” without getting reported to HR.
There is also a lot of nice-to-have gold plating that I will do now, knowing that it will be a lot faster.
That's because many developers are used to working like this.
With AI, the correct approach is to think more like a software architect.
Learning to plan things out in your head upfront, without having to figure things out while coding, requires a mindset shift, but it is important for working effectively with the new tools.
To some this comes naturally, for others it is very hard.
> Learning to plan things out in your head
I don't think any complex plan should be planned in your head. But drawing diagrams, sketching components, listing pros and cons - 100%. Not jumping directly into coding might look more like jumping into spec writing or a PoC.
I think what GP is referring to is technical semantics and accidental complexity. You can't plan for those.
The same kind of planning you're describing can and does happen sans LLM, usually on the sofa, or in front of a whiteboard. Or by reading some research materials. No good programmer rushes to coding without a clear objective.
But the map is not the territory. A lot of questions surface during coding. LLMs will guess and the result may be correct according to the plan, but technically poor, unreliable, or downright insecure.
Some people like to lay the bricks, some people like to draw the blueprints. I don't think there is anything wrong with not subscribing to this onslaught of AI tooling; doing the hard work is rewarding. Whether AI will become the standard way code is written in the future is still to be determined - and I think there is a real chance that's where it goes - but it shouldn't hinder your love for doing what you do.
100%. To me the real question is whether all the bother getting the agents to not waste time nets out to real gains, or perceived gains (while possibly even losing efficiency).
It's not at all clear to me which is true given the level of hype and antipathy out there. I'm just going to watch and wait, and experiment cautiously, till it's more clearcut.
Same reason people build their own homes by hand, for the challenge because YOLO.
I think of it differently: I've been coding so long that ironing out the details and working through the specification with AI comes extremely naturally. It's like how I would talk to a colleague and iterate on their work. However, the quality of the code produced by LLMs needs to be carefully managed to ensure it's of a high standard. That's why I formalized a system of checks and balances for my agentic coding that contains architectural guidelines as well as language-specific taste advice.
You can check it out here: https://ai-lint.dosaygo.com/
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
If you need that, don't use AI for it. Is it that you don't enjoy coding, or that you think it's tangential to your thinking process? Maybe while you focus on the code, have an agent build a testing pipeline, or deal with other parts of the system that are not very ergonomic or need some cleanup.
this is the right answer, but many companies now mandate AI use (burn x tokens, have y percent of code written by AI), so people are bound to use it where it might not fit
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
Two principles I have held for many years which I believe are relevant both to your sentiment and this thread are reproduced below. Hopefully they help.
First:
And:
To your first point - so are my many markdown files that I tell Codex/Claude to keep updated while I’m doing my work including telling them to keep them updated with why I told them to do certain things. They have detailed documentation of my initial design goals and decisions that I wrote myself.
Actually those same markdown files answer the second question.
> If a developer cannot be bothered with answering why the code exists, why bother to work with them?
Most people can't answer why they themselves exist, or justify why they are taking up resources rather than eating a bullet and relinquishing their body-matter.
According to the philosophy herein, they are therefore worthless and not worth interacting with, right?
I liken it to manual versus automated industrial production. I think manual coding will always have its place just like how there are even still people who craft things by manual labor, whether it’s woodworkers only using manual tools or blacksmiths who still manually stoke coke fires that produce very unique and custom products; vs the highly automated production lines we have that produce acceptable forms of something efficiently, and many of them so many people can have them.
This is exactly the issue I’m facing especially when working with AI-generated codebases.
Coding is significantly faster but my understanding of the system takes a lot longer because I’m having to merge my mental model with what was produced.
Any sufficiently detailed specification converges on code.
> A lot of how I form my thoughts is driven by writing code, and seeing it on screen, running into its limitations.
I completely agree but my thought went to how we are supposed to estimate work just like that. Or worse, planning poker where I'm supposed to estimate work someone else does.
I couldn't agree more. It's often when I'm in the depths of the details that I make important decisions on how to engineer the continuation.
Yes, I look at this in a similar vein to the (Eval <--> Apply) cycle in the SICP textbook, as a (Design <--> Implement) cycle.
i go back and forth on this. when i'm working on something where the hard part is the actual algorithm, say custom scheduling logic or a non-trivial state machine, i need my hands in the code because the implementation is the thinking. but for anything where the complexity is in integration rather than logic, wiring up OAuth flows, writing CRUD endpoints, setting up CI pipelines, agents save me hours and the output is usually fine after one review pass. the "code as thought" argument is real but it applies to maybe 20% of what most of us ship day to day. the other 80% is plumbing where the bottleneck is knowing what to build, not how.
I am similar, but I think we just have to adjust: learn and improve at writing specs with all the details.
Sounds like the coders equivalent of the Whorfian hypothesis.
I sometimes wonder if the economics of AI coding agents only work if you totally ignore all the positive externalities that come with writing code.
Is the entire AI bubble just the result of taking performance metrics like "lines of code written per day" to their logical extreme?
Software quality and productivity have always been notoriously difficult to measure. That problem never really got solved in a way that allowed non technical management to make really good decisions from the spreadsheet level of abstraction... but those are the same people driving adoption of all these AI tools.
Engineers sometimes do their jobs in spite of poor incentives, but we are eliminating that as an economic inefficiency.
I dunno. On the one hand, I keep hearing anecdata, including hackernews comments, friends, and coworkers, suggesting that AI-assisted coding is a literal game changer in terms of productivity, and if you call yourself a professional you'd better damn well lock the fuck in and learn the tools. At the extreme end this takes the form of, you're not a real engineer unless you use AI because real engineering is about using the optimal means to solve problems within time, scale, and budget constraints, and writing code by hand is now objectively suboptimal.
On the other hand, every time the matter is seriously empirically studied, it turns out that overall:
* productivity gains are very modest, if not negative
* there are considerable drawbacks, including most notably the brainrot effect
Furthermore, AI spend is NOT delivering the promised returns to the extent that we are now seeing reversals in the fortunes of AI stocks, up to and including freakin' NVIDIA, as customers cool on what's being offered.
So I'm supposed to be an empiricist about this, and yet I'm supposed to switch on the word of a "cool story bro" about how some guy built an app or added a feature the other day that he totally swears would have taken him weeks otherwise?
I'm like you. I use code as a part of my thought process for how to solve a problem. It's a notation for thought, much like mathematical or musical notation, not just an end product. "Programs must be written for people to read, and only incidentally for machines to execute." I've actually come to love documenting what I intend to do as I do it, esp. in the form of literate programming. It's like context engineering the intelligence I've got upstairs. Helps the old ADHD brain stay locked in on what needs to be done and why. Org-mode has been extremely helpful in general for collecting my scatterbrained thoughts. But when I want to experiment or prove out a new technique, I lean on working directly with code an awful lot.
I was just thinking this the other day after I did a coding screen and didn't do well. I know the script for the interviewee is that you're not supposed to write any code until you talk through the whole thing, but I think I would have done better if I could have just written a bunch of throwaway code to iterate on.
Are there still people under the impression that the correct way to use Stack Overflow all these years was to copy & paste without analyzing what the code did and making it fit for purpose?
If I have to say, we're just waiting for the AI concern caucus to get tired of performing for each other and justifying each other's inaction in other facets of their lives.
Lab-grown meat slop producer defends AI slop.
The post touches only briefly on linting, in point 7. For me, setting up a large number of static code analysis checks has had the highest impact on code quality.
My hierarchy of static analysis looks like this (hierarchy below is Typescript focused but in principle translatable to other languages):
1. Typesafe compiler (tsc)
2. Basic lint rules (eslint)
3. Cyclomatic complexity rules (eslint, sonarjs)
4. Max line length enforcement (via eslint)
5. Max file length enforcement (via eslint)
6. Unused code/export analyser (knip)
7. Code duplication analyser (jscpd)
8. Modularisation enforcement (dependency-cruiser)
9. Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)
10. Security check (semgrep)
I stitch all of the above into a single `pnpm check` command and define an agent rule to run this before marking a task as complete.
Finally, I make sure `pnpm check` is run as part of a pre-commit hook to make sure that the agent has indeed addressed all the issues.
This makes a dramatic improvement in code quality to the point where I'm able to jump in and manually modify the code easily when the LLM slot machine gets stuck every now and then.
(Edit: added mention of pre-commit hook which I missed mention of in initial comment)
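For anyone wanting to replicate this, here is a minimal sketch of how such a combined `pnpm check` gate might be scripted. The tools are the ones listed above; the exact flags, config paths, and script location are illustrative and will differ per project:

```ts
// scripts/check.ts - run every static analysis layer in sequence, fail fast.
// Invoke via a package.json script, e.g. "check": "tsx scripts/check.ts".
import { execSync } from "node:child_process";

const steps: Array<[name: string, cmd: string]> = [
  ["typecheck", "tsc --noEmit"],
  ["lint + complexity + length rules", "eslint . --max-warnings 0"],
  ["unused code/exports", "knip"],
  ["duplication", "jscpd src"],
  ["module boundaries", "depcruise src --config .dependency-cruiser.cjs"],
  ["security", "semgrep scan --config auto --error"],
];

for (const [name, cmd] of steps) {
  console.log(`\n=== ${name}: ${cmd}`);
  try {
    execSync(cmd, { stdio: "inherit" }); // a non-zero exit code throws
  } catch {
    console.error(`\nCheck "${name}" failed - do not mark the task as complete.`);
    process.exit(1);
  }
}

console.log("\nAll checks passed.");
```

Wiring the same command into a pre-commit hook (husky, lefthook, or a plain .git/hooks/pre-commit) then gives you the second, harder-to-bypass layer described above.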
this is close to what i've landed on too. the pre-commit hook is non-negotiable. i've had Claude Code report "all checks pass" when there were 14 failing eslint rules. beyond the static analysis though, i keep hitting a harder problem: code that passes every lint rule, compiles clean, and greens the test suite but implements a subtly wrong interpretation of the spec. like an API handler that returns 200 with an empty array instead of 404, technically valid but semantically wrong. evaluating behavioural correctness against intent, not just syntax or type safety, is the gap nobody's really cracked yet. property-based testing helps but it still requires you to formalize the invariants upfront, which is often the hard part.
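To make the "semantically wrong but technically valid" point concrete, here is a small property-based sketch using fast-check with vitest; `getUserOrders` is a hypothetical handler, and the test assumes randomly generated UUIDs never exist in the test data store:

```ts
import { describe, it, expect } from "vitest";
import fc from "fast-check";
import { getUserOrders } from "./orders"; // hypothetical handler under test

describe("getUserOrders", () => {
  it("returns 404 for unknown users, never a 200 with an empty array", async () => {
    await fc.assert(
      fc.asyncProperty(fc.uuid(), async (unknownUserId) => {
        const res = await getUserOrders(unknownUserId);
        // Encodes the *intent* (unknown user is an error), which the type
        // checker, linter, and compiler cannot verify for you.
        expect(res.status).toBe(404);
      }),
    );
  });
});
```

It still leaves the hard part - deciding which invariants matter - to the human, which is exactly the gap described above.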
Not a catch-all that fixes every issue, but I agree with linting. Being very strict with linters has become very cheap with coding agents, and it keeps you up to date with code standards and keeps code style homogeneous, which is very nice when you are reviewing professional code, regardless of who wrote it.
It’s also tricky otherwise if you have to occasionally review lazily written manual code mixed with syntactically formal/clean but functionally incorrect AI code.
My setup has some of the things mentioned and I found that occasionally the LLM will lie that something passes, when it doesn't.
Yup I have run into the same.
I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.
If you're using Claude, try the hookify plugin and ask it to block commits unless the rules pass.
Make the error message much more dramatic and it will be less likely to miss it. Create a wrapper if you can't change the error message.
Remember these are still fundamentally trained on human communication and Dale Carnegie had some good advice that also applies to language generators.
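A rough sketch of the "dramatic wrapper" idea, as a script that a pre-commit hook (husky or a plain .git/hooks/pre-commit) could call; everything here is illustrative:

```ts
// scripts/pre-commit-guard.ts - block the commit loudly if checks fail.
import { execSync } from "node:child_process";

try {
  execSync("pnpm check", { stdio: "inherit" });
} catch {
  console.error(
    [
      "############################################################",
      "# COMMIT BLOCKED: `pnpm check` FAILED.                     #",
      "# Do NOT mark this task complete. Do NOT bypass this hook. #",
      "# Fix every reported issue, then commit again.             #",
      "############################################################",
    ].join("\n"),
  );
  process.exit(1); // a non-zero exit aborts the commit
}
```

The all-caps box is silly, but in practice a terse one-line error is exactly the kind of output an agent skims past.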
These kinda things aren't really the issues I run into. Lack of clarity of thought, overly verbose code, needlessly defensive programming - that's the stuff that really rots a codebase. Honestly, some of the above rules I'd want the LLM to ignore at times if we're going for maximum maintainability.
Very nice.
BUT, what is the point of max line length enforcement, just to see if there are crazy ternary operators going on?
It makes diff split view nicer to use.
At least this is the reason why I do use it
Except for dependency cruiser which I hadn't heard of, this is almost exactly what I've built up over the past few weeks.
For the pre-commit hook, I assume you run it on just the files changed?
> Custom script to ensure shared/util directories are not over stuffed (built this using dependency-cruiser as a library rather than an exec)
Would you share this?
The real value that AI provides is the speed at which it works, and its almost human-like ability to “get it” and reasonably handle ambiguity. Almost like tasking a fellow engineer. That’s the value.
By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage. Might as well write the code yourself and just use AI as first-pass peer review on the code you’ve written.
A lot of the things the writer points out also feel like safeguards against the pitfalls of older models.
I do agree with their 12th point. The smaller your task the easier to verify that the model hasn’t lost the plot. It’s better to go fast with smaller updates that can be validated, and the combination of those small updates gives you your final result. That is still agile without going full “specifications document” waterfall.
It’s a solid post overall, and even for people with a lot of experience there are some good ideas in here. “Identify and mark functions that have a high security risk, such as authentication, authorization” is one such good idea - I take more time when the code is in these areas, but an explicit marking system is a great suggestion. In addition to immediate review benefits, it means that future updates will have that context.
“Break things down” is something most of us do instinctively now but it’s something I see less experienced people fail at all the time.
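One possible shape for that marking system - the tag names below are made up, not from the article - is a machine-readable comment that the agent is instructed to reset whenever it touches the function:

```ts
/**
 * @security-critical authentication
 * @review-status human-approved 2026-01-12
 *
 * Agent rule: if you change ANY character in this function, set
 * @review-status back to "pending" so a human re-review is forced.
 */
export function verifySessionToken(token: string): { userId: string } | null {
  // ... real verification logic would live here ...
  return null; // placeholder body for the sketch
}
```

A CI step can then fail the build if any file touched by a PR still contains `@review-status pending`.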
> By the time you do everything outlined here you’ve basically recreated waterfall and lost all speed advantage.
Next: vibe brain surgery.
/i
Brain surgery is probably a bad example... or maybe a good one, but for different reasons?
Brain surgery is highly technical AND highly vibe based.
You need both in extremely high quantities. Every brain is different, so the super detailed technical anatomies that we have is never enough, and the surgeon needs constant feedback (and insanely long/deep focus).
I'd add:
Religiously, routinely refactor. After almost every feature I do a feature level code analysis and refactoring, and every few features - codebase wide code analysis and refactoring.
I am quite happy with the resulting code - much less shameful than most things I've created in 40 years of being passionate about coding.
This. Historically there's been a lot of resistance to the idea of refactoring or refining features. The classic "It works, just ship it" mentality that leaves mountains of tech debt in its wake.
And there _was_ a good reason to resist refactoring. It takes time and effort! After "finishing" something, the timeline, the mental and physical energy, the institutional support, is all dried up. Just ship it and move on.
But LLMs change the equation. There's no reason to leave sloppy sub-optimal code around. If you see something, say something. Wholesale refactoring your PR is likely faster than running your test suite. Literally no excuses for bad code anymore.
You'd think it didn't need to be said but, given we have a tool to make coding vastly more efficient, some people use that tool to improve quality rather than just pump out more quantity.
We are becoming spec writers, wearing the PM/lead hats.
1) Do a gap and needs assessment. 2) Build business requirements. 3) Define scope of work to advance fulfillment. 4) Create functional and non-functional specs. 5) Divide-conquer-refine loop.
This is the main thing I have learned too. I've been building an internal tool for myself to annotate lines in each commit diff as good (green) / needs refactor (yellow) / needs rewrite (red) and it has helped me keep track of this kind of tech debt. Basically does what you could do with "TODO refactor" comments all over, but is more comprehensive and doesn't litter your source code. Plan to open source it once I've dog-fooded it a little more
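For what it's worth, here is a purely hypothetical sketch of the data such a tool could store per commit (nothing here is from the actual tool):

```ts
type ReviewVerdict = "good" | "needs-refactor" | "needs-rewrite";

interface LineAnnotation {
  commit: string;      // SHA of the commit whose diff was reviewed
  file: string;        // path within the repo
  line: number;        // line number on the new side of the diff
  verdict: ReviewVerdict;
  note?: string;       // the "why", instead of a TODO comment in the source
}

// Example: flag a generated helper for a later rewrite.
const flagged: LineAnnotation = {
  commit: "abc1234",
  file: "src/report/export.ts",
  line: 87,
  verdict: "needs-rewrite",
  note: "duplicates logic already in src/report/render.ts",
};
```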
I can't help but keep finding it ridiculous how everyone now discovers basic best practices (linting, documentation, small incremental changes) that have been known for ages. It's not needed because of AI, you should have been doing it like this before as well.
Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.
But there’s more time to do some of these other things if the actual coding time is trending toward zero.
And the importance of it can go up with AI systems because they do actually use the documentation you write as part of their context! Direct visible value can lead people to finally take more seriously things that previously felt like luxuries they didn’t have time for.
Again if you’ve been a developer for more than 10 minutes, you’ve had the discouraging experience of pain-stakingly writing very good documentation only for it to be ignored by the next guy. This isn’t how LLMs work. They read your docs.
> Anyone who’s been a developer for more than 10 minutes knows that best practices are hard to always follow through on when there’s pressure to ship.
> But there’s more time to do some of these other things if the actual coding time is trending toward zero.
I think you'll find even less time - as "AI" drives the target time to ship toward zero.
These best practice protections become essential only when you give the work to really bad programmers - such as parrots.
Remember having to write detailed specs before coding? Then folks realized it was faster and easier to skip the specs and write the code? So now are we back to where we were?
One of the problems with writing detailed specs is that it assumes you understand the problem - but often the problem is not yet understood; you learn to understand it through coding and testing.
So where are we now?
Skip specs, and you often ended up writing the wrong program - at substantial cost.
The main difference now is that the parrots have reduced the cost of the wrong program to near zero, thereby eliminating much of the perceived value of a spec.
We're not "thinking with portals" about these things enough yet. Typically we'd want a detailed spec beforehand, as coding is expensive and time consuming, so we want to make sure we're coding the right thing. With AI, though, coding is cheap. So let the AI skip the spec and write the code badly. Then have it review the solution, build understanding, design a spec for a better solution, and have it write it again. Rinse and repeat as many times as you need.
It’s also nothing new, as it’s basically Joe Armstrong's programming method. It’s just not prohibitively expensive for the first time in history.
Joe should sue.
Astronaut 1, AI-assisted developers: You mean, it's critical to plan and spec out what you want to write before you start in on code?
Astronaut 2, Tim Bryce: Always has been...
Spec-driven development is the only reliable way to work with AI. That's my current understanding. I spend more time refining the spec and bouncing ideas off the AI/team than before, which is good, because there can't be any incorrect assumptions or hidden variables, otherwise the AI will create suboptimal code. We should have been doing this much earlier in the process, even without AI, but now it's more necessary than ever. If you keep asking AI to make small changes as you learn about the business domain of your project, it will create a mess, in my experience. It's better to start from scratch and ask it to reimplement, once you finally understand all the requirements.
Sentiments like this make me wonder if perhaps the dream of the 90s was just ahead of its time. Things like UML, 4GLs, Rational were all being hyped. We were told that the future was a world where people could express the requirements & shape of the system, and the machines would do the rest.
Clearly that didn't happen, and then agile took over from the more waterfall/specs based approaches, and the rest was history.
But now we're entering a world where the state of the art is expressing your requirements & shape of the system. Perhaps this is just part of a broader pendulum swing, or perhaps the 1990s hopes & dreams finally caught up with technology.
Worked a lot with UML in industry and academia.
I think PG said something about sitting down and hacking being how you understand the problem, and it’s right. You can write UML after you’ve got your head round it, but the feedback loop when hacking is essential.
Yes and no, I'd say. It's still the case that only by iterating and testing things with the AI do you get closer to an actually good solution, so a big up-front spec will also not work so well. The only exception is maybe when you already have a very clear understanding and existing tests (like what they did with Claude building the Rust C compiler to compile the Linux kernel).
Ah yes, to have AI write code for you, you simply just need to, let's see ..
"Document the requirements, specifications, constraints, and architecture of your project in detail. Document your coding standards, best practices, and design patterns. Use flowcharts, UML diagrams, and other visual aids to communicate complex structures and workflows. Write pseudocode for complex algorithms and logic to guide the AI. Develop efficient debug systems for the AI to use. Build a system that collects logs from all nodes in a distributed system and provides abstracted information. Use a system that allows you to mark how thoroughly each function has been reviewed. Write property based high level specification tests yourself. Use strict linting and formatting rules to ensure code quality and consistency. Utilize path specific coding agent prompts. Provide as much high level information as practical, such as coding standards, best practices, design patterns, and specific requirements for the project. Identify and mark functions that have a high security risk, such as authentication, authorization, and data handling. Make sure that the AI is instructed to change the review state of these functions as soon as it changes a single character in the function. Developers must make sure that the status of these functions is always correct. Explore different solutions to a problem with experiments and prototypes with minimal specifications. Break down complex tasks into smaller, manageable tasks for the AI. You have to check each component or module for its adherence to the specifications and requirements."
And just like that, easy peasy, nothing to it.
As a supreme irony, the story currently on the front page directly under this one ('You are here'), makes the claim "The cost of turning written business logic into code has dropped to zero. Or, at best, near-zero." in the very first sentence.
Too bad that software developers are carrying water for those who hate them and mock them as obsolete in 6-12 months, while the latter are eating caviar (probably evading sanctions) and clinking champagne glasses in Davos:
https://xcancel.com/hamptonism/status/2019434933178306971
And all that after stealing everyone's output.
Underground Resistance Aims To Sabotage AI With Poisoned Data
https://news.ycombinator.com/item?id=46827777
Textile workers sabotage mechanical looms. History repeats itself.
I’ll believe it when those same engineers fix CC’s awful performance (mostly kidding, though I do wonder why they can’t - it feels like it’s doable).
In reality that man is hoping to IPO in 6-12 months, if anyone is wondering why the “use claude or you’re left behind” is so heavy right now.
The enthusiasm so many devs show for it is also quite bizarre, saying things like "AI makes me so much more productive," with the implication that they will be its primary beneficiaries, and that it won't result in a massive reduction in demand, compensation, and status for developers, adversely affecting them. Even more bizarre when you realize these devs aren't the ones optimizing some popular video codec or writing avionics software for a fighter jet, but instead gluing together NPM packages--probably the first or second rung on on the software "innovator's dilemma" ladder of disruption.
The funny thing is, when I got a lead position in my job, I used to write really detailed ticket descriptions, going into technical considerations and possible cross-domain problems. I did it for the juniors - and, to be honest, for myself, since I knew that if I took that ticket, between that moment and the moment I put some code down I could just forget stuff.
This was pushed back on hard by management because it "took too much time to create a ticket". I fought it for some months, but in the end I stopped, and also really lost the ability and patience to do it. Juniors suffered, implementation took more time. Time passed.
Now, I am supposed to do the exact same thing, but even better and for yesterday.
Sounds like an awful lot of work and nannying just to avoid writing code yourself. Coding used to be fun and enjoyable once...
I'm still writing code. I'm doing it to solve a problem; there's more to writing code than typing. Recently AI massively simplified "getting started", and all of the tips here are applicable to working well on a team.
My recent experience: I'm porting an app to Mac. It's been in my backlog for ~2 years. With Claude I had a functional prototype in under a day, with the major behavior implemented. I spent the next two weeks refactoring the original app to share as much logic as possible. The first two days were lots of fun. The refactoring was also something I had wanted to do to flush out unit tests, so still enjoyable.
The worst part was debugging bugs actually introduced by my own code from 5 years ago. My functions had naming issues that described the behavior wrong, confusing Claude, and that I needed to re-understand to add new features.
Parts of coding are frustrating. Using AI is frustrating for different reasons.
The most frustrating part was rebasing with git to create a sensible history (which I've had to do without AI in the past), reviewing the sheer volume of changes (14k lines), and then deciding "do I want my name on this", which involved cleaning up all the linter warnings I'd imposed on myself.
I’m finding it to be the opposite. I used to love writing everything by hand but now Claude is giving me the ability to focus more on architecture. I like just sitting down with my coffee and thinking about the next part of my project, how I’d like it to be written and Claude just fills it in for me. It makes mistakes at times but it also finds a lot of mine that I hadn’t even realized were in my code base.
Yep, I get that some people love the act of literally typing "x = 2;" but to me coding is first and foremost problem solving. I have a problem (either truly mine or someone else's), I come up with a solution in my head and slowly implement it.
Before I also had to code it and then make sure it had no issues.
Now I can skip the coding and then just have something spit out something which I can evaluate whether I believe is a good implementation of my solution or not.
Of course, you need the skill to know good from bad but for medium to senior devs, AI is incredibly useful to get rid of the mundane task of actually writing code, while focusing on problem solving with critical review of magically generated code.
A good bit of scaffolding and babysitting allows you to let the model run much faster and more efficiently. Building your tool faster. I don't code to code, I code to build something I want.
Also, there is no "compiler" or "type checker" for your SPEC. If you get something wrong in some paragraph somewhere, or contradict something in your spec X paragraphs later, you have to use the Mark-1 Eyeball to detect and fix it.
You have just transformed your job from developer to manual spec maintainer - a clerk who has to painstakingly check everything.
My tricks:
Define data structures manually, ask AI to implement specific state changes. So JSON, C .h files, or other source files of function signatures, with prompts put in there. I've never tried the monolithic Agents.md definition-file approach.
Also, I demand it stick to a limited set of processing patterns - usually dynamic, recursive programming techniques and functions. They just make the most sense to my head, and using one style I can spot-check faster.
I also demand it avoid making up abstractions and stick to mathematical semantics. Unique namespaces are not relevant to software in the AI era. It's all about using unique vectors as keys to values.
Stick to one behavior or type/object definition per file.
Only allow dependencies that are designed as libraries to begin with. There is a ton of documentation to implement a Vulkan pipeline so just do that. Don't import an entire engine like libgodot.
And for my own agent framework I added observation of my local system telemetry via common Linux files and commands. This data feeds back in to be used to generate right-sized sched_ext schedules and leverage bpf for event driven responses.
Am currently experimenting with generating small models of my own data - a single path of images, for example, not the entire Pictures directory. Each small model is spun up akin to a Docker container.
LLMs are monolithic (massive) zip files of the entire web. No one is really asking for that, and anyone who needs it already has access to the web itself.
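A small TypeScript illustration of the "write the data structures and function signatures yourself, let the model fill in the bodies" trick mentioned above - the inventory domain is invented purely for the example:

```ts
export interface Inventory {
  readonly items: Readonly<Record<string, number>>; // sku -> quantity
}

// PROMPT: implement as a pure function; return a NEW Inventory, never mutate.
// A negative qty must throw a RangeError.
export function addStock(inv: Inventory, sku: string, qty: number): Inventory {
  throw new Error("not implemented"); // body left for the agent to fill in
}

// PROMPT: same rules; removing more stock than exists must throw a RangeError.
export function removeStock(inv: Inventory, sku: string, qty: number): Inventory {
  throw new Error("not implemented"); // body left for the agent to fill in
}
```

The human keeps ownership of the shapes and the contracts; the agent only gets to colour inside those lines.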
small agents.md files are worth it, at least for holding some basic information (look at build.md to read how to build, the file structure looks like so), rather than have whatever burn double the amount of tokens searching for whatever anyways.
What is the ratio of Markdown to Code with these agents? How readable is the Markdown after you've finished using it to develop your plan? How much time does it take to review code so closely vs. writing it yourself in the first place?
The forcing function doesn't disappear - it shifts. When you read and critique AI-generated code carefully, you get a similar cognitive workout: Why did it structure this that way? What edge case did it miss? How does this fit the broader architecture?
The danger is treating the output as a black box. If you skip the review step and just accept whatever it produces, yes, you'll lose proficiency and accumulate debt. But if you stay engaged with the code, reading it as critically as you would a junior dev's PR, you maintain your understanding while moving faster.
The technical debt concern is valid but it's a process problem, not an inherent flaw. We solved "juniors write bad code" with code review, linting, and CI. We can solve "LLMs write inconsistent code" with the same tools - hannofcart's 10-layer static analysis stack is a good example. The LLM lies about passing checks? Pre-commit hook catches it.
Pre commit hook is definitely necessary. One thing I’ve seen a lot with Opus recently is it lying that a new linter warning or error was there before it made a change. They’ve learned from us too well!
The first rule is an anti-pattern. I think describing your architecture, or ANY kind of documentation for your AI, is an anti-pattern: it blows the context window, leading to worse results and actually more deviation.
The way to control the system is not to give it more words at the start. Agentic coding needs to work in a loop with dedicated context.
You need to think about how you can give as much intent as possible with as few words as possible.
You can build a tremendous number of custom lint rules the AI never needs to read, except when it misses them.
Every pattern in your repo gets repeated. The repo will always win over documentation, and when your repo is well structured you don't need to repeat this to the AI.
It's like dev has always been: watch what has gone wrong and make sure that whole type of error can't happen again.
it really does seem like this... also new devs are like that too: "I just copied this pattern used over here and there, what's wrong?" is something I've heard over and over lol
I think languages that allow expression of "this is deprecated, use x instead" will be useful for that too
I’ve written about that a bit here https://jw.hn/dark-software-fabric
AI gets a lot of big projects right if you give it all the tools to verify its own implementation; if you can build a proper system to verify the solution, it works astonishingly well. Even Opus 4.6's judgement seems to be wrong most of the time on projects of my scale before the validation layers.
In general, I prefer to do the top-level design and the big abstractions myself. I care a lot about cohesion and coupling; I like to give a lot of thought to my interfaces.
And in practice, I am happy enough that the LLM helps me to eliminate some toil, but I think you need to know when it is time to fold your cards, and leave the game. I prefer to fix small bugs in the generated code myself, than asking the agent, as it tends to go too far when fixing its own code.
The best thing about this is that AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now by some of the smartest coders in the world and the next gen AI will incorporate all of this, making them ironically unnecessary.
None of this is new, it was pretty much all "best practice" for decades and so already in the training data for the first generation.
If the issue is SNR and the ratio of "good" vs "bad" practices in the input training corpus, I don't know if that's getting better.
The more AI-produced crap each extra generation of AI consumes as training data, the worse it gets. This has been mathematically proven.
Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
They will also be reading all of the slop generated by the current and previous generations of LLMs
> AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now
Yes!
> by some of the smartest coders in the world
Hmm... How will it filter out those by the dumbest coders in the world?
Including those by parrots?
Ironically, I use the time saved using agents to read technical books ferociously.
Coding agents made me really get something back from the money I pay for my O'Reilly subscription.
So, coding agents are making me a better engineer by giving me time to dive deeper into books instead of having to just read enough to do something that works under time pressure.
Some patterns I found from my hobby project:
1. Keep things small and review everything the AI has written, or
2. Keep things bloated and let the AI do whatever it wants within the designated interface.
Initially I drew this line for API services / UI components, but it later expanded to other domains. E.g., for my hobby Rust project I try to keep "trait"s single-responsibility, never overlapping, easy to understand, etc., but I never look at AI-generated "impl"s as long as they pass some sensible tests and conform to the traits.
I'm finding Rust is perfect for me with LLMs.
I find rust generally easier to reason about, but can't stand writing it.
The compiler works well with LLMs, and there's plenty of good tooling and LSPs.
I usually write the function signatures / module APIs myself. If I'm happy with the shape of the code, and the compiler is happy with it compiling, then the errors, if any, are usually logical ones I should catch in reviews.
So I focus on function, compiler focuses on correctness and LLM just does the actual writing.
Do you think Rust will end up getting a boost from LLM adoption?
It definitely has for me! I just replied to the parent explaining why.
TL;DR: I don't mind reading Rust, I hate writing it, and the compiler meets me in the middle.
https://bcantrill.dtrace.org/2025/12/05/your-intellectual-fl...
I created my own Claude skill to enforce this and be sure it weaves in all the best practices we learned.
https://github.com/ryanthedev/code-foundations
I’m currently working on a checklist and profile based code review system.
I found an easier way that Works For Me (TM). I describe the problem to the LLM and ask it to solve it step by step, but strictly in Ask mode, not Agent mode. Then I copy or even type the lines into the code. If I wouldn't write the line myself, it doesn't go in, and I iterate some more.
I do allow it to write the tests (lots of typing there), but I break them manually to see how they fail. And I do think about what the tests should cover before asking LLM to tell me (it does come up with some great ideas, but it also doesn't cover all the aspects I find important).
Great tool, but it is very easy to be led astray if you are not careful.
[using an LLM as Stack Overflow]
even if you check and redo after paste, you need to check for gotchas. I wish I had a nickel for every time the llm gave me a solution with a hidden limitation. assume that it violates all your unspoken assumptions, and adheres only to what you nailed down in your prompt
The GSD tool (get-shit-done) automates a very similar process to this, and has been mind-blowing for larger projects and refactors.
https://github.com/glittercowboy/get-shit-done
You still need to know the hard parts: precisely what you want to build, all domain/business knowledge questions solved, but this tool automates the rest of the coding and documentation and testing.
It's going to be a wild future for software development...
My approach:
1. Have the LLM write code based on a clear prompt with limited scope
2. Look at the diff and fix everything it got wrong
That's it. I don't gain a lot in velocity, maybe 10-20%, but I've seen the code, and I know it's good.
Same. Small units of work, iterate on it till it's right, commit it, push it, then do the next increment of work. It's how I've always worked, except now I sometimes let someone else figure out the exact API calls (I'm still learning React, but Claude helps get the basics in place for me). If the AI just keeps screwing up, I'll grab the wheel and do it myself. It sometimes helps me get things going, but it hasn't been a huge increase in productivity - I'm not paying the bill, though, so whatever.
so is the 10-20% in velocity worth the money and the process-complexity added? I'm assuming you're measuring your own velocity, not your team's, since that includes time to review and deploy etc.
Every engineering org should be pleading with devs not to let AI write tests. They're awful, and sometimes they literally don't even assert against the code that was generated - instead they assert against logic re-implemented inside the tests themselves.
Every engineering org should be pleading with devs not to let AI write code, period. They continue to routinely get stuff wrong and can't be trusted any further than you can throw them.
I also made a list of tips on writing code with AI, with a special focus on security. Others may find the tips useful. Here they are: https://openssf.org/blog/2026/01/05/ai-software-development-...
I use it for scaffolding and often correct it toward the layout I prefer. Then I use it to check my code, and then scaffold in some more modules. I then connect them together.
As long as you review the code and correct it, it is no different than using Stack Overflow - a Stack Overflow that reads your code and helps stitch the context.
"Stack Overflow that reads your codebase" — perfect. But Stack Overflow is stateless. Agent sessions aren't.
One session's scaffold assumes one pattern. Second session scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.
Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.
Why do shallow and likely generated posts by a "Knowledge Management Advocate" get so many stars on HN?
Just because of a hype?
Yes this is strange. There's nothing of substance here that hasn't been repeated many times before. BUT...I did click it out of interest. So maybe it just came at an opportune lull in these types of posts and during the inflection point of 4.6 and 5.3 release. Complete guess
Hi i5heu. Given that you seem to use AI tools for generating images and audio versions of your posts, I hope it is not too rude to ask: how much of the post was drafted, written or edited with AI?
The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
If this was written with help of AI, I'd personally appreciate a small notice above the blog post. If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
Hi raphman,
I have written this text myself, except for 2 or 3 sentences which I iterated on with an LLM to nail down flow and readability. I would interpret that as completely written by me.
> The suggestions you make are all sensible but maybe a little bit generic and obvious. Asking ChatGPT to generate advice on effectively writing quality code with AI generates a lot of similar suggestions (albeit less well written).
Before I wrote this text, I also asked Gemini Deep Research, but for me the results were too technical and not as structural or high-level as I describe things here. Hence the blog post, to share what I have found works best.
> If not, I'd suggest to augment the post with practical examples or anecdotal experience. At the moment, the target group seems to be novice programmers rather than the typical HN reader.
I have pondered the idea and also wrote a few anecdotal experiences, but I deleted them again because I think it is hard to nail down the right balance, and it is also highly dependent on the project, which renders examples a bit useless.
I also kind of like the short and lean nature it has taken on over the last few days as I worked on the blog post. I might make a few more blog posts that expand on a few of the points.
Thank you for your feedback!
First article about writing code with AI I can get behind 100%. Stuff I already do, stuff I've thought about doing, and ideas I've never thought of doing ("Mark code review levels" especially is a _great_ idea).
All this boils down to is that AI wins when it amplifies engineers, not replaces them. And the best code still comes from devs who ultrathink.
You must know the stack, architecture, and approve manually. Otherwise, at this stage of AI development, the code becomes unmaintainable.
I want to give a try to gsd + open code + Cerebras code. Any experience?
attention is all AI needs
I don’t understand the interest in “quality code.” I never need to look at the code itself. I just make sure it runs right.
Quality makes it easier to make sure it runs right. Code that is easy to verify is quality code; code that is hard to verify is not.
In her defence, I use most of those strategies myself as well...
> Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.
I've always advocated for using a linter and consistent formatting, but now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore, I feel like linting doesn't matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of solidly documented test cases, and, much like compiled code, how the underlying implementation looks or is organized won't really matter.
For me it's the opposite. I've never read and re-read code more than I do today. The new hires generate 50 times more code than they used to, and you _have_ to check it or face compounding production issues (been there, done that). And the errors can now be anywhere; before, you more or less knew what the person writing the code was thinking and could understand why certain errors were made. LLM errors can hide _anywhere_, so you have to check it all.
Isn't that a losing proposition? Or do you get 50 times the value out of it too? In my experience, the more verbose the code is, the less thought-out it is. Lots of changes? Cool, now polish some more and come back when the change is below 100 lines, excluding tests and docs. I don't dare touch it before then.
You don't have to. How else are the new hires going to learn the downsides of outputting so much unreadable BS?
They serve as guardrails for agents to not do stupid things.
If your goal is for AI to write code that works, is maintainable and extensible, you have to include as many deterministic guardrails as possible.
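To make "deterministic guardrails" concrete, here is a minimal sketch of the kind of gate I mean, assuming a Python project with ruff, mypy and pytest on the PATH (the specific tools and the file name guardrail.py are just illustrative, not something the article prescribes):

    # guardrail.py: a minimal deterministic pre-merge gate (illustrative sketch).
    # Assumes a Python project with ruff, mypy and pytest installed and on PATH.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],              # lint: catch obvious mistakes early
        ["ruff", "format", "--check", "."],  # formatting must already be applied
        ["mypy", "."],                       # static types must hold
        ["pytest", "-q"],                    # behaviour must still pass the tests
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("guardrail failed:", " ".join(cmd))
                return 1  # non-zero exit blocks the merge
        print("all guardrails passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Wire something like this into CI or a pre-commit hook and tell the agent its work only counts as done when the script exits zero; the pass/fail signal is then deterministic rather than a matter of opinion.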
How to write good code with AI -> put in as much effort as you did before on 20% more code than you used to work with.
AI slop article. Just show me the prompt.
That sounds like the advice of someone who doesn't actually write high-quality code. Perhaps a better title would be "how to get something better than pure slop when letting a chatbot code for you" - and then it's not bad advice I suppose. I would still avoid such code if I can help it at all.
This take is pretty uncharitable. I write high-quality code, but there's also a bunch of code that could be useful which I don't write because it's not worth the effort. AI unlocks a lot of value in that way. And if there's one thing my 25 years as a software engineer have taught me, it's that while code quality and especially system architecture matter a lot, being super precious about every line of code really does not.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
Man, you are really missing out on the biggest revolution of my life.
This is the opinion of someone who has not tried Claude Code in a brand-new project with full permissions enabled and a model from the last 3 months.
People have been saying "the models from (recent timeframe) are so much better than the old ones, they solve all the problems" for years now. Since GPT-4 if not earlier. Every single time, those goalposts have shifted as soon as the next model came out. With such an abysmal track record, it's not reasonable to expect people to believe that this time the tool actually has become good and that it's not just hype.
This is a fading but common sentiment on Hacker News.
There are a lot of engineers who will refuse to wake up to the revolution happening in front of them.
I get it. The denialism is a deeply human response.
Claude Code is great at figuring out legacy code! I don't get the «for new systems only» idea, myself.
> in a brand new project
Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.
Can you be specific? You didn't provide any constructive feedback whatsoever.
The article did not provide a constructive suggestion on how to write quality code, either. Nor even empirical proof in the form of quality code written by LLMs/agents via the application of those principles.
Look up the Luddites on Wikipedia; it might be too deep to see the similarities, though.
So, I went and did that:
https://en.wikipedia.org/wiki/Luddite
> workers who opposed the use of certain types of automated machinery due to concerns relating to worker pay and output quality... Luddites were not opposed to the use of machines per se (many were skilled operators in the textile industry); they attacked manufacturers who were trying to circumvent standard labor practices of the time.
I heard that about NFTs not long ago.
TL;DR: Know what you are doing and outsource the typing to the LLM.
How to write quality code with AI? Don't let it write the code.