Comment by GreenWatermelon
18 days ago
I'm pretty damn sure the parent, by saying "writing code", meant the physical act of pushing down buttons to produce text, not the problem-solving process that precedes writing said code.
This. Most people defer the solving of hard problems to when they write the code. That is wrong, and too late to be effective. In one way, using agents to write code forces the thinking to occur closer to the right level - not at the code level - but in another way, if the thinking isn't done, or isn't done correctly, the agent can't help.
Disagree. No plan survives first contact.
I can spend all the time I want inside my ivory tower, hashing out plans and architecture, but the moment I start hammering letters in the IDE my watertight plan suddenly looks like Swiss cheese: constraints and edge cases that weren't accounted for during planning, flows that turn out to be infeasible without a clunky implementation, etc...
That's why writing code has become my favorite method of planning. The code IS the spec, and English is woefully insufficient when it comes to precision.
This makes agentic workflows even worse, because you'll only discover your architectural flaws much, much later in the process.
I also think this is why AI works okay-ish on tiny new greenfield webapps and absolutely doesn't on large legacy software.
You can't accurately plan every little detail in an existing codebase, because you'll only find out about all the edge cases and side effects when trying to work in it.
So, sure, you can plan what your feature is supposed to do, but your plan of how to do that will change the minute you start working in the codebase.
Yeah, I think this is the fundamental thing I'm trying to get at.
If you think through a problem as you're writing the code for it, you're going to end up rowing up the wrong creek, because you'll have been furiously head-down rowing the entire time, paying attention to whatever local problem you were solving, or whatever piece of syntax or library trivia or compiler-satisfaction game you were playing, instead of the bigger picture.
Obviously, before starting to write, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve them, and so on; but the problem with doing that without an agent is that it then becomes boring. You've basically laid out a plan ahead of time and now you've just got to execute on it, which means (even though you might fairly often revise the plan as you learn unknown unknowns or iterate on the design) that you've kind of sucked all the fun and discovery out of the code-writing process. And it sort of means that you've essentially implemented the whole thing twice.
Meanwhile, with a coding agent, you can spend all the time you like building up that initial software design document, or specification, and then have it implement that. Basically, you can spend all the time in your hammock thinking things through and looking ahead, and then have that directly translated into pull requests you can accept or iterate on, instead of having to do an intermediate step that repeats the effort of the hammock time.
Crucially, this specification or design document doesn't have to remain static. As you discover problems or limitations or unknown unknowns, you can revise it and keep executing on it, meaning it's a living documentation of your overall architecture and goals as they change. This means you can really stay thinking at the high level instead of getting sucked into the low level. Coding agents also make it much easier to send something off to vibe out a prototype, or to explore the code base of a library or existing project in detail to figure out the feasibility of some idea, meaning that the parts that traditionally would have taken a lot of effort (verifying that your planning makes sense) have a much lower activation energy, so you're more likely to actually try things out in the process of building a spec.
I believe programming languages are a better language than English for planning the architecture, the algorithms, the domain model, etc...
The way I develop mirrors the process of creating said design document. I start with a high-level overview, define what entities the program should represent, define their attributes, etc... only now I'm using a more specific language than English. By creating a class or a TS interface with some code documentation, I can use my IDE's capabilities to discover connections between entities.
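To make that concrete, here's a minimal sketch of what such an entity-first "design document" can look like; the library-lending domain and every name in it are invented for illustration:

```typescript
// Invented library-lending domain, purely for illustration.

/** A person who can borrow items. */
interface Member {
  id: string;
  name: string;
}

/** A physical copy of a book that can be lent out. */
interface Copy {
  barcode: string;
  title: string;
}

/**
 * Connects a Member to a Copy. Referencing the other entities via
 * indexed access types (Member["id"], Copy["barcode"]) instead of
 * bare strings means the IDE's "Find All References" on Member.id
 * surfaces every entity that depends on it.
 */
interface Loan {
  memberId: Member["id"];
  copyBarcode: Copy["barcode"];
  dueDate: Date;
}
```

At this stage there's no behavior at all; the point is that renaming or retyping an attribute is a refactor the tooling tracks, rather than a find-and-replace through a prose document.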
I can then give the code to an LLM to produce a technical document for managers or something. It'll be a throwaway document because such documents are rarely used for actual decision making.
> Obviously, before starting to write, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve them, and so on;
I do this with code, and the IDE is much better than MS Word or whatevah at detecting my logical inconsistencies.
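As a hypothetical example of what "detecting logical inconsistencies" buys you (the Invoice/Refund domain here is made up), a design decision encoded in types is one the compiler enforces:

```typescript
// Made-up Invoice/Refund domain, for illustration only.

interface Invoice {
  id: string;
  amountCents: number;
}

/** The design says "a refund always references an invoice", so encode it. */
interface Refund {
  invoiceId: Invoice["id"];
  amountCents: number;
}

function applyRefund(invoice: Invoice, refund: Refund): Invoice {
  // A constraint a prose spec could only state, checked here at runtime:
  if (refund.invoiceId !== invoice.id) {
    throw new Error("refund does not belong to this invoice");
  }
  // If Invoice.amountCents is later renamed or retyped, this function
  // stops compiling, so the "spec" and the code can't silently drift.
  return { ...invoice, amountCents: invoice.amountCents - refund.amountCents };
}
```

MS Word will happily let two sections of a design document contradict each other; tsc won't.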
Ah, perhaps. I understood it a little more broadly to include everything beyond pseudocode, rather than purely being able to use your fingers. You can solve a problem with pseudocode, and seasoned devs won't have much of an issue converting it to actual code, but it's not a fun process for everyone.
Yeah, I basically write pseudocode and let the AI take it from there.
But this is exactly my point: if your "code" is different than your "pseudocode", something is wrong. There's a reason why people call Lisp "executable pseudocode", and it's because it shrinks the gap between the human-level description of what needs to happen and the text that is required to actually get there. (There will always be a gap, because no one understands the requirements perfectly. But at least it won't be exacerbated by irrelevant details.)
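As a rough sketch of what a small gap looks like (the Order domain and notify function are invented for the example), the code here stays within a line of its English description:

```typescript
// Invented Order domain; notify() stands in for a real side effect.

interface Order {
  customer: string;
  dueDate: Date;
}

function notify(customer: string): void {
  console.log(`Reminder sent to ${customer}`);
}

// Pseudocode: "for every order past its due date, notify the customer"
function remindOverdue(orders: Order[], today: Date): void {
  orders
    .filter((o) => o.dueDate < today)
    .forEach((o) => notify(o.customer));
}
```

The remaining distance between the comment and the function body is mostly type annotations, not translation.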
Reading the prompt example half a dozen levels up reminds me of Greenspun's tenth rule:
> Any sufficiently complicated C++ program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]
But now the "program" doesn't even have formal semantics and isn't a permanent artifact. It's like running a compiler and then throwing away the source program and only hand-editing the machine code when you don't like what it does. To me that seems crazy and misses many of the most important lessons from the last half-century.
[1]: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule (paraphrased to use C++, but applies equally to most similar languages)
Replying to sibling comment:
The problem is that you actually have to implement that high-level DSL to get Lisp to look like that, and most DSLs are not going to be as concise and abstract as a natural-language description of what you want, followed by just making sure it resulted in the right thing. That initial boilerplate is exactly what I'd want to use AI for, working from a high-level description of what the DSL should do.
And a Lisp macro DSL is not going to help with automating refactors, iterating automatically to take care of small compiler issues or minor bugs without your involvement (so you can focus on the overall goal), remembering or discovering specific library APIs or syntax, etc.