Comment by SkyPuncher
20 days ago
I loathe using AI in a greenfield project. There are simply too many possible paths, so it seems to randomly switch between approaches.
In a brownfield code base, I can often provide it reference files to pattern match against. So much easier to get great results when it can anchor itself in the rest of your code base.
The trick for greenfield projects is to use it to help you design detailed specs and a tentative implementation plan. Just bounce some ideas off of it, as with a somewhat smarter rubber duck, and hone the design until you arrive at something you're happy with. Then feed the detailed implementation plan step by step to another model or session.
This is a popular workflow I first read about here[1].
This has been the most useful use case for LLMs for me. Actually getting them to implement the spec correctly is the hard part, and you'll have to take the reins and course-correct often.
[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
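A minimal sketch of what "feed the detailed implementation plan step by step" could look like in practice, assuming the plan lives in a plan.md with one step per "## " heading and using the OpenAI Python SDK; the file layout, prompt wording, and model name are illustrative guesses, not details from the linked post:

    # Sketch only: feed an implementation plan to a model one step at a time.
    # Assumes plan.md uses "## " headings for steps; this is an assumption,
    # not part of the workflow described in the comment above.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    steps = [s.strip() for s in Path("plan.md").read_text().split("## ") if s.strip()]

    messages = [{"role": "system", "content": "Implement each step exactly as specified."}]
    for step in steps:
        messages.append({"role": "user", "content": f"Implement this step:\n\n## {step}"})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)  # review and course-correct here before moving on

The point of the loop is that each step is small enough to review, and the accumulated conversation keeps later steps consistent with earlier ones.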
Here’s my workflow, which takes that a few steps further: https://taoofmac.com/space/blog/2025/05/13/2230
This seems like a good flow! I end up adding a "spec" and "todo" file for each feature[1]. This allows me to flesh out some of the architectural/technical decisions in advance and keep the LLM on the rails when the context gets very long.
[1] https://notes.jessmart.in/My+Writings/Pair+Programming+with+...
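One way to wire those per-feature "spec" and "todo" files into every request is sketched below; the file names, directory layout, and SDK are assumptions on my part rather than details from the linked notes:

    # Sketch only: anchor the model by prepending a feature's spec and todo
    # files to each request. features/login/spec.md and todo.md are made-up
    # paths used purely for illustration.
    from pathlib import Path
    from openai import OpenAI

    def anchored_messages(feature_dir: str, request: str) -> list[dict]:
        spec = Path(feature_dir, "spec.md").read_text()
        todo = Path(feature_dir, "todo.md").read_text()
        return [
            {"role": "system", "content": f"Feature spec:\n{spec}\n\nOutstanding tasks:\n{todo}"},
            {"role": "user", "content": request},
        ]

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=anchored_messages("features/login", "Implement the next unchecked task."),
    )
    print(reply.choices[0].message.content)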
The trouble occurs when the brownfield project is crap already.