Comment by abustamam
1 day ago
I take this concept and I meta-prompt it even more.
I have a roadmap (AI-generated, of course) for a side project I'm toying around with to experiment with LLM-driven development. I read the roadmap, and I understand and approve it. Then, using some skills I found on skills.sh and slightly modified, my workflow is as follows:
1. Brainstorm the next slice
It suggests a few items from the roadmap to work on next, along with a high-level approach to implementing them. It asks me what the scope ought to be and what invariants ought to be considered. I ask it what the tradeoffs could be, why, and what it recommends given the product constraints. I approve a given slice of work.
NB: this is the part I learn the most from. I ask it why process X would be better than process Y given the constraints, and it either corrects itself or explains why. "Why use an outbox pattern? What other patterns could we use, and why aren't they the right fit?"
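For anyone unfamiliar with the outbox pattern I use as an example above, here's a minimal sketch of the idea (table and function names are made up for illustration; a real system would use a proper database and message broker, not SQLite and a list):

```python
import json
import sqlite3

# Outbox pattern sketch: the business write and the event row share one
# transaction, so an event is recorded if and only if the order commits.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, "
    "published INTEGER DEFAULT 0)"
)

def place_order(item):
    with conn:  # single transaction: both rows commit, or neither does
        cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        event = {"type": "order_placed", "order_id": cur.lastrowid}
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),)
        )

def relay_once(publish):
    # A separate poller drains unpublished events and marks them sent;
    # if publishing fails, the row stays unpublished and is retried.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE published = 0"
    ).fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

place_order("coffee")
sent = []
relay_once(sent.append)
```

The point of the question in the brainstorm step is exactly this kind of tradeoff: the outbox buys you atomicity between the state change and the event, at the cost of a poller and at-least-once delivery.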
2. Generate slice
After I approve what to work on next, it generates a high-level overview of the slice, including the files touched, saved in a persisted MD file. I read through the slice, ensure that it is indeed working on what I expect it to be working on, without scope-creeping or under-scoping, and I approve it. It then makes a plan based on this.
3. Generate plan
It writes a rather lengthy plan, with discrete task bullets at the top. Beneath, each step has to-dos for the LLM to follow, such as generating tests or running migrations, with a commit message for each step. I glance through this for any potential red flags.
4. Execute
This part is self-explanatory. It reads the plan and does its thing.
I've been extremely happy with this workflow. I'll probably write a blog post about it at some point.
If you want to have some fun, experiment with this: add a step (maybe between 3 and 4):
3.5 Prove
Have the LLM demonstrate, using our current documentation and other sources of facts, that the planned action WILL work correctly, without failure. Ask it to enumerate all the risks and point out how the plan mitigates each one. On several occasions I've seen the LLM backtrack at this step and come up with clever, so-far-unforeseen error cases.
That's a good thought experiment!
This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.
This dead internet uncanny (sarcasm?) valley is killing me.
Are you suggesting HN is now mostly bots boosting pro-AI comments? That feels like a stretch. Disagreement with your viewpoint doesn't automatically mean someone is a bot. Let's not import that reflex from Twitter.