
Comment by Xevion

1 day ago

>I have actually had great success with agentic coding by sitting down with a LLM to tell it what I'm trying to build and have it be socratic with me, really trying to ask as many questions as it can think of to help tease out my requirements.

Just curious, could you expand on the precise tools you use and the way you go about this?

For example, do you reuse the same well-crafted prompt in Claude or Gemini along with their in-house document curation features, or do you open a file in VS Code with Copilot Chat and just say "assist me in writing the requirements for this project in my README, ask questions, perform a socratic discussion with me, build a roadmap"?

You said you had 'great success', but I've found AI to be somewhat underwhelming at times, and I've been wondering whether that's down to my choice of models, my very simple prompt engineering, or inputs that are just insufficient or too complex.

I use Aider with a heavily tuned STYLEGUIDE.md and an AI rules document that basically outline this whole process, so I don't have to instruct it every time. My preferred model is Gemini 2.5 Pro, which is by far the best model for this sort of thing (Claude can one-shot some things about as well, but it's vastly inferior at following an engineering process and responding to test errors).
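
Roughly, that boils down to something like this on the command line; the Gemini model string and the AI-RULES.md name are placeholders for whatever files you actually point it at, so check aider --help for the exact flags:

    # Sketch only: the model string and AI-RULES.md are placeholders
    aider \
      --model gemini/gemini-2.5-pro \
      --read STYLEGUIDE.md \
      --read AI-RULES.md \
      --auto-commits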

  • How do you find Aider compares to Claude Code?

    • I like Aider's configurability: I can chain a lot of static analysis together and have the model fix everything it flags (rough sketch below), and I can have 2-4 Aider windows open in a grid and run them all at once; I'm not sure how that would work with Claude Code. Also, Aider managing everything with git commits is great.

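
      Sketch of that chaining with stand-in commands (ruff and pytest are just examples; the flags come from Aider's --lint-cmd/--test-cmd options, which run the given command after each edit and hand any failures back to the model):

          # Stand-in lint/test commands; Aider re-runs them after its edits
          # and asks the model to fix whatever they report
          aider \
            --model gemini/gemini-2.5-pro \
            --lint-cmd "ruff check ." \
            --auto-lint \
            --test-cmd "pytest -q" \
            --auto-test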