
Comment by presentation · 6 days ago

Coding agents should take you through a questionnaire before working: break down what you are asking for into chunks, point me to the key files that matter for this change, and so on. I feel like a bit of extra prompting would help a lot of people get much better results, rather than expecting them to know the arcane art of proompting just from looking at a chat input.

I am just a muggle, but I have been using Windsurf for months and this is the only way for me to end up with working code.

A significant portion of my prompts involve writing to and reading from .md files, which plan and document the progress.

When I start a new feature, it begins with: "We need to add a new feature X that does ABC. Create a .md in /docs to plan this feature. Ask me questions to help scope the feature."

I then manually edit the feature-x.md file, and only then tell the tool to implement it.
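
For a concrete picture, a plan file edited this way might end up looking something like the sketch below (the headings and entries are invented for illustration, not a fixed template):

```markdown
# Feature X

## Goal
What the feature should do (ABC) and why we want it.

## Scope
- In scope: ...
- Out of scope: ...

## Key files
- src/... (where the change lands)
- tests/... (coverage to add)

## Open questions
- Questions the agent asked, with my answers filled in.

## Implementation steps
1. ...
2. ...
```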

Also, after any major change, I say: "Add this to docs/current_app_understanding.md."

Every single chat starts with: "Read docs/current_app_understanding.md to get up to speed."
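
To make that concrete, a hypothetical docs/current_app_understanding.md might be structured like this (all file names invented for illustration):

```markdown
# Current App Understanding

## Architecture
One-paragraph overview of the main components and how they talk to each other.

## Key files
- src/main.* — entry point
- src/api/ — route handlers

## Conventions
Error handling, naming, and testing patterns the codebase follows.

## Recent changes
- Added feature X (see docs/feature-x.md)
```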

The really cool side benefit here is that I end up with solid docs, which I admittedly would never have created in the past.

You can ask it to do this: in your initial prompt, encourage it to ask questions before implementing if it is unsure. Certain models like o4 seem to do this by default, whereas Claude tends to try to do everything without asking for clarification.
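
For example, a clarify-first instruction might be phrased something like this (wording invented for illustration, placed at the top of the first message or in the tool's rules file, if it supports one):

```markdown
Before implementing anything:
- If any requirement is ambiguous, ask me clarifying questions first.
- Summarize your plan in a few bullets and wait for my go-ahead.
```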

I mean, if you ask Claude Code to walk you through what to do next, it'll ask lots of great questions and write you a great TODO.md file that it'll then walk down, checking the boxes as it goes.
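
The checklist it produces might look roughly like this (contents invented for illustration):

```markdown
# TODO: Feature X

- [x] Clarify scope and open questions
- [x] Update the data model
- [ ] Wire up the new endpoint
- [ ] Add tests
- [ ] Update docs/current_app_understanding.md
```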

You don't exactly need to know prompting; you just need to know how to ask the AI to help you prompt it.