Comment by chucknthem
6 days ago
How are you writing your prompts? I usually break a feature down into smaller tasks before I prompt an agent (Claude Code in my case) to do anything. The feature level is often too hard to prompt and specify in enough detail for it to get right.
So I'd say Claude 4 agents today are at the autonomy level of a smart but fresh intern. You still have to do the high-level planning and task breakdown, but they can execute on tasks (say, ones requiring 10-200 lines of code, excluding tests). Asking one to write much more code (200+ lines) often requires a lot of follow-ups and ends in disappointment.
This is the thing that gets me about LLM usage. They can be amazing, revolutionary tech, and yes, they can also be nearly impossible to use right. The claim that they are going to replace this or that is hampered by the fact that very real skill is required (at best) or they just won't work most of the time (at worst). Yes, there are examples of amazing things, but the majority of results seem bad.
I have not had a ton of success getting good results out of LLMs, but this feels like a UX problem. If there’s an effective way to frame a prompt, why don't we get a guided form instead of a single chat box input?
Coding agents should take you through a questionnaire before working: break down what you are asking for into chunks, point me to the key files that matter for this change, and so on. I feel like a bit of extra guided prompting would help a lot of people get much better results, rather than expecting them to know the arcane art of proompting just by looking at a chat input.
I am just a muggle, but I have been using Windsurf for months and this is the only way for me to end up with working code.
A significant portion of my prompts involve writing to and reading from .md files, which plan and document progress.
When I start a new feature, it begins with: "We need to add a new feature X that does ABC. Create a .md in /docs to plan this feature. Ask me questions to help scope the feature."
I then manually edit the feature-x.md file, and only then tell the tool to implement it.
Also, after any major change, I say: "Add this to docs/current_app_understanding.md."
Every single chat starts with: "Read docs/current_app_understanding.md to get up to speed."
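To make that concrete, a planning doc usually ends up looking something like this (the feature, sections, and contents here are made up for illustration, not a template the tool requires):

    # Feature X: export results as CSV

    ## Goal
    Let users download the current filtered view as a CSV file.

    ## Scoping Q&A
    - Max export size? -> cap at 50k rows for now
    - Where does the button live? -> reports page toolbar

    ## Plan
    1. Add an export endpoint that reuses the existing filter logic
    2. Stream rows instead of building the file in memory
    3. Add a download button on the reports page

    ## Status
    - [x] Endpoint stubbed
    - [ ] Streaming
    - [ ] UI button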
The really cool side benefit here is that I end up with solid docs, which I admittedly would have never created in the past.
You can ask it to do this: in your initial prompt, encourage it to ask questions before implementing if it is unsure, e.g. end with "If any part of this is ambiguous, ask me clarifying questions before writing any code." Certain models like o4 seem to do this more by default, whereas Claude tends to try to do everything without clarifying.
I mean, if you ask Claude Code to walk through what you should do next with you, it'll ask lots of great questions and write you a great TODO.md file that it'll then work down, checking the boxes as it goes.
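The resulting TODO.md is typically just a checklist it then executes against; a made-up example of the shape:

    # TODO: add rate limiting to the API
    - [x] Survey middleware/ to see where auth runs
    - [ ] Add a token-bucket limiter keyed by API key
    - [ ] Wire the limiter into the request pipeline
    - [ ] Add tests for burst and sustained traffic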
You don't exactly need to know prompting; you just need to know how to ask the AI to help you prompt it.
I feel like when you prompt an LLM, it should treat your prompt almost as "what would the best possible prompt for this be?" and then act on that...