Comment by alwillis
3 days ago
Are we at a stage where an LLM (assuming it doesn't find the solution on its own, which is ok) would come back to me and say, listen, I've tried your approach but I've run into this particular difficulty, can you advise me what to do, or would it just write incorrect code that I would then have to carefully read and realise what the challenge is myself?
Short answer: Maybe.
You can tell Claude Code under what conditions it should check in with you. Giving it tests it can run to verify the code it wrote helps a lot; in some cases, when a unit test fails, Claude can go back and fix the error on its own.
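For instance, a minimal unit test gives it a concrete pass/fail signal to iterate against. This is just a hedged sketch: `slugify` is a made-up project function, not anything from Claude Code itself, and the tests would normally be run with a runner like pytest.

```python
# Hypothetical example: tests Claude Code could run after each edit.
# `slugify` stands in for whatever function you asked it to write.
def slugify(title: str) -> str:
    # Reference implementation so the sketch is runnable:
    # lowercase the title and join whitespace-separated words with hyphens.
    return "-".join(title.lower().split())

def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_extra_whitespace():
    assert slugify("Claude   Code") == "claude-code"
```

If a test like this fails, the failing assertion tells Claude exactly which behavior broke, which is far more actionable than asking it to "check the code."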
Providing an example (where it makes sense) also helps a lot.
Anthropic has good documentation on helpful prompting techniques [1].
[1]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...