Comment by petekoomen

20 hours ago

> They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.

One surprising thing I've learned is that a fast feedback loop like this:

1. write a system prompt
2. watch the agent do the task, observe what it gets wrong
3. update the system prompt to improve the instructions

is remarkably useful in helping people write effective system prompts. Being able to watch the agent succeed or fail gives you real-time feedback about what is missing in your instructions, in a way that anyone who has ever taught or managed professionally will instantly grok.
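Concretely, the loop looks something like the sketch below. Everything here is a placeholder: `run_agent`, the `system_prompt.txt` file, and the example task stand in for whatever agent framework, model API, and prompt storage you actually use.

```python
# Minimal sketch of the write -> watch -> revise loop described above.
# `run_agent`, `system_prompt.txt`, and the example task are placeholders
# for whatever agent framework, model API, and prompt storage you use.
from pathlib import Path

PROMPT_FILE = Path("system_prompt.txt")  # assumed to exist already
TASK = "Summarize the attached support ticket and draft a reply."  # example task

def run_agent(system_prompt: str, task: str) -> str:
    """Placeholder for a real agent run; returns the transcript to review."""
    return f"[transcript of agent attempting {task!r} with a {len(system_prompt)}-char prompt]"

while True:
    system_prompt = PROMPT_FILE.read_text()       # 1. write (or re-read) the system prompt
    transcript = run_agent(system_prompt, TASK)   # 2. watch the agent do the task...
    print(transcript)                             #    ...and observe what it gets wrong
    if input("Good enough? [y/N] ").strip().lower() == "y":
        break
    input("Edit system_prompt.txt to fix the failure, then press Enter...")  # 3. update the instructions
```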

What I've found with agents is that they stray from the task and even start to flip-flop on implementations, going back and forth on a solution. They never admit they don't know something; they just brute-force a solution, even when the answer can't be found without trial and error or actually studying the problem. I repeatedly fall back to reading the docs and finishing the job myself, because the agent simply does not know what to do.

  • I think you're missing step 3! A key part of building agents is seeing where they're struggling and improving performance in either the prompting or the environment.

    There are a lot of great posts out there about how to structure an effective prompt. One thing they all agree on is breaking down the reasoning steps the agent should follow for your problem area (a rough example is sketched after this list). I think this is relevant to what you said about brute-forcing a solution rather than studying the problem.

    In the agent's environment there's a fine balance between giving the agent enough tools and information to solve any appropriate task, and giving it so many tools and so much information that it frequently gets lost down the wrong path and fails to come up with a solution. This is also something you'll iteratively improve by observing the agent's behavior and adapting; a rough sketch of that kind of curation follows below as well.
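On breaking down reasoning steps: here's a hypothetical system prompt for a code-fixing agent with the steps spelled out explicitly. The role and the steps are illustrative only, not taken from any particular post.

```python
# Hypothetical system prompt that enumerates reasoning steps explicitly.
# The agent role and the steps themselves are illustrative, not prescriptive.
SYSTEM_PROMPT = """\
You are a coding agent. For every task, work through these steps in order:
1. Restate the task in one sentence and list anything you don't yet know.
2. Read the relevant docs or source files before proposing any change.
3. Write down a short numbered plan.
4. Carry out the plan one step at a time, checking each result.
5. If a step fails twice, stop and report what you tried instead of guessing.
"""
```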
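And on balancing the environment: one simple approach is to define the full set of tools you could expose, but hand the agent only the subset the current task needs, adding more back only when observed failures show they're missing. A minimal sketch, where every tool name and signature is made up for illustration:

```python
from typing import Callable

# Hypothetical tool stubs; in practice these would wrap real capabilities.
def read_file(path: str) -> str: ...
def run_tests(target: str) -> str: ...
def search_docs(query: str) -> str: ...
def send_email(to: str, body: str) -> str: ...

ALL_TOOLS: dict[str, Callable] = {
    "read_file": read_file,
    "run_tests": run_tests,
    "search_docs": search_docs,
    "send_email": send_email,
}

# Start narrow for a code-fixing task; expand only when the agent's
# observed failures show it genuinely needs another tool.
CODE_FIX_TOOLS = {name: ALL_TOOLS[name] for name in ("read_file", "run_tests")}
```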