Comment by float4
2 days ago
I should have read this 12h ago! This afternoon, I tried to create my first simple agent using LangChain. My aim was to repeatedly run a specific Python analysis function, perform a binary search to find the optimal result, then compile the results into a markdown report and export it as a PDF.
However, I now realize that most of these steps don't require AI at all, let alone agents. I wrote the full algorithm (including the binary search!) in natural language for the LLM to follow. Although it sometimes worked, the model often misunderstood a step and produced random errors out of the blue.
I now realize that this is not what agents are for. This problem didn't require any agentic behavior. It was just a fixed workflow, with one single AI step (generating a markdown report text).
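For what it's worth, here's a minimal sketch of what that fixed workflow could look like as plain code. All names here (`analyze`, `generate_report_text`) are hypothetical stand-ins, and the one genuine AI step is stubbed out where a real LLM call would go:

```python
def analyze(x: float) -> float:
    """Placeholder for the actual Python analysis function (a toy score here)."""
    return -(x - 3.7) ** 2  # peaks at x = 3.7 in this toy example

def binary_search_max(f, lo: float, hi: float, tol: float = 1e-6) -> float:
    """Deterministic binary search maximizing a unimodal function f on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid + tol) > f(mid):  # f still rising -> optimum lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def generate_report_text(best_x: float, best_score: float) -> str:
    """The one step that would actually call an LLM; stubbed as a template here."""
    return (f"# Analysis report\n\n"
            f"Optimum found at x = {best_x:.3f} (score {best_score:.3f}).\n")

best_x = binary_search_max(analyze, 0.0, 10.0)
report_md = generate_report_text(best_x, analyze(best_x))
# Exporting report_md to PDF would be another ordinary, non-AI step.
```

Everything except `generate_report_text` is just deterministic control flow; handing the whole loop to the model was the mistake.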
Oh well, nothing wrong with learning the hard way.
That reminds me of another recent submission that seems relevant:
"Don’t let an LLM make decisions or execute business logic"
319 points, 168 comments, 1 day ago - https://news.ycombinator.com/item?id=43542259