Comment by juanpabloaj
1 month ago
These past weeks I finally organized some ideas I'd been sitting on and wrote two posts:
From Agentic Reasoning to Deterministic Scripts: on why AI agents shouldn't reason from scratch on every repeated task, and how execution history could compile into deterministic automations
https://juanpabloaj.com/2026/03/08/from-agentic-reasoning-to...
The silent filter: on cognitive erosion as a quieter, more probable civilizational risk than a catastrophic event
Re: the silent filter, I'm reminded of the McLuhan quote:
"Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man's love by expediting his wishes and desires, namely, in providing him with wealth."
Thanks for the quote. Reading it, I can feel a Tsutomu Nihei or Giger atmosphere settling over me.
Do you have examples of the task maturation cycle? I'm not sure how it would work for tasks like extracting structured data from images. It seems it could only work for tasks that can be scripted and wouldn't work well for tasks that need individual reasoning in every instance.
No practical code example, sorry. The post is based on my own experience using agents, and I haven't reached a reusable generalization yet.
That said, two cases where I noticed the pattern:
Meal planning: I had a weekly ChatGPT task that suggested dinner options under nutritional constraints and generated a shopping list (e.g. two dinners with 100g of chicken each -> buy 200g). After a few iterations it became clear that, with a fixed set of recipes and their ingredients, a simple script generating combinations was enough. The agent's reasoning had already done its job: it helped me understand the problem well enough to replace itself.
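The meal-planning case could be sketched roughly like this. The recipe table and amounts are made up for illustration; the point is just that once the recipes are fixed data, enumerating dinner combinations and summing ingredients is plain scripting, no reasoning required:

```python
from itertools import combinations
from collections import Counter

# Hypothetical fixed recipe set: recipe name -> {ingredient: grams}.
RECIPES = {
    "chicken salad": {"chicken": 100, "lettuce": 50},
    "chicken rice": {"chicken": 100, "rice": 80},
    "lentil soup": {"lentils": 120, "carrot": 60},
}

def shopping_list(chosen):
    """Sum ingredient amounts across the chosen dinners."""
    total = Counter()
    for name in chosen:
        total.update(RECIPES[name])
    return dict(total)

def weekly_plans(n_dinners=2):
    """Enumerate every combination of n dinners with its shopping list."""
    for combo in combinations(RECIPES, n_dinners):
        yield combo, shopping_list(combo)
```

Picking "chicken salad" and "chicken rice" here yields a list with 200g of chicken, matching the "two dinners with 100g each -> buy 200g" rule above.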
QA exploration: I was using an agent to explore a web app as a QA tester. It took several minutes per run. After some iterations, the more practical path was having it log its explorations to a file, then derive automated tests from that log. The agent still runs occasionally, but the tests run frequently and cheaply.
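The QA case could work along these lines. The log format is a guess (one JSON object per line, which is not necessarily what the agent actually wrote), and `fetch` is a stand-in for whatever client drives the app; the idea is that the agent's exploration log becomes a table of expected outcomes that cheap automated tests can replay:

```python
import json

# Hypothetical log format the agent writes while exploring, e.g.
# {"action": "visit", "url": "/login", "status": 200, "title": "Log in"}
def derive_cases(log_text):
    """Turn an exploration log into (url, expected status, expected title) cases."""
    cases = []
    for line in log_text.splitlines():
        entry = json.loads(line)
        if entry.get("action") == "visit":
            cases.append((entry["url"], entry["status"], entry["title"]))
    return cases

def replay(cases, fetch):
    """Re-run each case with any fetch(url) -> (status, title) callable;
    return the URLs whose current behavior no longer matches the log."""
    failures = []
    for url, status, title in cases:
        if fetch(url) != (status, title):
            failures.append(url)
    return failures
```

The agent still runs occasionally to extend the log; `replay` is what runs frequently and cheaply.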
Regarding your point about tasks that need individual reasoning every time: I think you're right, and that's actually the core of the idea. Not every task matures into a script. Extracting structured data from images probably stays deliberative if the images vary significantly. The cycle only applies to tasks that, after enough repetitions, reveal a stable pattern. The agent itself is what helps you discover whether that pattern exists.