Comment by JBorrow

11 hours ago

Maybe I'm entirely out of the loop and a complete idiot, but I am really not sure what people mean when they talk about this stuff. I use AI agents every day, but people who say they spend 'most of my time writing agents and tools' must be living in an entirely different world.

I don't understand how people are making anything of any real usefulness without a feedback loop with them at the center. My agents can often go off for a few minutes, maybe 10, and write some feature. Half the time they get it wrong, I realize I prompted poorly, and I have to redo it myself or redo the prompt. A quarter of the time they have no idea what they're doing, and I realize I can fix the issue they're writing a thousand lines for with a single-line change. The final quarter of the time I need to follow up and refine their solution, either manually or through additional prompting.

That's also only a small portion of my time... The rest goes to curating data (which you pretty much have to do manually), writing code by hand (gasp!), working on deployments, and discussing things with actual people.

Maybe this is a limitation of the models, but I don't think so. To get to the vision in my head, there needs to be a feedback loop... Or are people just willing to abdicate that vision-making to the model? If you do that, how do you know you're solving the problem you actually want to solve?