Comment by tlogan
21 hours ago
At the end of the day, it comes down to one thing: knowing what you want. And AI can’t solve that for you.
We’ve experimented heavily with integrating AI into our UI, testing a variety of models and workflows. One consistent finding emerged: most users don’t actually know what they want to accomplish. They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.
Sure, AI reduces the learning curve for new tools. But paradoxically, it can also short-circuit the path to true mastery. When AI handles everything, users stop thinking deeply about how or why they’re doing something. That might be fine for casual use, but it limits expertise and real problem-solving.
So … AI is great—but the current diarrhea of “let’s just add AI here” without thinking through how it actually helps might be a sign that a lot of engineers have outsourced their thinking to ChatGPT.
> They struggle to express their goals clearly, and AI doesn’t magically fill that gap—it often amplifies the ambiguity.
One surprising thing I've learned is that a fast feedback loop like this:
1. write a system prompt
2. watch the agent do the task, observe what it gets wrong
3. update the system prompt to improve the instructions
is remarkably useful in helping people write effective system prompts. Being able to watch the agent succeed or fail gives you real-time feedback about what is missing from your instructions, in a way that anyone who has ever taught or managed professionally will instantly grok.
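A minimal sketch of that loop, assuming the OpenAI Python SDK (`pip install openai`) with an `OPENAI_API_KEY` in the environment; the file name, model name, and single fixed task are illustrative, not a specific product or workflow:

```python
# Sketch of the write / watch / revise loop for system prompts.
# Assumptions: system_prompt.txt exists, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is a placeholder for whatever chat model you use.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# The fixed task you watch the agent attempt on every iteration.
TASK = "Summarize the attached bug report and propose a one-line fix."

def run_once() -> str:
    # Step 1: load your current instructions.
    system_prompt = Path("system_prompt.txt").read_text()
    # Step 2 happens in your head: read the output and note what it got wrong.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": TASK},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Step 3: edit system_prompt.txt based on what you observed, then rerun.
    print(run_once())
```

The point is not the code, it's the cycle time: editing a text file and rerunning takes seconds, so you can see which of your instructions the agent actually follows.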
What I've found with agents is that they stray from the task and even start to flip-flop on implementations, going back and forth between solutions. They never admit they don't know something; they just brute-force a solution, even when the answer can't be found without trial and error or actually studying the problem. I repeatedly fall back to reading the docs and finishing the job myself, because the agent simply does not know what to do.
In the process of finding out what customers or a PM/PO want, developers ask clarifying questions when given an ambiguous start. An AI could be made to ask these questions too. It might even do this better than some engineers, simply because it has seen a huge number of such questions in its training data.
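A hedged sketch of that idea: the system prompt wording below is illustrative, not a canonical recipe, and it reuses the same OpenAI client setup as the earlier example.

```python
# Sketch of a "clarify before you solve" system prompt.
# Assumptions: OPENAI_API_KEY is set; the prompt text and model name are illustrative.
from openai import OpenAI

client = OpenAI()

CLARIFY_FIRST = (
    "Before proposing any solution, list the clarifying questions you would ask "
    "the stakeholder. Do not suggest an approach until those questions are answered."
)

def first_pass(ambiguous_request: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": CLARIFY_FIRST},
            {"role": "user", "content": ambiguous_request},
        ],
    )
    # The reply should be a list of questions rather than a design,
    # mirroring what a developer would do with an ambiguous ticket.
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(first_pass("Make the dashboard faster."))
```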
By using an AI, you're making a reasonable bet that your problem has been solved before, even if not in the exact details. That's true for a lot of technical tasks: I don't need to reinvent database access from first principles for every project; I google ORMs in my particular language and consider the options.
Even if the AI doesn't give you a direct solution, it's still a prompt for your brain, as if you were in a conversation.
I have also seen this in a specific form: well-learned idiots finding pseudo-explanations for why a technical choice should be made, despite not knowing anything about the topic.
I have witnessed a colleague look up a component datasheet on ChatGPT and repeat whatever it told him, even though the points it made weren't relevant to our use case.

In about ten years, when the old-guard programming crowd finally retires and/or unfortunately dies off, the knowledge monopoly will belong to the people who know what they don't know and can fill the gaps using appropriate information sources (including language models). The rest will probably resemble Idiocracy, on a spectrum from frustrating to hilarious.