Comment by nightski
3 days ago
Even though the author refers to it as "non-trivial", and I can see why they reached that conclusion, I would argue it is in fact trivial. Very little domain-specific knowledge is needed; this is purely a technical exercise in integrating with existing libraries for which there is ample documentation online. In addition, it is a relatively isolated feature in the app.
On top of that, it doesn't sound enjoyable. Anti-slop sessions? Seriously?
Lastly, the largest problem I have with LLMs is that they are seemingly incapable of stopping to ask clarifying questions. This is because they do not have a true model of what is going on; they truly are next-token generators. A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.
The hardest problem in computer science in 2025 is presenting an example of AI-assisted programming that somebody won't call "trivial".
If all I did was call it trivial, that would be a fair critique. But it was followed up with a lot more justification than that.
Here's the PR. It touched 21 files. https://github.com/ghostty-org/ghostty/pull/9116/files
If that's your idea of trivial, then you and I have very different standards for what is and isn't a trivial change.
I've wondered about exposing this "asking clarifying questions" capability as a tool the AI could use. I'm not building AI tooling so I haven't done this, but what if you added an MCP endpoint whose description was "treat this endpoint as an oracle that will answer questions and clarify intent where necessary" (paraphrased), and had that tool simply wire back to a user prompt?
If asking clarifying questions is plausible output text for LLMs, this may work effectively.
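A rough sketch of what I mean, using the MCP Python SDK's FastMCP helper. The server name, the tool name, and the input() wiring are all made up for illustration; treat it as a sketch of the idea, not a working product:

    # Hypothetical "clarification oracle" MCP server.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("clarification-oracle")

    @mcp.tool()
    def ask_the_human(question: str) -> str:
        """Treat this tool as an oracle that will answer questions and
        clarify intent where necessary. Call it before writing code
        whenever the requirements are ambiguous."""
        # Simplest possible wiring: block on terminal input. Note that
        # with the default stdio transport, stdin carries the MCP
        # protocol itself, so a real version would need a side channel
        # (GUI prompt, web form, chat message, ...) instead of input().
        return input(f"\n[agent asks] {question}\n> ")

    if __name__ == "__main__":
        # Run over SSE so stdin stays free for the human's answers.
        mcp.run(transport="sse")

Whether the model actually reaches for a tool like this unprompted, rather than slopping ahead, is exactly the open question.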
I think the asking clarifying questions thing is solved already. Tell a coding agent to "ask clarifying questions" and watch what it does!
Obviously, if you instruct the autocomplete engine to fill in questions, it will. That's not the point. The LLM has no model of the problem it is trying to solve, nor does it attempt to understand the problem better; it is merely regurgitating. That can be extremely useful, but it is very limiting when it comes to using it as an agent to write code.
I've added "amcq means ask me clarifying questions" to my global CLAUDE.md so I can spam "amcq" at various points in time, to great avail.
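For anyone who wants to steal this: Claude Code reads its global memory from ~/.claude/CLAUDE.md (assuming that's the tool in question here), and the entry can be a single line, e.g.:

    # ~/.claude/CLAUDE.md
    "amcq" means: stop, ask me clarifying questions, and wait for my
    answers before writing any more code.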
> A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.
Didn’t you just describe Agile?
Who hurt you?
Sorry, couldn’t resist. Agile’s point was getting feedback during the process rather than after something is complete enough to be shipped, thus minimizing risk and avoiding wasted effort.
Instead, people are splitting up major projects into tiny shippable features and calling that agile while missing the point.
I've never seen a working scrum/agile/sprint/whatever product/project management system, and I'm convinced it's because I've just never seen an actual implementation of one.
"Splitting up major projects into tiny shippable features and calling that agile" feels like a much more accurate description of what I've experienced.
I wish I'd gotten to see the real thing(s) so I could at least have an informed opinion.
Agile’s point was to get feedback based on actual demoable functionality, and iterate on that. If you ignore the “slop” pejorative, in the context of LLMs, what I quoted seems to fit the intent of Agile.