Comment by darkwater
14 hours ago
This. In my case I do write tickets with an LLM from time to time, but it's always after a long exploratory session with Claude Code, where I go back and forth checking possibilities and gathering data, and then I tell it to create a ticket with the info gathered so far. But even then I tend to edit it, because I don't like the style or it adds some useless data that I want to remove.
It sounds like you'd save a lot of time if you just didn't use the LLM.
Don't discount the value in rubberducking with an AI.
They write shit code, but they can be prompted to highlight common failures in certain proposals.
For example, I am planning a gateway now, and ChatGPT correctly pointed out many common vulnerabilities that occur in such a product, all of which I knew but might not have remembered while coding, like request smuggling.
It missed a few, but that's okay too, because I have a more comprehensive list written down than I would have had if I rubber ducked with an actual rubber duck.
If I ever actually build this product, my product spec will have a list of warnings.
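That kind of warning list can even turn into guard code. As a minimal sketch (my own illustration, not from any actual gateway), the classic request-smuggling setup is a request whose body framing is ambiguous: it carries both Content-Length and Transfer-Encoding, or conflicting duplicate Content-Length headers, which front-end and back-end servers may interpret differently. A gateway can simply reject those:

```python
def is_ambiguous_framing(headers: list[tuple[str, str]]) -> bool:
    """Return True when request body framing is ambiguous (smuggling risk).

    Hypothetical helper for illustration only; header names are compared
    case-insensitively, as HTTP requires.
    """
    names = [name.lower().strip() for name, _ in headers]
    # CL.TE / TE.CL: both framing headers present, so the front-end and
    # back-end may disagree on where the body ends.
    if "content-length" in names and "transfer-encoding" in names:
        return True
    # Duplicate Content-Length headers with conflicting values.
    cl_values = {value.strip() for name, value in headers
                 if name.lower().strip() == "content-length"}
    if len(cl_values) > 1:
        return True
    return False
```

A real gateway would do much more (normalizing obfuscated header names, validating chunked encoding, etc.), but even this narrow check covers the textbook CL.TE/TE.CL cases.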
Not really, the exploratory phase is (probably) much faster with Claude Code than on my own. Writing a well-specified ticket is very, very time consuming. With Claude Code it's much easier for me to branch off, follow the "what if actually...?", and challenge some assumptions that, if I had to do it manually, I would just take for granted during ticket writing. Because if I'm "sure enough" that a fact is what I recall it to be, then I just don't check and trust my memory. When paired with an LLM, that threshold goes up, and if I'm not 101% sure about something, I'll send Claude Code to fetch that info from some internal artifact (e.g. code in a repository, the actual state of the infra, etc.) and then make the next decision.
But that would mean they would have to do a lot more work in the same amount of time.