
Comment by mavamaarten

17 hours ago

Yup. I would never be able to give my Jira tickets to an LLM because they're too damn vague or incomplete. Getting the requirements first needs 4 rounds of lobbying with all stakeholders.

We had a client who'd create incredibly detailed Jira tickets. Their lead developer (also their only developer) would write exactly how he'd want us to implement a given feature, and what the expected output would be.

The guy is also a complete tool. I'd point out that what he described wasn't actually what they needed, and that their functionality was strange and didn't actually do anything useful. We'd be told to just do as we where being told, seeing as they where the ones paying the bills. Sometimes we'd read between the lines and just deliver what was actually needed; then we'd be told to just do as we where told next time, and they'd use the code we wrote anyway. At some point we got tired of the complaining and just did exactly as the tasks described, complete with tests that showed that everything worked as specified. Then we where told that our deliveries didn't work, because that wasn't what they'd asked for, but they couldn't tell us where we'd misunderstood the Jira task. Plus the tests showed that the code functioned as specified.

Even if the Jira tasks are in a state where it seems like you could feed them directly to an LLM, there's no context (or the context is incorrect), and how is a chatbot to know that the author of the task is a moron?

  • Every time I've received overly detailed JIRA tickets like this it's always been significantly more of a headache than the vague ones from product people. You end up with someone with enough tech knowledge to have an opinion, but separated enough from the work that their opinions don't quite work.

    • Same, I think there's an idealistic belief in people who write those tickets that something can be perfectly specified upfront.

      Maybe for the most mundane, repetitive tasks that's true.

      But I'd argue that the code is the full specification, so if you're going to fully specify it you might as well just write the code and then you'll actually have to be confronted with your mistaken assumptions.

  • Maybe you'll appreciate having it pointed out to you: you should work on your usage of "where" vs "were".

  • > how is a chatbot to know that the author of the task is a moron?

    Does it matter?

    The chatbot could deliver exactly what was asked for (even if it wasn't what was needed) without any angst or interpersonal issues.

    Don't get me wrong. I feel you. I've been there, done that.

    OTOH, maybe we should leave the morons to their shiny new toys and let them get on with specifying enough rope to hang themselves from the tallest available structure.

Who says an LLM can't be taught or given a system prompt that enables it to do this?

Agentic AI can now do 20 rounds of lobbying with all stakeholders, as long as it's over something like Slack.

A significant part of my LLM workflow involves having the LLM write and update tickets for me.

It can make a vague ticket precise and that can be an easy platform to have discussions with stakeholders.
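As a rough illustration of that workflow, the prompt below is one way to ask an LLM to tighten up a vague ticket. This is a hypothetical sketch: the template wording and the function name are my own, not any real Jira or LLM API, and the key constraint is that the model should surface open questions for stakeholders rather than invent missing requirements.

```python
# Hypothetical sketch of the "LLM makes a vague ticket precise" workflow.
# build_refinement_prompt is an illustrative helper, not a real API; the
# resulting string would be sent to whatever LLM you use.

def build_refinement_prompt(ticket_title: str, ticket_body: str) -> str:
    """Assemble a prompt asking an LLM to rewrite a vague ticket with a
    concrete goal, acceptance criteria, and explicit open questions,
    instead of guessing at missing requirements."""
    return (
        "You are helping refine a Jira ticket before implementation.\n"
        f"Title: {ticket_title}\n"
        f"Description: {ticket_body}\n\n"
        "Rewrite the ticket with: a concrete goal, acceptance criteria,\n"
        "and an 'Open questions' section listing anything ambiguous.\n"
        "Do not invent requirements; ask about them instead."
    )

if __name__ == "__main__":
    prompt = build_refinement_prompt(
        "Fix the export",
        "Users say the export is broken sometimes.",
    )
    print(prompt)
```

The rewritten ticket then becomes the artifact stakeholders discuss and correct, which is the point: the humans still review and agree on the text.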

  • I like this use of LLM because I assume both the developer and ticket owner will review the text and agree to its contents. The LLM could help ensure the ticket is thorough and its meaning is understood by all parties. One downside is verbosity, but the humans in the loop can edit mercilessly. Without human review, these tickets would have all the downsides of vibe coding.

    Thank you for sharing this workflow. I have low tolerance for LLM written text, but this seems like a really good use case.

  • Wait until you learn that the people on the other side of your ticket updates are also using LLMs to respond. It's LLMs talking to LLMs now.

    • Wait until you learn that most people's writing skills are below those of LLMs, so it's an actual tangible improvement (as long as you review the output so details aren't missed, of course)


    • The desired result is coming to a documented agreement on an interaction, not some exercise in argument that has to happen between humans.

      I find having an LLM create tickets for itself to implement to be an effective tool that I rarely have to provide feedback for at all.

      This seems like greybeards complaining about people who don't write assembly by hand.


  • A significant part of my workflow is getting a ticket that is ill-defined or confused and rewriting it so that it is something I can do or not do.

    From time to time I have talked over a ticket with an LLM and gotten back what I think is a useful analysis of the problem, put it into the ticket text or comments, and found my peeps tend to think these are TL;DR.

    • Yeah, most people won't read things. At the beginning of my career I wrote emails that nobody read, and then they'd be upset about not knowing this or that which I had already explained. Such is life; I stopped writing emails.

      An LLM will be just as verbose as you ask it to be. The default response can be very chatty, but you can figure out how to ask it to give results in various lengths.

Claude Code et al. ask clarifying questions in plan mode before implementing. This will eventually extend to Jira comments.

  • You think the business line stakeholder is going to patiently hang out in JIRA, engaging with an overly cheerful robot that keeps "missing the point" and being "intentionally obtuse" with its "irrelevant questions"?

    This is how most non-technical stakeholders feel when you probe for consistent, thorough requirements and a key professional skill for many more senior developers and consultants is in mastering the soft skills that keep them attentive and sufficiently helpful. Those skills are not generic sycophancy, but involve personal attunement to the stakeholder, patience (exercising and engendering), and cycling the right balance between persistence and de-escalation.

    Or do you just mean there will be some PM who acts as a proxy for the stakeholder on the ticket, but still needs to get them onto the phone and into meetings so the answers can be secured?

    Because in the real world, the former is outlandish and the latter doesn't gain much.

    • Businesses do whatever’s cheap. AI labs will continue making their models smarter, more persuasive. Maybe the SWE profession will thrive/transform/get massacred. We don’t know.