Comment by recursive

13 hours ago

If we had a way of measuring velocity, we'd already be using that instead of tokens.

We had a way of measuring velocity, but who cares about estimating stories when we could be spinning up more agents? Burn a bunch of tokens and those stories will be DONE before you could even find your planning poker cards!

  • I've lived through a bunch of initiatives to improve planning and estimation. None of them turned into a stable process that worked for anyone. I don't know if I can extrapolate from that, but it leaves me inclined to think that no one really trusts anything that comes out of task estimation. Which would be why we're looking for more objective metrics like token burn rate. No room for argument - tokens are tokens!

  • This but unironically.

    The speed of generating code is now faster than the time it takes to plan and estimate how long it will take to generate the code.

    • Generating more code faster might be useful, but there have to be some other constraints on it.

      Using this paradigm, we can achieve unlimited bugs sooner than ever before.

      1. To fix a bug, always add code, never remove.
      2. Whenever you fix one bug, always introduce at least two new ones.

What do you mean? You get story points for free with Jira. That’s like the one metric every place uses.

  • Story points are unicorn dust that crumbles under any attempt at serious optimization. The fundamental problem is that SP is not an objectively defined metric. If we come under serious pressure to improve velocity as measured by SP, there's nothing to stop that initiative from trickling down into the SP estimation/measurement itself. SP works fine as long as you don't look too closely at it.