Comment by mitthrowaway2

1 month ago

If it works for you, then it's a good method, but in my opinion the most transparent way to avoid a false sense of precision in time estimation (as with anything else) is to include explicit error bars, rather than to change the units of measure.

Error bars are complicated, and who's to say how large they should be? It winds up being a lot of pointless arguing over arbitrary precision.

The Fibonacci sequence of point values has wound up being a lot simpler for most people, as it encapsulates both size and error: error tends to grow in proportion to size.

I.e. nobody is arguing over whether it's 10h +/- 1h, versus 12h +/- 1h, versus 12h +/- 2h, versus 11h +/- 3h. It's all just 5 points, or else 8 points, or else 13 points. It avoids discussion over any more precision than is actually reliably meaningful.
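
A minimal sketch of that bucketing effect (the Fibonacci values are the usual ones; the hours-per-point conversion is my own assumption, purely for illustration):

```python
# Snap an hour estimate to the nearest Fibonacci point bucket.
# HOURS_PER_POINT is an assumed conversion, for illustration only.
FIB_POINTS = [1, 2, 3, 5, 8, 13, 21]
HOURS_PER_POINT = 2.0

def to_points(hours: float) -> int:
    """Return the Fibonacci bucket nearest to the raw point estimate."""
    raw = hours / HOURS_PER_POINT
    return min(FIB_POINTS, key=lambda p: abs(p - raw))

# 10h, 11h, and 12h -- with whatever error bars -- all land on 5 points,
# so there's nothing finer-grained left to argue about.
for hours in (10, 11, 12):
    print(f"{hours}h -> {to_points(hours)} points")
```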

  • I worked on a product that was built around planning and estimation with ranged estimates (2-4h, 1-3d, etc.).

    2-12d conveys a very different story than 6-8d. Are the ranges precise? Nope, but they're useful in conveying uncertainty, which is something that gets dropped in any system that collapses estimates to a single point.

    That said, people tend to just collapse ranges, so I guess we all lose in the end.
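
    A rough sketch of what that collapsing throws away (the class and field names are mine, not from the product above):

    ```python
    # A ranged estimate whose width carries the uncertainty that a
    # single-point estimate drops.
    from dataclasses import dataclass

    @dataclass
    class RangedEstimate:
        low_days: float
        high_days: float

        def midpoint(self) -> float:
            return (self.low_days + self.high_days) / 2

        def spread(self) -> float:
            return self.high_days - self.low_days

    risky = RangedEstimate(2, 12)
    steady = RangedEstimate(6, 8)

    print(risky.midpoint(), steady.midpoint())  # 7.0 7.0 -- identical once collapsed
    print(risky.spread(), steady.spread())      # 10 2 -- the signal that got dropped
    ```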

    • > 2-12d conveys a very different story than 6-8d.

      In agile, 6-8d is considered totally reasonable variance, while 2-12d simply isn't permitted. If that's the level of uncertainty -- i.e. people simply can't decide on points -- you break the work up: a small investigation story for this sprint, then a decision next sprint about whether it's worth doing once you have a more accurate estimate. You would never blindly commit either way when you have no idea whether it will take 2 days or 12. That's a big benefit of the approach: it de-risks that kind of variance up front.
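
      A toy version of that de-risking rule (the spread threshold is an arbitrary assumption, not an agile standard):

      ```python
      # Flag estimates whose spread exceeds the tolerated variance, so they
      # get a timeboxed investigation story before anyone commits to them.
      def needs_spike(low_days: float, high_days: float, max_spread_days: float = 3.0) -> bool:
          return (high_days - low_days) > max_spread_days

      print(needs_spike(6, 8))   # False: reasonable variance, just schedule it
      print(needs_spike(2, 12))  # True: investigate first, estimate again next sprint
      ```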
