After owning a product, I've developed a lot of sympathy for the people outside of engineering who have to put up with us. Engineers love to push back on estimates, believing that "when it's done" is somehow an acceptable answer for the rest of the business to function on. In a functioning org, there are a lot of professionals depending on correct estimation to do their jobs.
For us, an accurate delivery date on a 6-month project was mandatory. CX needed it so they could start onboarding high-priority customers. Marketing needed it so they could plan advertising collateral and make promises at conventions. Product needed it to understand what the Q3 roadmap should contain. Sales needed it to close deals. I was fortunate to work in a business where I respected the heads of these departments, which, believe it or not, should be the norm.
The challenge wasn't estimation - it's quite doable to break a large project down into a series of sprints (basically a sprint/waterfall hybrid). Delays usually came from unexpected sources, like reacting to a must-have interruption or critical bugs. Those you cannot estimate for, but you can collaborate on a solution: trim features, push the date, bring in extra help, or crunch. Whatever the decision, making sure to work with the other departments as collaborators was always beneficial.
I used to work in the semiconductor industry writing internal tools for the company. Hardware very rarely missed a deadline and software was run the same way.
Things rarely went to plan, but as soon as any blip occurred, there'd be plans to trim scope, crunch more, or push the date, with many months of notice.
Then I joined my first web SaaS startup and I think we didn't hit a single deadline in the entire time I worked there. Everyone thought that was fine and normal. Interestingly enough, I'm not convinced that's why we failed, but it was a huge culture shock.
You're saying it would be convenient for you to know the future. It would also be convenient for me. That said, if you haven't done very similar work in the past, it's very unlikely you'll know exactly how much time it will take.
In practice developers have to "handle" the people requesting hard deadlines. Introduce padding into the estimate to account for the unexpected. Be very specific about milestones to avoid expectations of the impossible. Communicate missed milestones proactively (and there will be missed milestones). The date is given so people feel safe. And sometimes you'll cause unnecessary crunch so that a deadline you fought for gets met. Other times, you'll need to negotiate what to drop.
But an accurate breakdown of a project amounts to executing that project. Everything else is approximation and prone to error.
Here's my (somewhat tongue-in-cheek) rubric:
- If it's an internal project (like migrating from one vendor to another, with no user impact) then it takes as long as I can convince my boss it is reasonable to take.
- If it's a project with user impact (like adding a new feature) then it takes as long as the estimated ROI remains positive.
- If it's a project that requires coordination with external parties (like a client or a partner), then the sales team gets to pick the delivery date, and the engineering team gets to lie about what constitutes an MVP to fit that date.
This time around, we asked Cursor to estimate given the PRD and codebase. It gave a very detailed estimate. We're currently in the process of getting it down to what leadership wants (as in the article). The AI estimates much better and faster than we do, and we're bringing the number down much faster than the AI does, sometimes by changing the PRD, prioritizing the flows, or cutting down the scope of the MVP. Honestly, AI is a great tool for estimation.
One thing I think is missing is an understanding of why there is such a top-down push for timelines: because saying "we aren't sure when this feature will be delivered" makes sales people look like they don't know what they are talking about. Which.... well.
They would much rather confidently repeat a date that is totally unfounded rubbish which will have to be rolled back later, because then they can blame the engineering team for not delivering to their estimate.
I'm a dev, not a salesperson, but let's be realistic. A company tells you "yeah we're interested in signing at $1M/yr, but we really need this feature, when will you have it by?", to which saying "eh we don't know - it'll be done when it's done" will lead to the company saying "ok well reach out when you have it, we can talk again then" (or just "eh ok then not a good fit sorry bye"), and in the meantime they'll go shopping around and may end up signing with someone else.
Having a promised date lets you keep the opportunity going and in some cases can even let you sign them there and then - you sign them under the condition that feature X will be in the app by date Y. That's waaaay better for business, even if it's tougher for engineers.
“Sign up and pay at least part of it now and we’ll prioritize the feature”.
I’ve seen enough instances of work being done for a specific customer that doesn’t then result in the customer signing up (or - once they see they can postpone signing the big contract by continuing to ask for “just one more crucial feature”, they continue to do so) to ever fall for this again.
Just to consider the opposite viewpoint, I sometimes wonder if it's not better that they do churn in that case. Assuming the sales team is doing their job properly, there are other prospects who may not need that feature, and not ramming the feature in under time constraints will lead to a much better product. Eventually, their feature will be built, and it will have taken the time that it needed, so they'll probably churn back anyway, because the product from the vendor they did get to ram their feature in is probably not very good.
Unless it's the first time they're hearing about it, when a customer asks about a feature, sales should have done their homework and checked with the team doing the work for a rough estimate, instead of pulling a number out of their behinds.
If you hired someone to do some work on your house, and they refused to give an estimate, would you be happy?
If you had a deadline - say thanksgiving or something - and you asked “will the work be done by then” and the answer was “I’m not going to tell you” would you hire the person?
The no estimates movement has been incredibly damaging for Software Engineering.
Painting a wall has no "if then else". You don't need to test to see if the wall has been painted.
I guess a fair analogy would be if the homeowner just said "make my home great and easy to use" by Thanksgiving without too many details, and between now and Thanksgiving refines this vision continuously, like literally changing the color choice halfway through or after fully painting a wall… then it's really hard to commit.
If a homeowner has a very specific list of things with no on-the-job adjustments, then usually you can estimate (most home contract work).
All software requests are somewhere in between the former and the latter, most often leaning towards the former scenario.
If work on a house was specified like a typical software project, no builder would even return your call.
"I'd like to have my roof reshingled, but with glass tiles and it should be in the basement, and once you are half way I'll change my mind on everything and btw, I'm replacing your crew every three days".
Most businesses like to pretend change orders don't apply to software.
For any slightly complicated project on a house the estimate assumes everything goes right, which everyone knows it probably won't. It's just a starting point, not a commitment.
The top-down push for timelines is because:
In Australia, an SDE + overhead costs say $1500 / work day, so 4 engineers for a month is about $100k. The money has to be allocated from budgets and planned for etc. Dev effort affects the financial viability and competitiveness of projects.
I feel like many employees have a kind of blind spot around this? Like for most other situations, money is a thing to be thought about and carefully accounted for, BUT in the specific case where it's their own days of effort, those don't feel like money.
Also, the software itself presumably has some impact or outcome and quite often dates can matter for that. Especially if there are external commitments.
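For illustration, a minimal sketch of that cost arithmetic (the daily rate and work days per month are assumptions taken from the ballpark figures above and will vary by org):

```python
# Rough dev-cost arithmetic based on the figures in the comment above.
DAILY_COST_AUD = 1_500       # assumed loaded cost per engineer per work day
WORK_DAYS_PER_MONTH = 21     # assumption; varies with leave and public holidays

def team_cost(engineers: int, months: float) -> float:
    """Approximate loaded cost of a team over a duration, in AUD."""
    return engineers * months * WORK_DAYS_PER_MONTH * DAILY_COST_AUD

print(team_cost(4, 1))  # ~126,000: the ballpark behind "about $100k" a month
print(team_cost(4, 6))  # ~756,000 if the same team runs for two quarters
```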
The only approach that genuinely works for software development is to treat it as a "bet". There are never any guarantees in software development.
1. Think about what product/system you want built.
2. Think about how much you're willing to invest to get it (time and money).
3. Cap your time and money spend based on (2).
4. Let the team start building and demo progress regularly to get a sense of whether they'll actually be able to deliver a good enough version of (1) within time/budget.
If it's not going well, kill the project (there needs to be some provision in the contract/agreement/etc. for this). If it's going well, keep it going.
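A minimal sketch of that "bet" framing, purely illustrative (the field names and thresholds are assumptions, not anything the comment prescribes):

```python
from dataclasses import dataclass

@dataclass
class Bet:
    budget: float          # money you're willing to invest (steps 2-3)
    cap_weeks: int         # time you're willing to invest (steps 2-3)
    spent: float = 0.0
    weeks_elapsed: int = 0

    def review(self, demo_on_track: bool) -> str:
        """Step 4: at each regular demo, decide whether the bet stays open."""
        over_cap = self.spent > self.budget or self.weeks_elapsed > self.cap_weeks
        if over_cap or not demo_on_track:
            return "kill"      # exercise the kill provision in the agreement
        return "continue"

bet = Bet(budget=100_000, cap_weeks=12, spent=40_000, weeks_elapsed=6)
print(bet.review(demo_on_track=True))  # "continue"
```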
Doesn't this ignore the glaring difference between a plumbing task and a software task? That is, level of uncertainty and specification. I'm sure there are some, but I can't think of any ambiguous plumbing requirements on the level of what is typical from the median software shop.
I think this is unfair to sales.
I've made your argument before, but realistically, much of the world revolves around timelines and it's unreasonable to expect otherwise.
When will you recover from your injury so you can play the world cup?
When will this product arrive that I need for my child's birthday?
When will my car be repaired, that I need for a trip?
How soon before our competitors can we deliver this feature?
"It'll be done when it's done" is very unsatisfying in a lot or situations, if not downright unacceptable.
But it's the reality of engineering. If reality is unacceptable, that's not reality's problem.
But the problem is, the sales world has its own reality. The reality there is that "we don't know when" really is unacceptable, and "unacceptable" takes the form of lost sales and lost money.
So we have these two realities that do not fit well together. How do we make them fit? In almost every company I've been in, the answer is, badly.
The only way estimates can be real is if the company has done enough things that are like the work in question. Then you can make realistic (rough) estimates of unknown work. But even then, if you assign work that we know how to do to a team that doesn't know how to do it, your estimates are bogus.
Anyone from a sales role care to speak to this?
Sales gets fired (or not paid) for missing their estimates (quotas, forecasts) and often have little empathy for engineering being unable to estimate accurately.
The most important part of the article is ”I gather as much political context as possible before I even look at the code.”
Exactly. The principle to go by for estimates is finding a balance between time/scope/cost, and figuring out which aspects of the context affect which dimension is the first step.
The article resonates with me, but I think it misses some nuance when estimating in aggregate. My career has been primarily product engineering, let’s call it the top 2/3rds of the stack. I’ve been lead on multiple projects with the rough shape “we have this [conference/marketing event/partnership] on [date] and we want to ship [new feature].”
My job is to present an opinionated menu of functionality I believe we can have in place by that time. It should also include the potential scope cuts if things go wrong.
I agree that unknowns dominate the individual tasks, but when you bundle enough together the positive/negative surprises tend to balance out.
Without an idea of at least relative Eng costs of different features, it is impossible for leadership to correctly prioritize what we build and what we cut.
I cannot tell you with certainty that a given task will take N days, but I should be reasonably confident communicating what my team can accomplish in a quarter.
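A quick simulation of that aggregation effect, for illustration (the mean-one lognormal error model is an assumption for the sketch, not a claim from the comment):

```python
import random

random.seed(0)

def median_relative_miss(n_tasks: int, trials: int = 5_000) -> float:
    """Typical |actual - estimated| / estimated for a bundle of n_tasks."""
    misses = []
    for _ in range(trials):
        estimated = actual = 0.0
        for _ in range(n_tasks):
            est = random.uniform(1, 10)                   # estimated days per task
            noise = random.lognormvariate(-0.125, 0.5)    # mean-one multiplicative error
            estimated += est
            actual += est * noise
        misses.append(abs(actual - estimated) / estimated)
    misses.sort()
    return misses[len(misses) // 2]

print(median_relative_miss(1))    # single task: a large typical miss
print(median_relative_miss(40))   # a quarter's worth of tasks: much smaller in aggregate
```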
This is a great insight and something every engineer should reflect on in the context of their own orgs:
> estimates are not by or for engineering teams.
It's surprising how much nuance and variety there is in how management decisions are made in different orgs; a lot depends on personalities, power dynamics, and business conditions that the average engineer has almost no exposure to.
When you're asked for an estimate, you've got to understand who's asking and why. It got to the point in an org I worked for once that the VP had to explicitly put a moratorium on engineers giving estimates because those estimates were being taken by non-technical stakeholders of various stripes and put into decks where they were remixed and rehashed and used as fodder for resourcing tradeoff discussions at the VP and executive level in such a way as to be completely nonsensical and useless. Of course these tradeoff discussions were important, but the way to have them was not to go to some hapless engineer, pull an overly precise estimate based on a bunch of tacit assumptions that would never bear out in reality, and then hoist that information up 4 levels of management to be shown to leadership with a completely different set of assumptions and context. Garbage in, garbage out.
These days I think of engineering level of effort as something that is encapsulated as primarily an internal discussion for engineering. Outwardly the discussion should primarily be about scope and deadlines. Of course deadlines have their own pitfalls and nuance, but there is no better reality check for every stakeholder—a deadline is an unambiguous constraint that is hard to misinterpret. Sometimes engineers complain about arbitrary deadlines, and there are legitimate complaints if they are passed down without any due diligence or at least a credible gut check from competent folks, but on balance I think a deadline helps engineering more than it hurts as it allows us to demand product decisions, designs, and other dependencies land in a timely fashion. It also prevents over-engineering and second system syndrome, which is just as dangerous a form of scope creep as anything product managers cook up when the time horizon is long and there is no sense of urgency to ship.
IMHO time estimation for software development is a legacy way of thinking. A result of industrial processes.
On my team we think in terms of deliverables and commitments: "I can commit to deliver this by that date under these circumstances".
This mitigates the diverse nature of the thinking.
Software time estimations are always going to be bad, you might as well ask an LLM.
The old guys in the '80s and '90s would kiddingly say: multiply your original estimate times pi (3.14).
Several times, to be sure
I agree with most of the things in this article, with an additional caveat: estimates are also a function of who is going to do the work. If I have a team of 5 offshore devs who need hand-holding, 2 seniors who are very skilled, and 2 mid-level or junior engineers, then how long something will take, what directions will be given, and even the best approach to choose can vary wildly depending on which subset of the team is going to be working on it. That's on top of all the other problems with estimates. This variance has degrees, but particularly when there are high-skilled onshore engineers and low-skilled offshore ones, it leads to problems, and companies will begin to make it worse as they get more cost-sensitive, without understanding that the different groups of engineers aren't perfectly fungible.
And how many other parallel work streams are going. So many times I’ve estimated something to be “5” and it’s gone into my queue. Then people are wondering why it’s not done after “5” estimation units have passed and I’ve got “10” points worth of more high priority tasks and fires at every moment of my career
Excellent example why anything else than work hours is pointless to estimate in.
Estimation is an art, not a science. It's always going to be a judgement call by the engineers tasked with giving them to management. Taking all of the factors from this article and beyond can and should go into making that judgement call.
I always tell my teams just skip the middlemen and think of estimates as time from the jump. It's just easier that way. As soon as an estimate leaves an engineer's mouth, it is eagerly translated into time by everyone else at the business. That is all anyone else cares about. Better said - that is all anyone else can understand. We humans all have a shared and unambiguous frame of reference for what 1 hour is, or what 1 day is. That isn't true of any other unit of software estimation. It doesn't matter that what one engineer can accomplish in 1 hour or 1 day is different from the next. The same is true no matter what you're measuring in. You can still use buffers with time. If you insist on not thinking of your labor in terms of hours spent, you can map time ranges to eg. points along the Fibonacci sequence. That is still a useful way to estimate because it is certainly true as software complexity goes up, the time spent on it will be growing non-linearly.
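If you do keep points, one way to keep them honest is an explicit mapping back to time ranges, as the comment suggests; the specific buckets below are arbitrary assumptions, not a standard:

```python
# Illustrative Fibonacci-point-to-time mapping; the ranges are assumptions.
POINTS_TO_TIME = {
    1: "an hour or two",
    2: "about half a day",
    3: "one to two days",
    5: "about a week",
    8: "a couple of weeks",
    13: "a month or more: split it before estimating",
}

def as_time(points: int) -> str:
    return POINTS_TO_TIME.get(points, "not a bucket we use")

print(as_time(5))  # "about a week"
```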
You can improve if you follow up the estimates. My team had several months when we were within +- 10% in the aggregate.
I second this. If you don't close the loop, if you don't keep track of what you estimated and how long it took, how are your estimates going to get better? They aren't.
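A minimal sketch of what "closing the loop" can look like in practice (task names and numbers are made up for illustration):

```python
# Track estimate vs. actual in hours and derive a correction factor.
completed = [
    {"task": "auth refactor", "estimate_h": 16, "actual_h": 26},
    {"task": "CSV export",    "estimate_h": 8,  "actual_h": 7},
    {"task": "rate limiting", "estimate_h": 12, "actual_h": 30},
]

ratios = [t["actual_h"] / t["estimate_h"] for t in completed]
aggregate = sum(t["actual_h"] for t in completed) / sum(t["estimate_h"] for t in completed)

print("per-task actual/estimate ratios:", [round(r, 2) for r in ratios])
print(f"aggregate ratio: {aggregate:.2f}")  # apply as a sanity multiplier on the next plan
```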
Is that a problem? Well, how good are they now?
The more I work in engineering, the more I agree with pieces like this which suggest that a large part of the job is managing politics in your workspace.
I think the main problem in estimating projects is unknown unknowns.
I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.
Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.
But how long will it take you to make that PoC? Any idea? :P
I find that ballpark estimates are often more accurate than estimates based on work breakdowns ... and this concurs with OP's observation that estimates tend to miss due to the unknowns.
Features : Quality : Timeline
Choose 2. For example a large feature set can be made quickly, but it will be of poor quality.
Note that cost is somewhat orthogonal, throwing money at a problem does not necessarily improve the tradeoff, indeed sometimes it can make things worse.
Agree. I feel people with less clarity about feature priorities waste too much time asking for accurate estimates.
Slightly OT, but anyway.
The only reasonable way to estimate something is in work hours. Everything else is severely misguided.
Also, if you don't follow up any estimate is meaningless.
Work hours is the only way I've learned to think about it productively.
It's also important to gather consensus among the team and understand if/why work hour estimates differ between individuals on the same body of work or tasks. I'd go so far as to say that a majority of project planning, scoping, and derisking can be figured out during an honest discussion about work hour estimates.
Story points are too open to interpretation and have no meaningful grounding besides the latent work hours that need to go into them.
If you have complex tasks and you have more than one person put in time to do a proper estimate, yes, you should sync up and see if you have different opinions or unclear issues.
> I ask myself "which approaches could be done in one week?".
This is exactly how all good art is done. There's an old French saying: une toile exige un mur (a canvas demands a wall).
This resonated with me a lot, thank you. It more or less matches what I have experienced, and it’s good to see someone write this down in a fairly balanced point of view.
My favourite parts:
> My job is to figure out the set of software approaches that match that estimate. […]
> Many engineers find this approach distasteful. […]
> If you refuse to estimate, you’re forcing someone less technical to estimate for you.
Even after many years, I still find it distasteful sometimes but I have to remind myself what everyone gets paid for at the end of the day.
You wouldn’t put up with this drama from any other professional, I don’t know why I’d take it from a SWE.
Timelines can be estimated approximately.
I’ve never had a construction project finish exactly on time, but that doesn’t mean estimates are unwise.
Bravo! Not a single mention of LLMs changing the calculus.
In some situations it may be politically useful to pretend that an LLM makes things faster because that is what your boss wants to hear though.
When I started in the early 90s, a wise old programmer gave me two pieces of advice about estimation.
1. When you consider planning, testing, documentation, etc. it takes 4 hours to change a single line of code.
2. To make good estimates, study the problem carefully, allow for every possibility, and make the estimate in great detail. Then take that number and multiply by 2. Then double that number.
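Taken at face value, those two rules look something like this (a tongue-in-cheek sketch, not a method the comment prescribes):

```python
HOURS_PER_CHANGED_LINE = 4   # rule 1: planning, testing, and docs dominate the typing

def rule_one(lines_changed: int) -> int:
    """Rule 1: hours to change N lines of code."""
    return lines_changed * HOURS_PER_CHANGED_LINE

def rule_two(careful_estimate_hours: float) -> float:
    """Rule 2: detailed estimate, multiplied by 2, then doubled again."""
    return careful_estimate_hours * 2 * 2

print(rule_one(50))   # 200 hours for a 50-line change
print(rule_two(40))   # a careful "one week" becomes roughly a month
```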
10 lines of working and tested code per day has always been considered the realistic maximum, in my experience. Anything else is pure optimism - which might of course work for the project in the short term.
I used to (half) jokingly tell people to go to the next human unit.
A few days? At least a week.
A week? A month.
A month? A year.
A year? Uh... decade or never...
It's wildly pessimistic but not as inaccurate as I'd like.
I don’t do a ton of estimation but an interesting new thing is asking a cli agent to estimate for you.
My first impression is that they give really long estimates.
Also, due to coding agents, you can have them completely implement several different approaches and find a lot of unknown unknowns up front.
I was building a mobile app and couldn’t figure out whether I wanted to do two native apps or one RN/Expo app. I had two different agents do each one fully vibe coded and then tell me all the issues they hit (specific to my app, not general differences). Helped a ton.
I think Claude’s estimates are biased towards huge enterprise projects.
I asked it to estimate a timeline for a feature in my hobby project and it confidently replied, “4.5 weeks to code completion”.
Less than 4 hours later, the feature was done. I asked it to compare this against its initial estimate and it replied, “Right on schedule!”
I have completely given up on using it to estimate anything that actually matters.
It's a next-word-prediction-machine, not a calculator. It's not aware of the passage of time, or how long things take, and doesn't reason about anything. It's just very good at putting words together in combinations that look like answers to your inputs.
That's really useful for some tasks, like regurgitating code to perform a specific function, but it's basically useless for jobs like estimating schedules.