(Not) Adjusting Effort Estimations: A Pragmatic Perspective
A common challenge that leaders face is whether to adjust the effort estimation of a backlog item based on the actual time it took to complete. While the reflexive response may be a resounding ‘yes’, I typically recommend against changing backlog item effort estimations after the fact, except in extraordinary circumstances. Let’s approach this dilemma logically, keeping the end goal in mind.
When we think about why we should adjust estimations, we typically have two objectives in mind. First, we seek to improve our future sprint planning. Second, we aim to understand how much time was actually invested in a backlog item. Let’s examine both objectives and question them.
Adjusting Estimations to Improve Sprint Planning
Oftentimes, our hidden assumption is that accurate estimations for backlog items are the foundation of effective sprint planning. We should challenge this assumption: what we truly need for planning a sprint’s content is an understanding of the team’s velocity per sprint and the effort required for the content being planned. Precise estimations for individual backlog items are not a prerequisite. In software engineering, our estimations are inherently imprecise, given the presence of unknown unknowns, as described by the complex domain of the Cynefin framework. Expecting pinpoint accuracy in estimations is therefore unrealistic. We must acknowledge that a variance will always exist between our estimations and the actual effort invested. Once we embrace this notion, it becomes clear that adjusting estimations after the fact is counterproductive, as the same variance will reappear in the next sprint’s planning.
Consider the following example for clarification (though presented in working days, it applies equally to story points):
- We have a team of 5 people.
- There are 10 backlog items in the overall backlog.
- The team initially estimated each backlog item as taking 2 working days.
- The team’s velocity, based on the last 3–4 sprints, is 20 working days per sprint.
- Consequently, the team schedules all backlog items for the sprint.
- By the end of the sprint, the team realizes that they spent 3 working days on 5 of the backlog items instead of the originally estimated 2 working days.
If we adjust the estimation for those backlog items, the team’s velocity would be recalibrated to 25. However, this implies that the team can now estimate precisely without any deviation, a paradoxical expectation in an environment filled with unknown unknowns. The outcome would likely be that the team overloads the next sprint, pulling in 25 working days when they can only deliver 20. This outcome arises from the team’s recurring pattern of selecting backlog items initially estimated as 2 working days, only to realize at the sprint’s conclusion that they required 3 working days for completion.
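To make the arithmetic concrete, here is a minimal sketch (in Python, using the same illustrative numbers as the example above; nothing here is a real tool or API) of how recalibrating the estimates inflates velocity and overcommits the next sprint:

```python
# Minimal sketch of the example above: velocity measured against the ORIGINAL
# estimates already absorbs the team's systematic underestimation; "correcting"
# the estimates afterwards only inflates it.

original_estimates = [2] * 10            # each item estimated at 2 working days
actual_effort      = [2] * 5 + [3] * 5   # 5 of the 10 items actually took 3 days

velocity_unadjusted = sum(original_estimates)  # 20 estimated working days delivered
velocity_adjusted   = sum(actual_effort)       # 25 -- the recalibrated figure

# Next sprint, new items are again estimated at 2 working days each. Planning
# against the inflated velocity pulls in more items than the team has
# historically completed in a sprint.
items_planned_unadjusted = velocity_unadjusted // 2   # 10 items, matching history
items_planned_adjusted   = velocity_adjusted // 2     # 12 items, an overcommitment

print(items_planned_unadjusted, items_planned_adjusted)  # 10 12
```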
One way to mitigate this issue is relative sizing, where you compare a new backlog item to comparable items from previous experience and base your estimate on that benchmark. While this approach provides a pragmatic means of addressing uncertainty, note that it, too, carries some level of variance.
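As a rough, hypothetical illustration of relative sizing (the reference items and point values below are invented for the example, not taken from any real backlog):

```python
# Hypothetical sketch of relative sizing: rather than estimating hours, anchor a
# new backlog item to comparable items the team has already delivered.

REFERENCE_ITEMS = {
    "add a field to an existing form": 2,   # story points, from past experience
    "build a new CRUD screen":         5,
    "integrate a third-party API":     8,
}

def relative_estimate(similar_to: str) -> int:
    """Estimate a new item by comparing it to the closest reference item."""
    return REFERENCE_ITEMS[similar_to]

print(relative_estimate("build a new CRUD screen"))  # 5 -- same bucket as comparable past work
```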
In light of this, I conclude that recalibrating effort estimations for planning purposes is not a useful practice. It adds unnecessary overhead (updating backlog items) and disrupts the planning of future sprints.
Please note one exception: when the actual effort for a completed backlog item differs significantly from the original estimation, I do recommend updating the effort estimation.
Adjusting Estimations to Monitor Work Effort
Organizations sometimes like to monitor how much time is spent on each backlog item. For teams using story points as their effort measure, adjusting story point values to gain insight into time spent is pointless, because story points do not reflect time (we will discuss this further). For teams using time estimations as their effort measure, the question holds more relevance.
One of the big decisions we need to make is whether we are measuring work or delivery. I advocate for measuring delivery over measuring work (although that’s a separate discussion). Assuming we agree on measuring delivery, the specific time spent on a single backlog item becomes less significant. What truly matters is whether the team’s overall delivery velocity is trending upward. Our primary focus should be on the delivery trend rather than on exact delivery numbers. As long as the trend is consistently upward, we are moving in the right direction. Precise estimations are not a prerequisite for recognizing this trend; as we’ve already acknowledged, variability in estimation is an inherent aspect of the software engineering domain.
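As a minimal sketch of this idea, assuming we simply record the delivered estimate units per sprint (the sample numbers below are illustrative), we can look at the direction of the delivery trend rather than at any individual figure:

```python
# Minimal sketch: compute the direction of the delivery trend across sprints.
# What matters is the sign of the slope, not the exact per-sprint numbers.

velocities = [18, 20, 19, 21, 22, 23]  # delivered estimate units, last six sprints (illustrative)

def trend_slope(values: list[float]) -> float:
    """Least-squares slope of the series: positive means delivery is trending up."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

slope = trend_slope(velocities)
print(f"Delivery trend: {'upward' if slope > 0 else 'flat or downward'} ({slope:+.2f} units per sprint)")
```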
Conclusion
We’ve questioned the conventional wisdom that mandates precise estimations as a prerequisite for planning and for driving improved team performance. We’ve also shown that adjusting estimations can damage future planning, given that estimation deviation is inherent in complex domains where unknown unknowns play a prominent role. Hence, we should avoid expending unnecessary energy and resources on enforcing modifications to the effort estimations of completed backlog items.