If you've experienced these problems:

  • Not able to consistently hit committed dates
  • Constant tug-of-war between delivery teams and product management over quick turnaround on estimates and the amount of detailed information required to produce them
  • Inconsistent approaches to estimation: some teams use story points and velocity while others use bottom-up estimation
  • Urgent requests for estimates that force teams to simply guess

then forecasting might be for you. We had all of these problems in my group at the New York Times, and it showed. We tried different solutions using estimates, velocity, and burn-down charts, but ultimately none of them worked. Then we flipped the problem on its head and started forecasting delivery instead of estimating it. That made all the difference! We now have a shared way of communicating, spotting problems, and addressing them before they materialize.

I walk through multiple examples of how we successfully used forecasting at the New York Times during a major migration from data centers to the cloud with a fixed delivery date. I also show how we use forecasting in a portfolio setting where multiple teams must deliver against a target date and deliverable.

You will leave this session knowing what forecasting is and how to apply it in your own environment.


Outline/Structure of the Case Study

  • Set the context and stage for the New York Times environment
  • Explain the difference between estimation and forecasting
    • The primary difference is that estimation relies on intuition while forecasting relies on measurement. With estimation, almost every element involves guesswork: scope, effort, risk, accounting for other work, and so on. With forecasting, the guesswork is limited to scope; we use measurements of past initiatives to calculate how long it will take to complete that scope. When teams estimate, they base their estimates on the effort required for a given piece of work; what is often ignored, or at least very difficult to account for, is all the other dependencies that delay delivery. By using measurement to forecast dates, those inherent delays are baked into the historical data, freeing teams from having to account for every single variable.
  • Show how you can forecast practically using simple tools like Excel
  • Explain how we used this for a big migration from data centers to the cloud
    • We did a major migration of our data centers to AWS and GCP last year using this forecasting approach. The beauty is that the process is basically the same regardless of the type of initiative; the difference is in the data used for measurement. The data needs to be representative of the team(s) doing the work and the type of work they are doing. For instance, we had development teams doing their individual migrations. We couldn't use the data from their regular development work to forecast the cloud migration work because it is a very different type of work. Instead, we actually had to do some of the work first in order to accumulate data we could use to generate forecasts.
  • Dealing with scope and dates
    • For fixed dates, scope and resources will flex, and vice versa for fixed scope. The process lets us see early on whether requested initiatives are feasible and provides a mechanism for objective discussions with the business about trade-offs
  • Handling scope growth or how to incorporate learnings as you go
    • Teams always plan at 70% capacity to allow for the unknowns that invariably come along during implementation
    • When identifying the story counts we use ranges rather than fixed numbers
    • Our forecasting spreadsheets have a split rate that can be set to account for stories splitting. This is what allows us to account for stories being of different sizes
    • If the business wants to add additional epics to the scope, we treat each one like any other epic: the team determines the story count range, and the impact on the dates is determined by where the epic fits within the priority of the existing epics in the initiative. We then discuss with the business whether the impact is acceptable, whether they want to cut scope in other areas, or whether to forgo the additional scope altogether.
  • Show how to use forecasting in portfolios where multiple teams have deliverables that make up the final delivery, and how we do this today delivering value across 13 teams.
    • We can quickly give the business an idea of how long given initiatives will take at a given probability level. This lets them decide how much schedule risk to take on without going through a full-blown specification process
    • The tracking of initiatives is fully visible to all participants across the entire portfolio
    • Weekly updates to the forecast let us see much sooner if we are off track, when we still have more options available to make the necessary changes
    • Since the teams do the forecasts themselves, it reinforces ownership of the work and keeps them focused on the goals
    • We have a bird’s eye view of the work across the entire portfolio and can communicate our progress to the business at any time
    • We can quickly assess the impact of requested priority changes on existing plans so the business can make informed decisions when new opportunities arise
  • Conclusion with suggested next steps and practical advice
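
To make the measurement-based idea in the outline concrete, here is a minimal Monte Carlo throughput forecast sketched in Python. This is an illustration only, not the spreadsheet process described above; the throughput history and story count are invented for the example.

```python
import random

# Hypothetical weekly throughput history for one team
# (stories completed per week, pulled from past delivery data).
throughput_history = [4, 6, 3, 7, 5, 4, 8, 2, 6, 5]

def weeks_to_complete(remaining_stories, history, trials=10000, seed=42):
    """Monte Carlo forecast: repeatedly sample past weekly throughput
    until the remaining story count is exhausted; return the sorted
    number of weeks each simulated future took."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        remaining, weeks = remaining_stories, 0
        while remaining > 0:
            remaining -= rng.choice(history)
            weeks += 1
        results.append(weeks)
    return sorted(results)

trials = weeks_to_complete(60, throughput_history)
# An 85% forecast means 85% of simulated futures finished
# in that many weeks or fewer.
for pct in (50, 85, 95):
    print(f"{pct}% confidence: {trials[int(len(trials) * pct / 100) - 1]} weeks")
```

Because the forecast comes from measured throughput rather than effort guesses, the historical delays are already baked into the samples, which is the point the outline makes about estimation versus forecasting.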

The specific examples and solutions we'll go through are:

  • First attempt was to develop an automated forecast using Google Sheets and Jira
    • Approach
      • Forecast was velocity based using the existing story backlog
      • Used Jira as the data source for velocity calculation and upcoming priorities
      • Google Sheets was used for all data retrieval, business logic, and reporting
    • Pros
      • Fully automated and easy to use
      • Could be run at any time to provide an up to date forecast
      • Provided a consistent approach across the entire portfolio
    • Cons
      • Required that all stories for a given initiative be created in Jira and prioritized, which forced a lot of up-front planning
      • It did not account for any internal or external dependencies
      • It was limited to single team initiatives. Most large initiatives involved multiple teams
      • There was very little relationship between velocity and delivery time
      • Forecasts were often a black box for teams and made it difficult for them to explain changes
      • Jira has some wonky behavior when it comes to how issues are prioritized
  • Second attempt was to develop a semi-automated approach using Google Sheets, Jira, and Excel
    • Approach
      • Forecast was based on throughput and epic count
      • Throughput came from automated metric reports that ran nightly
      • Jira was the data source for the metric reports
      • A summary-level Google Sheet was used to aggregate individual team forecasts for cross-team initiatives
      • Forecasts were updated weekly and discussed as a group
      • We forecast at the epic level, so we require that the epics be identified up front rather than the stories. For each epic the team determines a range for the story count, using past epics as a gauge, similar to relative story sizing. Teams use a heuristic of being 90% confident that the actual number of stories will fall within the range; if the initial range doesn't meet that heuristic, they expand it until it does.
    • Pros
      • The process handles cross-team initiatives as well as internal and external dependencies
      • Weekly updates were captured, so we had a running history of the forecast's progression
      • Because some manual work is involved, teams have a much better understanding of their forecasts and can explain the impact of changes
      • We have a single location for all the portfolio forecasts
    • Cons
      • The forecasting process has some opportunity for human error since manual work is involved
      • There is more maintenance involved in adding new initiatives to the Google summary sheet
      • The process requires more knowledge of the underlying throughput data in order to do the team forecasts properly
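
As a rough illustration of the epic-level story count ranges and split rate mentioned above, here is a hypothetical Python sketch. The real process lives in Excel and Google Sheets; the epic names, ranges, split rate, and throughput numbers here are all invented.

```python
import math
import random

# Hypothetical epics, each with a team-supplied story count range chosen
# so the team is ~90% confident the real count falls inside it.
epics = {"Login migration": (8, 14), "Data sync": (15, 25), "Cutover": (5, 9)}

SPLIT_RATE = 1.2  # assume ~20% of stories split during refinement
throughput_history = [4, 6, 3, 7, 5, 4, 8, 2, 6, 5]  # stories/week, past data

def forecast_weeks(epics, history, split_rate, trials=10000, seed=7):
    """One trial: draw a story count from each epic's range, inflate it by
    the split rate, then burn the total down using sampled weekly
    throughput. Returns the sorted weeks-to-finish across all trials."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        total = sum(rng.randint(lo, hi) for lo, hi in epics.values())
        remaining = math.ceil(total * split_rate)
        weeks = 0
        while remaining > 0:
            remaining -= rng.choice(history)
            weeks += 1
        results.append(weeks)
    return sorted(results)

trials = forecast_weeks(epics, throughput_history, SPLIT_RATE)
print("85% confidence:", trials[int(len(trials) * 0.85) - 1], "weeks")
```

Widening an epic's range or raising the split rate pushes the high-confidence dates out, which is what makes the trade-off conversations with the business objective rather than a negotiation over guesses.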

Learning Outcome

Understand what forecasting is and how it differs from estimation.

Know how to forecast the work of a single team

Know how to forecast for a portfolio of teams

Know where to go for more information and next steps

Target Audience

Scrum Masters, PMO staff, project managers, and anyone who wants to answer the question "When can we deliver?"

Prerequisites for Attendees

Basic knowledge of agile principles. An understanding of how to estimate, what velocity is, burn-down charts, etc.

Submitted 1 year ago

Public Feedback

  • By George Dinwiddie  ~  1 year ago

    Kristian,

    In your second attempt, doesn't it still require listing all the stories up front?

    What needs are being met by these forecasts? Who committed to the dates? Was the scope for these commitments fixed? Or, instead, was the general intent of the scope communicated? Was there any scope growth during implementation?

    The outline/structure is the place where you sell your session to the reviewers. Help them recognize that you'll deliver on your abstract. Give them details about the content and the way that you'll present it to convince them that you'll do a good job.

    See also https://threadreaderapp.com/thread/1028714041349263360.html for an independent description of submitting a successful proposal.

    - George, AgileDC Program Chair