Estimating how much a project will cost and how long it will take has always been a challenge.  These are critical business questions, and not answering them is not an option.  Estimating large-scale projects is even more difficult, not only because of their scale and distributed nature, but also because of the faulty estimation methods widely used today.  Large-scale agile projects consist of several teams (organized into programs and portfolios), and those teams are often distributed.  If you do story point estimation and generate reports for large-scale agile projects in blissful ignorance of the fact that the story point scales used by different teams may not be the same, you will make wrong decisions based on wrong estimates and metrics.

All Agile Lifecycle Management tools expect and assume that the story points you enter are "normalized" across teams, i.e., that they follow the same scale.  Story points entered into the tool without normalization (garbage in) will generate meaningless reports and metrics (garbage out).

You may also be hard pressed to estimate portfolios and programs when their stories are not even defined.  This is like estimating something that is unknown!

I will present solutions to these and other estimation challenges for large-scale agile projects.

I will present the Calibrated Normalization Method (CNM) for scalable estimation, which I have developed and applied in my client engagements since 2010.  CNM promotes local, decentralized, and autonomous decision making at the team level by allowing teams to use their own story point scales and then normalizing team story points with a novel technique.  I will also compare and contrast CNM with centralized methods and with SAFe's estimation method.

I will demonstrate a normalization calculator that does the normalization math needed for both bottom-up and top-down estimation in large-scale projects.  The calculator has been developed and refined through actual use; it makes story point normalization calculations quick and easy and avoids human error.

 
 

Outline/Structure of the Session

I will first present the challenges of developing estimates and metrics for large-scale projects, using a concrete example of a large-scale project with 2 programs and a total of 8 agile teams.

I will explain the need for story point normalization to be able to properly estimate large-scale agile projects.

I will then illustrate centralized, semi-distributed, and fully distributed estimation methods using the same concrete example of a large-scale project with 2 programs and 8 agile teams.

I will finally demonstrate the use of a simple normalization calculator.

Learning Outcome

Understand the trade-offs and advantages of different large-scale estimation methods (centralized, semi-distributed, and fully distributed), and be able to choose an estimation method appropriate to your situation.

Understand how to use CNM for both bottom-up (from teams to programs up to portfolios) and top-down (from portfolios down to programs and teams) estimation, for both fixed-time/flexible-scope and fixed-scope/flexible-time agile projects.

Experience scalable estimation methods through an example of a large-scale project with 2 programs and 8 agile teams.

Understand the use of the story point normalization calculator.

This normalization calculator will be provided to all attendees of the session.

Target Audience

ScrumMasters, Project Managers, Program Managers, Product Managers, Portfolio Managers, PMOs, business managers, Team Leads

Submitted 3 years ago

Comments
  • Ram Srinivasan  ~  3 years ago

    Hi Satish,

     

    I am very intrigued by your proposal. It would be nice if you could elaborate on a few points.

     

    1. A story point is a distribution - it has a mean and a standard deviation (http://www.mountaingoatsoftware.com/blog/how-do-story-points-relate-to-hours).  That is, 1 team story point does not mean X hours of effort. Also, by abstracting the estimates using story points, we account for risk (higher risk = higher story point), complexity, effort (hours), and who actually works on it.  How does CNM account for this?

     

    2. Based on a team's familiarity with the domain and technology, story point estimates will not change, but ideal hours/days estimates would change (Agile Estimating and Planning by Mike Cohn, Chapter 8). So the initial estimate (and hence the SPCF and NSP) will be different than if you did it after a couple of sprints. This defeats the whole purpose of projecting NSPs at the program level.  Maybe I am missing something; can you please help me understand it better?

     

    Thanks,

    Ram

    • Satish Thatte  ~  3 years ago

      Thanks, Ram, for your comments and questions.  In fact, these are common questions I often get when I teach about the need to normalize story points in large-scale agile projects and explain the different normalization methods (mentioned in my proposal abstract for AgileDC 2014).   These issues and many others are explained in great detail in my 5-part blog series (Scalable Agile Estimation and Normalization of Story Points), available at http://bit.ly/1cPgYJQ.   I have written a technical report on this blog series, which I will be glad to email to you.

      Meanwhile, please see my response to your two comments/questions below.

      >> 1. A story point is a distribution - it has a mean and a standard deviation (http://www.mountaingoatsoftware.com/blog/how-do-story-points-relate-to-hours).  That is, 1 team story point does not mean X hours of effort. Also, by abstracting the estimates using story points, we account for risk (higher risk = higher story point), complexity, effort (hours), and who actually works on it.  How does CNM account for this?

      As effort is estimated in story points by different teams in a program or portfolio, it does not make sense to simply add or roll up those points without first ensuring that the story points across teams follow the same scale, i.e., that one story point indicates roughly the same amount of effort for every team.   This is the essence of the story point normalization problem: how do we take the story point concept developed at the team level and make it applicable at the wider scale of multiple teams, programs, and portfolios?

      Centralized methods for normalization (as explained by Mike Cohn and by Larman & Vodde in their books) require all teams to come together and do estimation exercises for a while in order to agree upon a set of benchmark stories (stories representing 1, 2, 3, etc. story points).   This often becomes impractical when teams work on different application domains or are geographically distributed (a common case).

      SAFe’s normalization method establishes an equivalence between 1 story point and 1 ideal day (8 ideal hours).  Each team needs to identify its own 1-story-point story of 1 ideal day.   This creates its own challenges: what happens if one team's 1-story-point story is going to take 14 ideal hours of effort, while another team's will take 17 ideal hours?  This is quite common and bound to happen.   I am an SPC (SAFe Program Consultant), and I have spoken with SAFe SPC instructors about the need for a general solution that does not force each team to come up with its 1-ideal-day story.

      CNM solves this problem in full generality without forcing you to use SAFe; CNM can be used with SAFe, but is not connected with or bound to SAFe.  CNM requires each team to estimate its story points, called Team Story Points (TSPs), exactly as teams do today (no change); CNM then asks each team to calibrate what 1 TSP means for it in ideal hours for the sprint it is planning.    This calibration of 1 TSP does take into account factors such as complexity, risk, effort, and who will work on the stories; it is done by the entire team, based on a sample of 3 to 5 stories from the sprint backlog.     The entire organization also agrees up front on an organization-wide Normalization Basis, a decision that can be made without meetings or deliberations; it is literally a one-minute decision, made simply by choosing a Normalization Basis such as 8, 20, or 40 hours.  The choice doesn't matter as long as it is adhered to by the entire organization.

      As an example, if a team in a program calibrates its 1 TSP at 14 ideal hours and the organization's Normalization Basis is 40 hours, then its Story Point Conversion Factor (SPCF) = Calibrated Size of 1 TSP / Normalization Basis = 14/40 = 0.35, i.e., 1 TSP = 0.35 Normalized Story Points (NSPs).   A story of 3 TSPs for this team is equivalent to 3 * 0.35 = 1.05 NSPs.   Another team in the program may calibrate its 1 TSP at 20 ideal hours; its SPCF is 20/40 = 0.5, i.e., 1 TSP = 0.5 NSP, and its story of 3 TSPs is equivalent to 3 * 0.5 = 1.5 NSPs.  At the program level, these two stories (each of 3 TSPs) contribute 1.05 + 1.5 = 2.55 NSPs.  This makes perfect sense because NSPs have well-defined semantics for the whole organization, while TSPs have meaning only in the context of a team and a sprint.  You simply cannot add TSPs across teams, because the TSP scales of different teams are not the same.
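
      The math is simple enough to sketch in a few lines of Python, using the numbers from the example above (the helper names here are illustrative only, not taken from the normalization calculator):

      ```python
      # Sketch of the CNM normalization math from the example above.
      # Names and values are illustrative, not part of any real tool.

      NORMALIZATION_BASIS = 40  # ideal hours; one organization-wide choice

      def spcf(calibrated_tsp_hours, basis=NORMALIZATION_BASIS):
          """Story Point Conversion Factor: calibrated 1 TSP / Normalization Basis."""
          return calibrated_tsp_hours / basis

      def to_nsp(tsps, conversion_factor):
          """Convert a Team Story Point estimate to Normalized Story Points."""
          return tsps * conversion_factor

      team_a = spcf(14)            # 1 TSP calibrated at 14 ideal hours -> 0.35
      team_b = spcf(20)            # 1 TSP calibrated at 20 ideal hours -> 0.50

      story_a = to_nsp(3, team_a)  # 3 TSPs -> 1.05 NSPs
      story_b = to_nsp(3, team_b)  # 3 TSPs -> 1.50 NSPs

      print(story_a + story_b)     # 2.55 NSPs (up to float rounding); safe to add across teams
      ```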

      >> 2. Based on a team's familiarity with the domain and technology, story point estimates will not change, but ideal hours/days estimates would change (Agile Estimating and Planning by Mike Cohn, Chapter 8). So the initial estimate (and hence the SPCF and NSP) will be different than if you did it after a couple of sprints. This defeats the whole purpose of projecting NSPs at the program level.  Maybe I am missing something; can you please help me understand it better?

      If the same team members continue sprint after sprint working on the same technology platform and application domain (the so-called "yesterday's weather" model), the story point estimates for a story will not change.   If the yesterday's-weather model prevails sprint after sprint, a team may have no reason to recalibrate its 1 TSP, so its SPCF stays the same.  On the other hand, if the yesterday's-weather model is not expected to hold (due to factors such as a change in team composition, technology platform, or application domain), the team should recalibrate its 1 TSP, and its SPCF will change.  Similarly, if a team has a strong reason to believe that, with several sprints under its belt, its productivity has improved through experience and positive well-jelled team effects, it may choose to recalibrate its story point; if 1 TSP was 18 hours, it may now be 16 or 15 hours.   A team with higher productivity has a lower SPCF, as one would expect.   However, the most common reason for recalibrating the TSP is a breakdown of the yesterday's-weather model (reorgs, personnel movements, attrition, technology platform changes, application domain changes, etc.), not higher productivity (which may happen occasionally if the team is lucky enough to stay well-jelled over several sprints and remains intact without disruptions).
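
      In terms of the sketch above, recalibration simply changes the conversion factor (again with purely illustrative numbers):

      ```python
      # Recalibration under the same hypothetical 40-hour Normalization Basis:
      # a well-jelled team's 1 TSP shrinks from 18 to 15 ideal hours.
      spcf_before = 18 / 40  # 0.450 NSPs per TSP
      spcf_after = 15 / 40   # 0.375 NSPs per TSP: higher productivity, lower SPCF
      ```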

      CNM estimates the scope of work at the portfolio and program levels without knowing lower-level story point details, as those stories are not available yet; we also often do not know which teams will be assigned to do the work.   For a portfolio, CNM requires identification of a baseline epic, the epic about which the portfolio leadership team has the most knowledge. Similarly, CNM requires identification of a baseline feature in the baseline epic, the feature about which the program leadership team has the most knowledge. Finally, CNM requires identification of a baseline story in the baseline feature, the story that the program leadership team feels most confident in estimating.

      All we need is a baseline epic (at the portfolio level) and its features, a baseline feature within the baseline epic (at the program level), and the stories within the baseline feature, including the baseline story.   Think of this as a sample of the much larger work backlog comprising the whole portfolio.  The details of the entire portfolio backlog (all epics, all features, and all stories) are not known at this stage.

      Only the baseline story is estimated directly in NSPs. Once the baseline story is estimated, all other stories in the baseline feature are estimated against it using relative sizing techniques; the roll-up of those story estimates then becomes the estimate for the baseline feature in NSPs. Other features in the baseline epic can then be relative-sized against the baseline feature and rolled up into the baseline epic estimate in NSPs. Finally, other epics in the portfolio can be relative-sized against the baseline epic and rolled up into the portfolio estimate in NSPs.
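
      This roll-up is easy to sketch as well; the multipliers and the single direct estimate below are hypothetical, meant only to show how one baseline story estimate propagates upward through relative sizing:

      ```python
      # Hypothetical top-down CNM roll-up: only the baseline story gets a
      # direct NSP estimate; everything else is relative-sized.

      baseline_story_nsp = 2.0  # the one direct estimate, in NSPs

      # Stories in the baseline feature, relative to the baseline story (1.0x)
      story_sizes = [1.0, 0.5, 2.0, 1.5]
      baseline_feature_nsp = sum(s * baseline_story_nsp for s in story_sizes)   # 10.0

      # Features in the baseline epic, relative to the baseline feature (1.0x)
      feature_sizes = [1.0, 3.0, 0.5]
      baseline_epic_nsp = sum(f * baseline_feature_nsp for f in feature_sizes)  # 45.0

      # Epics in the portfolio, relative to the baseline epic (1.0x)
      epic_sizes = [1.0, 2.0, 4.0]
      portfolio_nsp = sum(e * baseline_epic_nsp for e in epic_sizes)            # 315.0
      ```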

       

      As indicated earlier, for all details see http://bit.ly/1cPgYJQ, or ask for the technical report by sending me an email at Satish.Thatte@VersionOne.com.

  • George Dinwiddie  ~  3 years ago

    Hi, Satish,

    I'm puzzled by this submission. For large-scale development, you seem to be promoting long-term estimation based on story points. But stories are typically estimated just before development, to avoid staleness in the estimates and spending undue effort "up front" on stories that might change. Am I reading this wrong?

     - George

    • Satish Thatte  ~  3 years ago

      George,

      Thank you for your comment.   I am not proposing or promoting long-term estimation based on story points.  In fact, as the abstract for my proposed presentation says, "You may also be hard pressed to estimate portfolios and programs when their stories are not even defined.  This is like estimating something that is unknown!"

      My Calibrated Normalization Method (CNM) estimates the scope of work at the portfolio and program levels without knowing lower-level story point details, as those stories are not available yet (that story point estimation happens just before development starts, as you have rightly said).  For a portfolio, CNM requires identification of a baseline epic, the epic about which the portfolio leadership team has the most knowledge. Similarly, CNM requires identification of a baseline feature in the baseline epic, the feature about which the program leadership team has the most knowledge. Finally, CNM requires identification of a baseline story in the baseline feature, the story that the program leadership team feels most confident in estimating.

      All we need is a baseline epic (at the portfolio level) and its features, a baseline feature within the baseline epic (at the program level), and the stories within the baseline feature, including the baseline story.   Think of this as a sample of the much larger work backlog comprising the whole portfolio.  The details of the entire portfolio backlog (all epics, all features, and all stories) are not known at this stage.

      Only the baseline story is estimated in normalized story points (NSPs). Once the baseline story is estimated, all other stories in the baseline feature are estimated against it using relative sizing techniques; the roll-up of those story estimates then becomes the estimate for the baseline feature in NSPs. Other features in the baseline epic can then be relative-sized against the baseline feature and rolled up into the baseline epic estimate in NSPs. Finally, other epics in the portfolio can be relative-sized against the baseline epic and rolled up into the portfolio estimate in NSPs.    I will describe in detail what normalized story points are and how they also solve the problem of developing meaningful velocity metrics across teams, programs, and portfolios.

      I have applied this method in my client engagements since 2010 and will present examples.   The details of CNM are presented in my blog series at http://bit.ly/1cPgYJQ.

      Regards,

      Satish Thatte


  • Liked Matt Badgley

    Matt Badgley - Yes, Words Really Do Mean Things - Establishing a Shared Language

    60 mins
    Workshop
    Intermediate

    During this conference, within the books we read, and in our day-to-day lives, we use words as a means to negotiate, interact, express, and do. Words, whether written or spoken, can play differently depending on the people who exchange them. In the world we live in today, words are bantered about so freely that they can cause a war, unite a community, save a marriage, or demoralize a team.

    As we see today, the concepts of agile are permeating the enterprise, scaling out from the team to the program management office to the executive chambers. Words are often misused and misunderstood, and lead to bad behaviors.

    In this session, we'll discuss the general challenges of communication and the overwhelming vocabulary we have embedded in our craniums.  We'll explore words -- in particular, the words we use every day around software development. We'll look at how some of the basic words we use, like Velocity, Sprint, and Team, have clear meanings and plenty of baggage.

    To help solidify the learning of this workshop, we'll use a couple of brainstorming games -- so come prepared to get engaged. We'll wrap up by using our collective experiences to either find better ways to explain our words or establish brand new ones. Our ultimate goal is to establish a way for an organization to build a ubiquitous language around the work it does and ultimately improve communication, which will lead to better agile transformations and, hopefully, better solutions.

  • Liked Itamar Goldminz

    Itamar Goldminz - Lean Scaling: From Lean Startup to Lean Enterprise

    60 mins
    Talk
    Intermediate

    Congratulations! You've found the right product-market fit, and it's now time to scale your business. But growing your organization often means slower decision making, increased complexity, and a higher chance of misalignment. How can you grow your business while staying lean? Learn five key lessons on how to use smart tooling and process to address these complicated growth challenges.

  • Liked Dave Chesebrough

    Dave Chesebrough - Considerations for Agile Adoption at the Team, Project, and Organizational Levels

    60 mins
    Panel
    Advanced

    Change is hard. For any organization, team, or individual, the ability to change is difficult even when the desire for change exists. Some studies have revealed that even when people know they need to change, even at the risk of their lives, it is still difficult to adopt new practices and behaviors.  Knowing this, what are organizations and project teams doing to make agile adoption easier, and how are they supporting the teams and individuals new to this way of developing software products and systems?

    Through a roundtable discussion with representatives from industry and government, we will share with you our experiences with Agile on Federal government projects and programs, the challenges we faced, lessons learned, and different activities we performed as we went through an agile transition. The intent is that our experiences will provide you with ideas that you can take back to your organization and teams to support your agile journey.

    The panelists will share their experiences in bringing agile to their own organizations as well to their government clients.  Topics to be addressed include:

    • What makes adoption easier?
    • Challenges faced and tactics to overcome them.
    • Lessons learned from a broad spectrum of successful, and unsuccessful, adoptions of agile methods in acquisition.

    Moderator:

    Dave Chesebrough, President, Association for Enterprise Information

    Panelists:

    Dr. Suzette Johnson, PMP, CSP, CSC, Certified (Agile) Scrum Coach, NGIS Technical Fellow and Chair of the Northrop Grumman Agile CoP.  Suzette leads development of agile practices across programs serving government customers, including DoD and Federal Health IT.

    Robin Yeman, Agile Transition Lead / SME at Lockheed Martin, where she defines Agile strategy across capability areas at IS&GS; identifies and implements metrics to verify the results of the strategy and enable course correction; develops Agile SMEs to support strategic consulting for program start-up, transition from waterfall, release planning, and execution; teaches and educates at all levels of LM to help LMCO better meet customer needs; certifies large teams in the Scaled Agile Framework; and provides support in developing Performance Measurement Baselines and Agile EVM.

    Jerome (Jerry) Frese, Program Management Analyst at the Internal Revenue Service, is the organizer of an inter-agency seminar whose purpose is to bring federal SDLC practitioners together so they can establish a network, learn about and share best practices, and collaborate on new and innovative ways to support projects. Through the series of nine seminars, he has worked with 33 other government agencies fostering the implementation of agile in Federal IT. In his own agency, he brings 40 years of software development experience to his job as the Senior Methodologist at the IRS.

    James Barclay, Senior Systems Engineer, NGA Architecture & Engineering Group National Geospatial-Intelligence Agency.

  • Liked Tom Friend

    Tom Friend - Agile Methods Embedded in the United States Military War fighting Methods.

    60 mins
    Talk
    Advanced

    AgileDC 2014 Track: Government


     a) Title –

     Agile Methods Embedded in the United States Military War fighting Methods. How OODA & MDMP War Fighting & Maneuver Warfare Stack Up Against Agile Software Development. Reflections of a Crew Dog / Scrum Master

     b) Summary –

    Agile = Military Decision Making Process

    Scrum = OODA loop (Observe, Orient, Decide, Act)

    Military maneuver war theory = Lean principles

     c) Description

     This lecture walks participants through the crossover points between Agile Scrum and Observe-Orient-Decide-Act (OODA), the Military Decision Making Process (MDMP), and the lean principles of maneuver warfare.

     The lecture provides the Agile practitioner engaged in Federal DOD Agile organizational transformation with tools and touch points that will resonate with military decision makers. These tools and narratives are bridges to build trust and dialog. They are concrete starting points for engaging in relevant conversations that lead to constructive outcomes.


    The application of the content in this lecture is for a focused audience. However, the message is a fantastic way to show how the Agile Scrum processes are used in other areas. For the non-Federal Agilist, the outline of OODA and MDMP will be quite novel. For example, the history of the OODA loop, which formed during the birth of dogfighting in the Jet Age, was the beginning of the iterative refinement that led to what we know as Scrum today.

     

    Boyd’s OODA Loop applied to human behavior:

    Goal: successful interaction with other loops

    Objective: get inside the opposing OODA loop

    Outcome (destructive): air combat, warfare

    Outcome (constructive): Agile software engineering process

     When you’re doing OODA “loops” right, accuracy and speed improve together; they don’t trade off. A primary function of Agile “loops” is to build an organization that gets better and better at things.

     Additionally, this lecture shows numerous crossover examples of MDMP and Agile in general, along with an overview of how maneuver warfare is an adaptation of Lean principles.

     The end goal is to show how Scrum, Agile, and Lean map to military methods. The focus of these processes is to quickly develop flexible, tactically sound, and fully integrated, synchronized plans that increase the likelihood of mission success. It is the same in IT development.

     d) Learning Objectives

     Learning Objective - Provide Federal DOD Agilists with ways to communicate to military decision makers that Agile Scrum is OODA and MDMP by different terms. It is nothing new, just being applied differently using a new vocabulary.

     Outcome - Present the similarities of Agile Scrum versus traditional, proven Military Decision Making Processes.

     Outcome - Provide a bridge of understanding between Agile Scrum and OODA & MDMP for military and DOD contractors who are unfamiliar with Agile methodologies.

     Outcome - Present talk tracks and narratives that demonstrate how the Agile Methodology complements MDMP.

     e) Target Audience

     The primary level of audience understanding and comprehension is Level 3, Performing - the target audience is experienced Scrum/agile practitioners (2+ years).

     This is a very focused/specialized session for those who can apply the lessons. However, it is also a very cool session for those who just want to sit in and see how Scrum is applied in aerial combat dogfights, and Agile in the broader war fighting process.

    For those in the Federal DOD game, the takeaways are several narratives to leverage for agile transformations within the Federal and DOD space, targeting DOD military decision makers in order to break down transformation barriers and perceived risk.

     f) Information for Review Team – Link to Presentation: https://onedrive.live.com/view.aspx?cid=FEDBE246E52347F9&resid=FEDBE246E52347F9!1092&app=WordPdf

     g) Presentation History –

     This presentation was given at the Agile in Government Summit in Washington, DC, in 2014. It has been well received within the Federal and DOD space.

    It is new thought capital and slideware that has not been presented to a general agile audience.

     About the presenter: This lecture is presented by LtCol Tom Friend, USAF (Retired), a US military combat veteran, pilot, and squadron commander with operational experience in the Navy and Air Force who also served on the ground with the Army and Marines as a Forward Air Controller. He is a distinguished graduate of the Air War College and holds a BS in Aeronautics. On the federal side, he is a graduate of the Army Logistics Management College in federal contracting. He has served as a Federal Acquisition Program Manager and acceptance test pilot at a US military aircraft manufacturing facility. He also has 20+ years of experience as a project manager and 10+ years of Agile XP Scrum software development experience in various IT markets.

  • Liked Cindy Shelton

    Cindy Shelton - Retrospective: An Agile Failure in Government Application of Agile

    60 mins
    Workshop
    Intermediate

     

    Unfortunately, much too often, everyday practice deviates undesirably from "best practice" or what is considered optimal.  While we don't like to admit it, there ARE failures and challenges in applying the Agile philosophy to the US Government and other bureaucratic organizations.  This working session uses the Agile practice of a Retrospective, with the attendees as the team, to explore those challenges and the actions to take in the next "iteration."