Washington DC · Oct 24th 11:00 - 11:45 AM EDT · Room 6

You're a Certified Scrum Master. Or perhaps you're an Agile Manager, Agile Coach, or Facilitator.

Maybe you're newly minted, or maybe you've been doing it a while, but either way you've noticed that not everything works the way the training or certification class implied it should.

In this session, Camille Bell will explore what you weren't told in training, but need to know. Such as:

  • What assumptions Scrum makes that may not apply to your company or organization
  • Why some types of teams should not use Scrum and what they should use instead
  • How soon Scrum of Scrums stops scaling and what to use when it doesn't scale
  • Why some teams don't improve despite holding retrospectives
  • How to recognize the hockey stick burn down and what to do about it
  • What's a WIP limit and when it can be helpful
  • When estimation is most helpful, when it's a complete waste, and what to do instead
  • Why simple prioritization of a Product Backlog won't generate a Minimum Viable Product
  • Why the "As a …, I want …, so that …" user story isn't enough and what you need to add
  • What are the critical missing practices your development team needs
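
As a taste of one of those topics, a WIP (work-in-progress) limit can be sketched in a few lines of code. This is a minimal illustration only; the column name and limit below are hypothetical examples, and nothing here is prescribed by Scrum itself:

```python
# Minimal sketch of a WIP-limit check on a team board column.
# The column name and limit are hypothetical examples.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.stories = []

    def pull(self, story):
        """Pull a story into this column only if the WIP limit allows it."""
        if len(self.stories) >= self.wip_limit:
            return False  # at the limit: swarm on existing work instead
        self.stories.append(story)
        return True

in_progress = Column("In Progress", wip_limit=2)
assert in_progress.pull("story-1")
assert in_progress.pull("story-2")
assert not in_progress.pull("story-3")  # blocked until something finishes
```

When a pull is refused, the team swarms on the stories already in progress instead of starting new work, which is the behavior that prevents the end-of-sprint pile-up.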


Outline/Structure of the Talk

Pretty standard format: Slides and lecture with questions at the end.

For each issue discussed in the talk:

  • What - general problem description
  • Impact - lesser and greater impacts of that issue
  • Why - likely causes of the issue
  • Mitigation - things to try: remedies targeting each individual cause, general options that address multiple causes, or both

Learning Outcome

Attendees will learn:

  • To recognize when Scrum's cadence doesn't fit the team and which alternatives can provide a cadence that does
  • To recognize when Scrum of Scrums isn't scaling and strategies to solve the scaling problem
  • To understand Scrum's assumptions on the makeup of cross-functional teams and what to do when your company or organization doesn't fit the Scrum model
  • To recognize the hockey stick burn down, why it is unhealthy and demoralizing for your development team, and what to do about all those stories that pile up at the end of the sprint
  • To improve retrospectives and make them more actionable
  • To understand how to use story sizing to prevent the hockey stick
  • To use Work in Progress Limits and Swarming to prevent the hockey stick
  • To better understand the benefits and limitations of estimates
  • To recognize the real purpose of estimation
  • To recognize when your technical team is spending too much time on estimation for its value
  • To recognize what shouldn't be estimated and what should be used instead
  • To understand the limitations of traditional User Story Workshops and why simple prioritization of a Product Backlog won't generate a Minimum Viable Product
  • To recognize the limitations of traditional user story formats and how to define the Who, What and Why of the stories better
  • To understand why Scrum doesn't specify any technical practices
  • To recognize that if your team is doing technical work, it needs technical practices
  • To know which technical practices your team needs and how they can learn them

Additionally, attendees will leave with references and links to explore these topics in greater depth.

Target Audience

Primarily: Scrum Masters, Agile Managers, and Agile Coaches & Facilitators

Also: Product Owners, Agile Tech Leads, and other Agile Leaders

Submitted 6 years ago

  • Shawn Faunce

    Shawn Faunce - The Awkward Teenager of Testing: Exploratory Testing

    45 Mins
    Talk
    Beginner

    We think we understand that awkward teenager.

    Many experienced testers will claim exploratory testing expertise, but too few have ever written an exploratory testing charter, and even fewer have applied a heuristic in that charter. We think we understand exploratory testing just as we think we understand teenagers, because “we have been there”. However, the reality is that many of the words currently used in exploratory testing are foreign to us and we feel awkward about our lack of knowledge. The goal of this talk is to give people experience writing and executing exploratory testing charters, creating mind maps, and applying exploratory testing heuristics.

    The talk is intended to introduce people to the exploratory testing techniques described by Elisabeth Hendrickson in her book Explore It! with some added material from the work of Cem Kaner and James Bach.
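
    For readers who haven't seen one, Hendrickson's charter template from Explore It! has the form "Explore <target> with <resources> to discover <information>". A trivial sketch of the template; the filled-in values are hypothetical examples:

```python
# Hendrickson's charter template from "Explore It!":
#   Explore <target> with <resources> to discover <information>
# The filled-in values below are hypothetical examples.

def charter(target, resources, information):
    return f"Explore {target} with {resources} to discover {information}"

example = charter(
    target="the checkout page",
    resources="malformed coupon codes",
    information="how input-validation failures are reported",
)
print(example)
# → Explore the checkout page with malformed coupon codes to discover how input-validation failures are reported
```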


  • Craeg K Strong

    Craeg K Strong - Behavior Driven Development Workshop

    45 Mins
    Workshop
    Beginner

    Behavior Driven Development / Acceptance Test Driven Development (BDD/ATDD) is a new, exciting approach to developing software that has been shown to reduce rework and increase customer satisfaction. While other testing tools focus primarily on “are we building the thing right?”, BDD tools such as Cucumber and SpecFlow attack the problem of software directly at its source: “are we building the right thing?” By retaining all the benefits of automated unit testing, while extending them upstream to cover requirements, we cut the Gordian knot of risk and complexity to unleash hyper-productivity. 

    Why is BDD so effective?

    • As a form of test-driven design, BDD helps produce frugal, effective, and testable software.
    • As a development tool, BDD frameworks like SpecFlow provide many convenience functions and come pre-integrated with powerful libraries like NUnit and Selenium to make writing tests a snap.
    • As a collaboration tool, BDD helps ensure the “three amigos” (tester, analyst and developer) sync up – ahead of time.
    • As a facilitation technique, BDD enables product owners to efficiently provide the team with concrete examples that clarify the true intent of a user story and define the boundaries.
    • As a reporting tool, BDD captures functional coverage, mapping features to their acceptance criteria to their test results, in an attractive hierarchical presentation.

    Want functional documentation? How about documentation that is guaranteed to be correct, because every feature maps to its test results? Witness the holy grail of traceability – executable specifications.

    We will spend a few minutes talking about the context and pre-requisites, so attendees have an idea of where BDD fits in, and what type of investment they are signing their teams up for. We will see that in return for a modest amount of investment in tools and training, very significant benefits can be realized, and the benefits compound over time.

    This workshop then dives right in to Gherkin, the structured English language technique used to capture BDD specifications. We will spend the better part of the session learning the tricks and techniques that make for robust and maintainable gherkin specifications. We will review and critique lots of examples, both good and bad.
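
    To make the binding between Gherkin text and code concrete, here is a rough Python sketch of the mechanism. Real frameworks such as Cucumber, SpecFlow, and behave use pattern matching and far more machinery; the step texts and shopping-cart scenario below are hypothetical examples:

```python
# Rough sketch of how BDD frameworks bind Given/When/Then steps to code.
# Step texts and the shopping-cart scenario are hypothetical examples.

steps = {}

def step(text):
    """Register a step-definition function under its Gherkin text."""
    def register(fn):
        steps[text] = fn
        return fn
    return register

@step("Given an empty cart")
def empty_cart(ctx):
    ctx["cart"] = []

@step("When the user adds an item")
def add_item(ctx):
    ctx["cart"].append("item")

@step("Then the cart has 1 item")
def cart_has_one_item(ctx):
    assert len(ctx["cart"]) == 1

def run(scenario):
    """Execute each Gherkin line against its registered step definition."""
    ctx = {}
    for line in scenario:
        steps[line](ctx)
    return ctx

run([
    "Given an empty cart",
    "When the user adds an item",
    "Then the cart has 1 item",
])
```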

    We will review several examples of reports generated from BDD tools, to provide context and to immediately highlight the bottom line business value that makes an investment in BDD so worthwhile.


    Come and learn why Behavior Driven Development is taking the software world by storm!

  • David W Kane

    David W Kane / Deepak Srinivasan - "Hitting the Target" - Business Value in Mission-Focused Organizations

    45 Mins
    Workshop
    Beginner

    In the simplest of terms, software development decisions for commercial organizations can be reduced to a calculation of whether the cost of developing the software will be outweighed by the estimated revenue generated or costs saved by the software. However, as Mark Schwartz points out in his book “The Art of Business Value”, this simple explanation is insufficient for commercial organizations, and not applicable for government and other non-commercial organizations for whom the impact of software isn’t primarily measured in terms of revenue.

    In this session participants will experience a simulation created to explore the question of how to make decisions about investments to deliver mission and business value, by examining the impact of those decisions on the performance of organizations in changing environments.

  • Paul Boos
    IT Executive Coach, Excella

    45 Mins
    Workshop
    Beginner

    So why does pair programming (or any form of pairing, really) work? Well, rather than tell you why, let's experience it!

    This is a simple 3 round exercise that you can do with your teams and managers to demonstrate the benefits of pairing. It will show the linkage between having a shared mental model through collaboration and ease of integrating the resulting work.

  • Tim Gifford

    Tim Gifford - "DevOps" on Day 1 with Operations First Delivery

    45 Mins
    Talk
    Beginner

    DevOps lore tells legendary tales of “Unicorn” companies. We’re told these mythical companies continuously deliver software to production with nary a blemished aura or mussed mane. Can it be so? Is this but a fairytale?

    In this talk, we will dispel the fantasy and show you how to get similar results. Spawning the first unicorn is the most difficult, so I will show you the specific steps, tools, techniques and architectural patterns of Operations First Delivery to create your first “Unicorn” project.

  • M. Scott Ford

    M. Scott Ford - Embracing the Red Bar: A Technique for Safely Refactoring Your Test Code

    45 Mins
    Talk
    Intermediate

    Does your team treat test code differently than production code? Do you let your test code accumulate duplication and complexity that you'd normally attempt to squash in your production code? Have your tests become brittle? Are you worried that they aren't providing you the same value they used to? Have you strongly considered dumping your test suite and starting over? Are you afraid that if you refactor your test code, you'll introduce false positives?

    If you said yes to any of those questions, then this talk is for you.

    We'll explore the technique of "refactoring against the red bar" (http://butunclebob.com/ArticleS.MichaelFeathers.RefactoringAgainstTheRedBar), and how you can employ this technique to confidently refactor your test code. No longer do you need to let your test code have a lower standard of quality than your production code.

  • Dave Nicolette

    Dave Nicolette - When you don't need TDD and why

    Consultant, Neo Pragma LLC
    45 Mins
    Others
    Beginner

    Ideas similar to test-infected development or test-driven development have been around quite a while - at least since Alan Perlis wrote about interleaving small amounts of design with small amounts of testing in the 1968 Proceedings of the NATO Software Engineering Conference. Yet, even today, there are endless debates about whether such an approach is useful. Some consider it a baseline practice for any professional developer. Others consider it extra work that adds no value. 

    There's certainly more than one way to achieve a goal. What are the goals, when we write and deliver software professionally? Let's identify the various stakeholders of a software system and enumerate the needs of each. Then, let's walk through several popular ways of building software - TDD and others - and see how we can meet those needs using each approach. 

  • Shawn Faunce

    Shawn Faunce / Martin Folkoff - What You are Doing Wrong with Automated Testing

    45 Mins
    Talk
    Beginner

    We firmly believe that automated testing puts the "A" in "Agile". Without an effective suite of automated tests, your ability to be truly agile (that is, to embrace change) can only be based on the hope that your latest change doesn't have unintended consequences. Additionally, without automated tests, you are missing a vital component in getting feedback into the development team's hands. In our travels, we have encountered many organizations that are struggling with automated testing. These organizations are successfully adopting many Agile techniques but are failing when it comes to automated testing. We frequently hear "Automated testing just doesn't work for us" (eerily reminiscent of the days when we would hear, "Agile just doesn't work for us"). From our experience addressing their challenges, we have identified anti-patterns common across these organizations. These anti-patterns look like they should work, but are in fact doing more harm than good.

    This talk is about those anti-patterns. We have given those anti-patterns a name and a face to help organizations understand why they are not getting the benefits from automated testing that others are. We describe several anti-patterns, such as the "Ice Cream Cone", the "Monolith", the "Sunk Cost". We explain why these anti-patterns appear to be good solutions, what makes them attractive, and why they do more harm than good. We talk about the right approach and draw on our experiences helping organizations adopt a robust automated testing strategy that instills confidence and provides fast feedback to the development team. We explain what benefits from automated testing the anti-pattern is preventing. 

  • Craeg K Strong

    Craeg K Strong - Bringing DevOps to an Entrenched Legacy Environment with Kanban

    45 Mins
    Talk
    Beginner

    At a Federal Agency or a large commercial company you may get a chance to work on a major program that makes a real difference in people’s lives.   But the (legacy) software behind such programs is often large and complex, and therein lie some challenges.  Here are some of the challenges we faced on a major 15+ year old legacy system comprised of 2M lines of source code:

    • Maintenance costs were escalating
    • It seemed like every time an issue was fixed, it caused two more
    • Lengthy delays between major software releases
    • New releases suffered from high priority bugs that had to be hot-fixed immediately
    • The software was taking longer and longer to be fully tested; in fact, it became practically impossible to test every feature.

    When looking to implement agile practices on a legacy program, it is hard to know where to begin.  Innovative Silicon Valley companies like Etsy leverage DevOps and Continuous Delivery practices to achieve new levels of automation and agility, shrinking development lead times and deploying to production many times each day.  However, it can be a struggle to implement these practices for legacy systems that run our core businesses.  To make matters worse, the agile community offers relatively little practical guidance for implementing DevOps practices in legacy environments.  Fortunately, the Kanban Method provides a practical way to gradually evolve these core systems towards achieving DevOps cost savings and efficiencies, even if you don’t have a massive budget.

    Through a case study involving a criminal justice system for a US government agency, we will examine how the Kanban method helped us identify and remove the barriers that prevented us from implementing DevOps automation for legacy systems.  Just as importantly, Kanban provided the means to measure the efficacy of our efforts, prompting us to course-correct when necessary. We will review some interesting examples using the Microsoft technology stack, but these lessons apply equally to Java, LAMP, MEAN, or any other set of technologies.  The end result was better quality and collaboration and faster delivery of value to our stakeholders.  Perhaps it is possible to teach an old dog new tricks, after all.
