8 guiding principles for Agile Coaches (or change agents) from the Spotify Ads R&D Agile Coaching team

Online · Nov 18th, 05:15 - 05:35 PM IST · Zoom · 155 Interested

An introduction to and explanation of the 8 guiding principles designed by the Agile Coaching team for Spotify Ads R&D, and how they might help you with your own change efforts.

  1. We are more impactful with both team-level insight AND leadership relationships;
  2. We should not become operational (or at least be careful about becoming operational);
  3. Focusing too much on short-term tactical wins limits the ability to have sustained impact;
  4. Coach (aka change agent) collaboration is more effective than silos;
  5. Results are for the short-term; systems and habits are for the long-term;
  6. Involving leaders (both formal and informal) in both brainstorming and implementation makes improvement faster;
  7. Coaching structure should follow coaching strategy; coaching strategy should follow product/business strategy;
  8. Sharing work and successes should be intentional, not just organic.

Outline/Structure of the Talk

  1. (2 min) Brief overview of the concept of the talk. These are guiding principles we designed to amplify the impact of our coaching team. They should be useful for any group engaging in change or transformation.
  2. (16 min / 2 min each) Walk through each of the 8 guiding principles, describing what it means, why it was included, and how it might apply more generally;
  3. (2 min) Concluding summary

Learning Outcome

  • 8 guiding principles for Agile Coaches (or change agents)

Target Audience

Agile Coaches; change agents; directors/managers who manage coaches



Submitted 1 year ago

  • 45 Mins

    In speaking about better ways of thinking and problem-solving, Linda has introduced Jonathan Haidt's model for the brain. He proposes that the rational, conscious mind is like the rider of an elephant (the emotional, unconscious mind) who directs the animal to follow a path. In Fearless Change, the pattern Easier Path recommends making life easier to encourage reluctant individuals to adopt a new idea. Linda suggests that in conversations with others who see the world differently, we "talk to the elephant" instead of the "rider." That is, don't use logic or facts, but appeal to the emotional brain of the listener as well as making the path more attractive. There is always the question: What's the best way to talk to the elephant? This presentation will provide some answers. Linda will present the best elephant-speak and outline suggestions for providing an Easier Path.

  • Jason Yip

    Jason Yip - Experimenting with BAPO in Spotify Ads R&D: aligning product strategy, technical architecture, ways of working, and org structure

    Staff Agile Coach
    1 year ago
    Sold Out!
    20 Mins
    Experience Report

    BAPO stands for Business Architecture Process Organisation. It is Jan Bosch's more fleshed-out expression of "structure should follow strategy". I recently experimented with applying this framework within Spotify Ads R&D and would like to share what worked and what didn't. The concepts expanded beyond BAPO to include product capabilities versus architecture services; overlapping product lifecycle S-curves; Simon Wardley's Pioneers, Settlers, Town Planners; and a reframing of the "teach people how to fish" metaphor. Beyond sharing my successes and failures, this session will also encourage attendees to sketch how they might try this framework in their own context and anticipate what issues may appear.

  • Kelsey van Haaster

    Kelsey van Haaster - Passwordless: a story of risk, protection and excellent UX

    45 Mins
    Case Study

    The June 2017 NIST special publication 800-63B, covering Digital Identity, turned what had previously been the gold standard for passwords on its head. For the first time, NIST recommended removing complexity rules and password expiry cycles, supporting longer passwords, dropping restrictions or requirements on special characters, and preventing the use of common passwords and those already exposed in a known breach. Why these changes? Because, with the best will in the world, the human element in our security measures is always going to be the weakest link. Forcing individuals, particularly those whose primary role has nothing to do with Information Technology, to remember hundreds of unique, complex passwords is hard. They don't want to, and when we make them, they get it wrong or look for an answer with as little friction as possible. NIST's new guidelines are intended to remove some of that friction. When combined with the use of a password management system and multi-factor authentication, we might hope that our corporate assets are no longer protected by the same password someone used on their favourite shopping site.


    Unfortunately, things are never that simple. For non-technical users, even working with a password manager can present challenges. Not all systems play nicely with password managers, and they also do not stop a user from using the same credential for more than one product. 


    Passwordless authentication is one exciting way forward. This, in itself, is not new technology, having been around in various forms for a while - think magic email links, for example - but that approach still relies upon shared secrets. However, the release of the WebAuthn standard by the W3C and the FIDO Alliance, supported by many key vendors, allows us to take advantage of public key cryptography.

    At ThoughtWorks we have embarked on a journey to introduce passwordless login to our employees, particularly those with high-value accounts who may be less technical than many. The goal of this session is to share what we have learned throughout this process. We will share our goals, challenges and their resolutions. We hope attendees will be inspired to evaluate this technology, which delivers the rarest of things: better security and a fantastic user experience.

  • Naresh Jain

    Naresh Jain - Technical Debt Prioritisation - Identifying And Fixing Highest ROI Issues

    2 years ago
    Sold Out!
    45 Mins

    Does your technical debt backlog look endless? Are you thinking about pausing feature development to resolve tech-debt? Stop. What if I told you that a good chunk of your backlog can simply wait? Tech-debt can seem overwhelming when we look at it as a loosely organised list, and this can lead to several anti-patterns in how we deal with it. Attend this session, where I will share strategies we have been leveraging to identify high-priority tech-debt items so that we can continue feature development while improving code health.

    Problem Statement: Tech-debt often accumulates until productivity takes a serious hit, and then, as a knee-jerk reaction, we try to clean it up all at once. At this point there is just one large list of issues with a loose sense of priority. Net-net, it gives the impression that we have a huge backlog. This can lead to several anti-patterns.

    • Chicken and Egg Problem - Too much Tech-Debt, so feature development is slow. Since feature development is slow, we cannot set aside time to fix issues.
    • Fixing the wrong issues - In the larger scheme of things, it may be counter-productive to fix low priority issues just because they are easy.
    • Pausing Feature Development - Approaches such as "Technical Debt Sprints", where we pause features to resolve tech-debt, are not sustainable even if they offer some short-term benefits.
    • Local Optima - Patchy cleanups which lead to uneven code health across the code base.
    • And many more

    Solution: Understand the impact of each technical debt item at Block, Category and Item level to narrow your backlog down to the issues that matter. While there are several tools that help us identify tech-debt, it is up to us to map our context onto those issues. Attend this talk, where I will go over how to VISUALISE, TRIAGE, PRIORITISE and STRATEGISE in order to get a realistic view of your tech-debt. I will also share my experience of how we have been leveraging time boxes and capacity constraints as a tool to make sure we are only working on the most important tech-debt issues.

    Topics that will be covered:

    • Tech-debt resolution strategies - Anti-Patterns
    • Tech Debt - Understanding Size vs Impact
    • Tech-Debt Manifestations - Matrix view of areas of code and types of problems.
    • Visualise - Triage - Prioritise - Strategise
      • Visual techniques to understand your tech-debt backlog with code analysis tools - Examples with popular tools
        • Bubble Charts - Coverage vs LOC, Maintainability vs LOC etc.
        • Git History - Multiple ways of looking at changes to a piece of code
        • Active Code Paths - Mapping usage to issues
        • Mapping Project Management Data - Bugs, Stories that touch a piece of code
      • Triaging the backlog to quickly eliminate tasks that can wait
        • Block level - Leveraging Logical Architecture
        • Category-wise - Example: Front-End, Backend, API etc.
        • Item-wise elimination
        • Refactor vs Rewrite
      • Prioritising tech-debt with layers of detail such as - Churn, LOC, Coverage, Bugs etc. Hotspot Identification.
      • Strategise - Approach to resolving each tech debt item based on Tech-Debt Manifestation Matrix
    • Tech-Debt resolution - Hypothesis based, Data Driven approach
      • A template to capture your hypotheses, experiments and learnings
      • Visual confirmation that the issue is resolved
      • Just-Enough resolution - The uncomfortably short time-box - Imposing constraints to avoid runaway clean-ups
      • Guard rails to avoid a repeat of the same issue
    • Incorporating tech-debt resolution into your Iterations, Weeks, Sprints, etc.
      • Identifying the right cadence on how often you fix debt - Hours per Day, Days per Week etc.
      • Tech-debt backlog grooming
      • Cycling through categories of tech-debt
    • Measuring Progress
      • Short-term - Measuring immediate impact on the code
      • Medium-term - Productivity improvements (Readability, Issue resolution time, etc)
      • Long-term - Team Health - Knowledge Silos, New Team Member Onboarding Time
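    The hotspot-prioritisation idea above (combining churn, size and coverage layers to find the highest-ROI debt) can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the speaker's tooling: the file names, weights and scoring formula are hypothetical, and in practice churn would come from `git log` and coverage from an analysis tool.

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    churn: int        # e.g. number of commits touching the file
    loc: int          # lines of code
    coverage: float   # test coverage, 0.0 to 1.0

def hotspot_score(f: FileStats) -> float:
    # Files that change often, are large, and are poorly covered
    # accumulate risk fastest; this simple product reflects that.
    return f.churn * f.loc * (1.0 - f.coverage)

def rank_hotspots(stats: list[FileStats], top: int = 3) -> list[str]:
    # Highest-risk files first; everything below the cut can likely wait.
    return [f.path for f in sorted(stats, key=hotspot_score, reverse=True)[:top]]

stats = [
    FileStats("billing/invoice.py", churn=42, loc=1200, coverage=0.35),
    FileStats("utils/strings.py", churn=5, loc=150, coverage=0.90),
    FileStats("api/routes.py", churn=30, loc=800, coverage=0.60),
]
print(rank_hotspots(stats, top=2))  # ['billing/invoice.py', 'api/routes.py']
```

    A ranking like this is what the bubble-chart view (Coverage vs LOC, with churn as bubble size) makes visible at a glance.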
  • Gunnar Grosch

    Gunnar Grosch - After CI/CD, there’s now Continuous Configuration

    45 Mins

    In the last decade, the movement towards CI/CD has been transformational for getting value out to customers quickly. But in recent years, there have been new processes and tooling for using configuration post-deployment, in the form of feature flags, operational config, or other runtime configuration. Continually adjusting configuration to update and tune your code in production is a powerful, fast, and safe way to deploy value to customers. Join us in a discussion about how Amazon uses Continuous Configuration tools at scale to move fast and ensure maximum availability of our services.
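    To make the idea concrete, here is a minimal sketch of runtime configuration with a last-known-good fallback. This is my own illustration, not Amazon's tooling: the keys and class are hypothetical, and in production the config document would be fetched periodically from a config or feature-flag service rather than passed in as a string.

```python
import json

# Hypothetical defaults; a real service would load these at startup.
_DEFAULTS = {"new_checkout_flow": False, "request_timeout_ms": 2000}

class RuntimeConfig:
    def __init__(self, defaults: dict):
        self._defaults = dict(defaults)
        self._current = dict(defaults)

    def apply(self, raw: str) -> None:
        """Apply a new config document; keep last-known-good on bad input."""
        try:
            update = json.loads(raw)
        except json.JSONDecodeError:
            return  # reject malformed config rather than crash in production
        if not isinstance(update, dict):
            return  # config document must be a JSON object
        self._current = {**self._defaults, **update}

    def get(self, key: str):
        return self._current.get(key, self._defaults.get(key))

cfg = RuntimeConfig(_DEFAULTS)
cfg.apply('{"new_checkout_flow": true}')   # flip a flag post-deployment
print(cfg.get("new_checkout_flow"))        # True, with no redeploy
```

    The key design point is that a bad config push degrades to the previous good state instead of taking the service down, which is what makes post-deployment tuning safe.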

  • 45 Mins

    Have you heard of “flaky tests”?

    There are a lot of articles, blog posts, podcasts and conference talks about what "flaky tests" are and how to avoid them.

    Some of the ideas proposed are:

    • Automatically rerun failed tests a couple of times, and hope they pass
    • Automatically retry certain operations in the test (ex: retry click / checking for visibility of elements, etc.) and hope the test can proceed

    Unfortunately, I do not agree with the above ideas, and I would term these as anti-patterns for fixing flaky / intermittent tests.

    We need to understand the reasons for flaky / intermittent tests. These could include:

    • timing issues (i.e. page loading taking time)
    • network issues
    • browser-timing issues (different for different browsers / devices)
    • data related (dynamic, changing, validity, etc.)
    • poor locator strategy (ex: weird & hard-wired xpaths / locators)
    • environment issue
    • actual issue in the product-under-test

    In this session, with the help of demos, we will look at the following techniques you can use to reduce / eliminate the flakiness of your test execution:

    • Reduce number of UI tests
    • Use Visual Assertions instead of Functional Assertions
    • Remove external dependencies via Intelligent Virtualization

    Demo will be done using the following and sample code will be shared with the participants
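    One of the techniques above, removing external dependencies, can be illustrated with a simple stub. The function and currency-rate service here are hypothetical (not the session's sample code): the point is that injecting the dependency lets the test run deterministically, with no network involved.

```python
from unittest import mock

def checkout_total(cart: list[float], fetch_rate) -> float:
    """Convert a USD cart total to EUR using an injected rate fetcher."""
    rate = fetch_rate("USD", "EUR")
    return round(sum(cart) * rate, 2)

# In a test, replace the live service with a deterministic stub so the
# test cannot fail because of network or third-party flakiness.
stub = mock.Mock(return_value=0.90)
print(checkout_total([10.0, 5.5], fetch_rate=stub))  # 13.95
stub.assert_called_once_with("USD", "EUR")
```

    Unlike blind retries, this addresses the cause of the flakiness (an uncontrolled dependency) rather than hiding the symptom.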

  • Hari Krishnan

    Hari Krishnan - Performance Testing on your Local Machine - The Art of Identifying Performance Issues early in your Development Cycle

    180 Mins

    Does your team have to deal with performance issues very late in their development cycle? Does this lead to a lot of unplanned work in your sprints? What if I told you that your team can validate various performance-related hypotheses right within your sprints? Yes, this is what we have been practising on various teams. Participate in this workshop, where I will share our experience and you will learn the techniques involved through hands-on exercises.

    Problem Statement: Performance Testing has traditionally been an activity that is done in a staging or prod environment (for the brave) by a team of expert performance testers. In my experience, this approach has several issues.

    • Typically high cycle time between test runs (the time taken between making code changes and those changes being deployed and tested in the Perf Test Env). This means developers cannot experiment quickly.
    • The test design may be disconnected from the system design because the people who test it may not have a deep understanding of the application architecture.
    • Performance benchmarking and tuning becomes an afterthought, instead of being baked into our design and constantly validated during the development process

    Solution: Apply "Shift left" to your Performance Testing

    • Enable Developers to run Performance Tests on their machines so that they can get immediate feedback as they make code changes.
    • Identify issues early and iterate over solutions quickly.
    • Only defer a small subset of special scenarios to the expert team or higher environments.

    Talk is cheap, show me code
    I will share the learnings I gained while applying the Shift Left principle to "API Performance Testing", and how we codified the approach into a reusable open-source framework called Perfiz so that any team can take advantage of it.

    Topics that will be covered

    • Challenges running performance tests early in the development cycle
    • Few examples to see Shift Left in action
      • Hypothesis Invalidation Strategy. A scientific approach to reducing dependence on higher environments
      • Avoiding premature performance optimisations and moving to data driven architecture decisions with rapid local performance testing
    • What makes a good API Performance Testing framework? - In the context of Shift Left
      • It is containerised, runs well on local laptop and in higher environments or in hybrid mode
      • Leverages existing API test instead of duplicate load test scripts
      • Helps Developer express load as a configuration DSL without having to learn yet another tool
      • Not just a load generator, it collects data, has pre-set dashboards with basic metrics
      • It is code and not documentation
    • What makes a good performance test report? - In the context of Shift Left
      • To begin with it should be a live monitoring dashboard and not an after the fact report
      • It is visual (graphs and plots) rather than tabulation
      • Merges Load Data and Application performance metrics in a single visual representation over a shared time series based x-axis so that the correlation is clear
    • Perfiz Demo - An open source tool that embodies the above thought process
      • API test to Performance Test Suite in less than a minute with just YAML Config
      • Pre-built Grafana Dashboards to get you started
      • Containerised setup to get you going without any local setup with Docker
      • Prometheus and other monitoring tool hooks to monitor app performance
      • Perfiz in Higher Environments
      • Perfiz Architecture Overview and how you can extend, adapt and contribute back to the community
    • "Shift Left" limitations - Repeatability, My machine vs your machine, etc.
  • Hari Krishnan

    Hari Krishnan - Continuous Evolution Template - A Hypothesis Driven Approach to Avoid Guesswork

    20 Mins

    Do your technology and product decisions involve a lot of guesswork? Has this led to anxiety about possible failure? Attend this talk, where I share my experience leveraging a hypothesis-driven, learning-oriented approach to de-risk such scenarios. The "Continuous Evolution Template" embodies this approach, helping achieve better clarity at an individual level and keeping stakeholders involved in the process.

    Problem Statement: As experienced engineers, we are expected to make decisions with very little upfront information. However, jumping from problem to solution based on guesswork can lead to:

    • Poor outcomes (suboptimal or excessive designs)
    • Instinct-based decisions that, in retrospect, can look quite irresponsible despite the best intentions
    • High anxiety at team and individual level, because guesswork leaves a lot of room for unexpected failures late in the cycle
    • Poor predictability for stakeholders

    Solution: We need a way to prevent the guesswork while facilitating a scientific approach to solutioning. Continuous Evolution Template encourages this by providing a lightweight structure around hypothesising and learning to minimise guesswork. It also helps better articulate our thought process in arriving at a solution. I came up with this template as a mechanism to add basic rigour in listing Problem Statements, Hypotheses, Design Experiments etc. and we have been using this on several projects. In this talk I will be sharing my experience with some real world examples where it was immensely helpful.

    Topics that will be covered

    • Understanding the urge to jump to a solution without detailed analysis - and how to counter it
    • Hypothesis 101 - Quick recap of your science class and how it relates to Software Architecture and Design Decisions
    • Authoring Hypotheses - analysing problem statements without the pressure of solutioning
    • Designing fail-fast / learn-fast experiments to validate / invalidate hypotheses - selecting metrics, success / failure criteria, limiting variables
    • Learning from the experiment and feedback into Hypotheses list
    • Continuous Evolution Template - Templatizes the above techniques in spreadsheet format. Understanding the columns in the template and how to populate them with examples.
    • Applying Continuous Evolution Template to various types of problems - Technology (Tech Debt Management, Database Optimisation, Scaling, etc.) and Product (Feature Progression, Conversion Optimisation)
    • Keeping cycle time under control with Hypothesis and Experiments
    • Involving all stakeholders in the process to improve visibility on progress while we are narrowing down on a Solution
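    The template itself is a spreadsheet, but its structure can be sketched as a small data model. The column names below are inferred from the abstract (problem statements, hypotheses, experiments, learnings) and the example content is hypothetical, not taken from the actual template.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    description: str
    metric: str
    success_criteria: str
    result: str = "pending"   # filled in after the experiment runs

@dataclass
class HypothesisEntry:
    problem: str
    hypothesis: str
    experiments: list[Experiment] = field(default_factory=list)

    def validated(self) -> bool:
        # A hypothesis counts as validated only once every planned
        # experiment has run and met its success criteria.
        return bool(self.experiments) and all(
            e.result == "passed" for e in self.experiments
        )

entry = HypothesisEntry(
    problem="p95 latency exceeds 500 ms under peak load",
    hypothesis="The N+1 query in the orders endpoint dominates latency",
    experiments=[
        Experiment(
            description="Batch the orders query and re-run the load test",
            metric="p95 latency",
            success_criteria="p95 < 500 ms at peak load",
        )
    ],
)
entry.experiments[0].result = "passed"
print(entry.validated())  # True
```

    Writing the problem and hypothesis down before any experiment runs is what replaces guesswork with something falsifiable.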
  • Jason Yip

    Jason Yip - A SWOT assessment of large digital-native firms vs digital transformation followers

    Staff Agile Coach
    1 year ago
    Sold Out!
    20 Mins

    I've worked 6 years in the "tech industry" at Spotify and, before that, 14 years consulting with ThoughtWorks in "legacy" industries. Large digital natives have habits that typical digital transformations do not emphasise enough (strong focus on strategic growth while still managing short-term cash flow; delegating accountability to scale agility; aggressive decoupling to avoid coordination; an experimental mindset and willingness to take bets), but they also have vulnerabilities that legacy industries should not repeat and can possibly exploit (over-specialisation; over-emphasis on individuals over teams; over-emphasis on celebrating status and perks over stakes and outcomes).