Mobile end to end testing at scale: stable, useful, easy. Pick three.

Sep 10th, 10:45 AM - 11:30 AM, Grand Ballroom 1

This talk is about how Facebook turned a great idea with a terrible track record into a great tool for thousands of developers.

The promise of E2E testing — complex, real-world test scenarios from the point of view of an end user — is appealing.
Many attempts have been made over the years to automate large parts of companies' and developers' testing and release processes, yet most of these efforts ended in bitter, hard-learned lessons about the inherent challenges of the approach.

My work at Facebook over the last two years has been making mobile end to end testing at scale a reality.
When others said it couldn't be done, or fell by the wayside, we relentlessly pushed forward, solving problems deemed intractable, and finding new, untold vistas of horror before us.

We've come a long way: E2E testing is now an integral part of Facebook's mobile development and release process.
We'll cover the challenges we faced, and how we chose to solve them or make them irrelevant.
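
(For readers new to the area, here is a minimal, hypothetical sketch of what a single mobile E2E scenario looks like when driven through the open-source Appium Java client. It is purely illustrative; the talk covers Facebook's own tooling and scale, not this code, and every capability and locator below is an assumption.)

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.net.URL;

public class EndToEndSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative capabilities; a real suite would fan out across many
        // devices, OS versions and app builds.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Emulator");
        caps.setCapability("app", "/path/to/app-under-test.apk"); // hypothetical path

        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Drive the app the way an end user would: wait for the UI,
            // interact with it, and assert on what is actually visible.
            WebDriverWait wait = new WebDriverWait(driver, 30);
            wait.until(ExpectedConditions.presenceOfElementLocated(
                    By.id("com.example:id/feed")));                       // hypothetical locator
            driver.findElement(By.id("com.example:id/compose")).click();  // hypothetical locator
        } finally {
            driver.quit();
        }
    }
}
```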

 
 

Outline/structure of the Session

- Introduction

- Development and release processes at FB

- Challenges of E2E testing at scale

- How to make E2E testing stable, useful, easy

- What's next?

- Conclusion

 

Learning Outcome

Successfully applying end to end testing at scale is a hard problem, but we did it. So can you.

Target Audience

Software engineers, anyone interested in testing at scale

Submitted 2 years ago

Comments

  • Dave Haeffner Test  ~  2 years ago

    This sounds like an Experience Report. Can you please change your talk type to that? Also, are there any open source libraries that came out of your experiences that you'll be able to share?

  • Anand Bagmar  ~  2 years ago

    Will you be sharing some specific challenges and the solutions you came up with to solve them?


  • Anand Bagmar - To Deploy or Not-to-Deploy - decide using TTA's Trend & Failure Analysis

    Anand Bagmar
    Director - Quality
    Vuclip Inc.
    2 years ago
    45 mins
    Demonstration
    Intermediate

    The key objective of organizations is to provide and derive value from the products and services they offer. To achieve this, they need to deliver their offerings as quickly as possible, and at good quality.

    For these organizations to understand the quality and health of their products at a glance, a team of people typically scrambles to collect and collate the needed information, all of it by hand.

    So in a fast-moving environment, where CI (Continuous Integration) and CD (Continuous Delivery) are a necessity rather than a luxury, how can teams decide whether the product is ready to be deployed to the next environment?

    Test automation across all layers of the Test Pyramid (be it Selenium-based UI tests, xUnit-based unit tests, or performance tests) is one of the first building blocks for ensuring the team gets quick feedback on the health of the product under test.

    The next set of questions is:
        •    How can you collate this information in a meaningful way to determine that your code is ready to be promoted from one environment to the next?
        •    How can you know whether the product is ready to go 'live'?
        •    What is the health of your product portfolio at any point in time?
        •    Can you identify patterns and quickly analyze test results to aid root-cause analysis of issues over time, and so make better decisions to improve the quality of your product(s)?

    The current set of tools is limited and fails to give a holistic picture of quality and health across the life-cycle of the products.

    The solution - TTA - Test Trend Analyzer

    TTA is an open source product that becomes the source of information giving you real-time, visual insight into the health of the product portfolio, using test automation results in the form of Trends, Comparative Analysis, Failure Analysis and Functional Performance Benchmarking. This allows teams to base decisions on promoting the product to the next environment on actual data points, instead of gut feel.

  • moiz
    Software Engineer
    Saucelabs
    2 years ago
    45 mins
    Talk
    Beginner

    Appium, often dubbed "Selenium for mobile", is at heart a web server written in Node.js. Its architecture is modular: it is composed of many small, independently maintained and tested modules. Testing Appium is challenging but clearly very important, since thousands of users depend on it for their testing. Appium also has all the usual challenges of a large open source project, for example ensuring a consistent JavaScript code style across hundreds of contributors. It's important to have high-quality, readable code.

     
    I will be discussing approaches to and strategies for testing these kinds of large, modular applications. On the Appium team, we use a combination of unit, functional, and integration tests. Modern services like GitHub, Travis CI, and Sauce Labs make it possible for large open source projects to be tested thoroughly, keeping the code and the app at high quality. I will also discuss the use of tools like JSLint and Gulp, which help prevent code style issues.
     
    Testing the tool that is used for testing is clearly very important. This talk aims to showcase how testing should be approached for large, modular projects with many collaborators.
  • James Farrier / Xiaoxing Hu - Making Your Results Visible - A Test Result Dashboard and Comparison Tool

    45 mins
    Demonstration
    Intermediate

    If a test fails in the woods and no one is there to see it, does anyone care? Does anyone even notice? What happens when failing tests become the norm and you can't see the wood for the trees?

     

    After watching last year's Allure Report presentation I was inspired. Selenium tests (and automation tests in general) are often poorly understood by the team as a whole. Reports and emails go unread, with failing tests becoming an expected outcome rather than a glaring red flag. We looked at what Allure brought to the table and, from that base, created a dashboard designed to:

    • Display the results of test runs in a way that is useful to managers, testers and the rest of the development team, including tools to filter specific test runs and view the overall trend of results.
    • Make debugging tests easier by grouping errors, displaying history of test results, filtering tests and offering visual comparison of test runs.
    • Help mitigate the problems flaky tests cause with test run result reporting (say that three times fast).
    • Help with our mobile device certification process, by easily providing a view to compare test runs across devices.

    Since its creation, the dashboard has been used and praised by everyone from managers to developers, with our full suite of tests, from unit to integration to Selenium and Appium, stored on the dashboard. We've managed to:

    • Decrease the time taken to debug test cases.
    • Increase the visibility of all our test suites, with managers having a better idea of how our Selenium test suite is progressing and testers better understanding the coverage of unit tests.
    • Focus the organization on quality.

    We are working with legal at present to have this project open sourced and available to all prior to Selenium Conf 2015.

  • Ragavan Ambighananthan - Distributed Automation Using Selenium Grid / AWS / Autoscaling

    45 mins
    Talk
    Advanced

    Speed of UI automation has always been an issue when it comes to Continuous Integration / Continuous Delivery. If the UI automation suite takes 3 hours to complete, then any commit that happens during this time will not be visible in the test environment, because the next deployment will happen only after 3 hours.

    With 2000+ developers and an average of 250+ check-ins per day, the above issue is replicated 250+ times every day. This is not productive, and the feedback cycle is extremely slow.

    Another issue: with 35+ project teams each using 10 or more Jenkins jobs to run their UI automation, there are 350+ jobs in total, and individual teams have to go through the pain of managing their own. That is duplicated effort and a waste of time; automation teams should spend their time writing reliable automation, not managing Jenkins jobs.

    The solution is to reduce the UI automation run time from hours to minutes, and to use only a handful of jobs to run the distributed automation.

    Goal: to run all UI automation scenarios within the time taken by the longest test case.
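
    (The proposal itself doesn't include code, but the core idea can be sketched: point every worker at a shared Grid hub and run scenarios concurrently, so total wall-clock time approaches that of the slowest scenario. The hub URL, scenario list and application URL below are hypothetical.)

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DistributedSmokeRun {
    // Hypothetical hub address; in practice it would front an autoscaled
    // pool of AWS nodes registered with the Grid.
    private static final String HUB_URL = "http://grid-hub.example.internal:4444/wd/hub";

    public static void main(String[] args) throws Exception {
        // Illustrative scenarios; a real run would enumerate whole suites.
        List<String> pages = Arrays.asList("/login", "/search", "/checkout");

        // One worker per scenario (capped at 50): wall-clock time now
        // approaches the duration of the slowest single scenario.
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(pages.size(), 50));
        for (String page : pages) {
            pool.submit(() -> {
                WebDriver driver = null;
                try {
                    driver = new RemoteWebDriver(new URL(HUB_URL), DesiredCapabilities.chrome());
                    driver.get("https://app.example.com" + page); // hypothetical app URL
                    // ... scenario assertions would go here ...
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (driver != null) driver.quit();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.MINUTES);
    }
}
```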

  • Rajesh Sarangapani / Prabhu Epuri - Visualizing Real User Experience Using Integrated Open Source Stack (Selenium + Jmeter + Appium + Visualization tools)

    45 mins
    Demonstration
    Advanced

    The traditional approach to performance testing does not include client-side processing time (DOM content load, page render, JavaScript execution, etc.) in response times; performance tests have always been about stressing the server, which is why tools like JMeter are so popular. With the increasing complexity of client-side architectures (web, browser, mobile), it has become important to understand the real user experience. Commercial tools have started to provide insight into the real user experience after the bytes are transferred to the client. The ability to call Selenium scripts from JMeter opens new avenues for real-user-experience testing with an open source stack (see the sketch after the list below). This enables us to:

    • Provide page load times similar to the onload times of real browsers
    • Generate a HAR file with the following statistics:
    • A summary of request times and content types
    • A waterfall chart with page download time breakdowns, such as DNS resolution time, connection time, SSL handshake time, request send time, wait time and receive time.
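
    (A common open-source recipe for capturing these statistics, though not necessarily the presenters' exact stack, is to route a Selenium-driven browser through BrowserMob Proxy and export the recorded HAR. The page URL below is hypothetical.)

```java
import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.io.File;

public class HarCaptureSketch {
    public static void main(String[] args) throws Exception {
        // Start an embedded proxy that records all browser traffic.
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0); // 0 = pick any free port

        // Route the Selenium-driven browser through the proxy.
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability(CapabilityType.PROXY, seleniumProxy);
        WebDriver driver = new FirefoxDriver(caps); // era-appropriate constructor

        // Record a HAR for one page view, then write it out for the
        // summary and waterfall analysis described above.
        proxy.newHar("homepage");
        driver.get("https://www.example.com"); // hypothetical page under test
        Har har = proxy.getHar();
        har.writeTo(new File("homepage.har"));

        driver.quit();
        proxy.stop();
    }
}
```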

    Integrating these open source tools enables us to provide the same insights that commercial off-the-shelf tools would offer. At Gallop we have implemented this for multiple clients, giving them insight into various client-side bottlenecks and providing a greater value proposition.

  • James Eisenhauer - An Introduction to the World of Node, Javascript & Selenium

    45 mins
    Talk
    Beginner

    Ever wanted to write Selenium code in Node.js? There seems to be a new JavaScript library written every hour! Entering the world of Node.js can be a daunting task. This session will teach you everything you need to know to make the right decisions when selecting which libraries to use on your new Node.js Selenium project, and what the possible challenges will be.

     

     

  • Russell Rutledge - Blazing Fast UI Validation - 5000 Reliable Tests in 10 Minutes on One Machine

    Russell Rutledge
    Senior Technical Lead
    Nike
    2 years ago
    45 mins
    Talk
    Advanced

    A big blocker to putting a website on truly continuous production delivery is the amount of time it takes to validate that the site works correctly. Tests themselves take time to run, and test results are unreliable to the point where it takes a human to investigate and interpret them. Counting the time it takes to both run tests and interpret the results, test runs for an enterprise web site can take an entire day from inception to useful result.

    This session describes common points of failure in test execution that add both latency and unreliability, and what can be done to overcome them while preserving the value of UI validation. We'll discuss why, after addressing these concerns, UI validation can be unblocked to reliably field thousands of scenarios on a local machine in a matter of minutes.
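
    (The abstract doesn't name specific remedies, but one classic failure point of this kind is the fixed sleep, which adds both latency and flakiness. A minimal before/after sketch, with a hypothetical locator:)

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitSketch {
    // Slow AND unreliable: always burns 5 seconds, yet still fails when
    // the element takes 6 seconds to appear.
    static WebElement withFixedSleep(WebDriver driver) throws InterruptedException {
        Thread.sleep(5000);
        return driver.findElement(By.id("result")); // hypothetical locator
    }

    // Fast AND reliable: returns as soon as the element appears, up to a
    // 10-second ceiling, so no time is wasted and slow loads still pass.
    static WebElement withExplicitWait(WebDriver driver) {
        return new WebDriverWait(driver, 10)
                .until(ExpectedConditions.presenceOfElementLocated(By.id("result")));
    }
}
```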

  • Titus Fortner - What Are We Testing, Anyway?

    45 mins
    Talk
    Intermediate

    Testing strategies and the role of DOM-to-database testing in a world of microservices and client-side MVCs.

    The trends in software development are making UI testing increasingly difficult. Sites are leveraging more dynamic interactions and moving toward Single Page Applications. Gone are the days when the term “and the page finishes loading” makes any sense. This shift is dramatically increasing the number of flaky tests as well as the costs of such testing relative to the benefits, leaving many organizations wondering if they are worth doing at all. 

    The approach to testing that is “good enough” for any given organization is going to vary by context. In this talk, I’ll cover some different testing options and the advantages and disadvantages to each. We’ll discuss the dangers of mocking and stubbing, the problems with relying on testing journeys, and dealing with bloated test suites that are difficult to maintain.

    Another trend in software development is away from monolithic architectures and toward microservices and service-oriented architectures. This approach provides opportunities for decreasing the costs and overhead of UI testing while still maintaining all of the benefits of DOM-to-database verification.

  • Justin Ison - Android Mobile Device Grid & CI - Getting Started

    Justin Ison
    Senior Software Engineer
    Microsoft
    2 years ago
    45 mins
    Talk
    Intermediate

    In the modern era, we have many different cloud testing services to choose from. These services are useful and help reduce the burden of building and maintaining your own Selenium Grid environment. However, there are many scenarios in which you need your tests running locally and quickly: you work for a government agency, you have sensitive software or data you cannot expose to the cloud, or the service costs are too expensive for your organization.

    This presentation will feature getting started with setting up your own mobile device grid, running your tests in parallel, running in CI (Jenkins), and the lessons I have learned along the way.
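
    (As a hedged sketch of a starting point, not necessarily the presenter's setup: run one Appium server per attached device, each on its own port, and pin each session to a device by UDID. The ports, device IDs and app path below are illustrative.)

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class LocalDeviceGridSketch {
    // Hypothetical setup: one Appium server per attached device, each on
    // its own port (e.g. started with `appium -p 4723` and `appium -p 4733`).
    static AndroidDriver driverFor(String udid, int appiumPort) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", udid);  // informational on Android
        caps.setCapability("udid", udid);        // pins the session to one device
        caps.setCapability("app", "/path/to/app-under-test.apk"); // hypothetical path
        return new AndroidDriver(new URL("http://127.0.0.1:" + appiumPort + "/wd/hub"), caps);
    }

    public static void main(String[] args) throws Exception {
        // Run the same smoke test on two attached devices in parallel
        // (device IDs as reported by `adb devices`; illustrative values).
        Thread t1 = new Thread(() -> runSmokeTest("emulator-5554", 4723));
        Thread t2 = new Thread(() -> runSmokeTest("0123456789ABCDEF", 4733));
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    static void runSmokeTest(String udid, int port) {
        try {
            AndroidDriver driver = driverFor(udid, port);
            try {
                // ... exercise the app here ...
            } finally {
                driver.quit();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```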

     

  • 45 mins
    Talk
    Intermediate

    Responsive website design has enabled mobile phones and tablets to fundamentally change how we interact with the internet. Now we have instant access to any website we choose to visit, and this causes headaches for testers, especially automation testers.

    This changes how automation, specifically Selenium, is implemented: the test suite needs to be maintainable, which is difficult, and it will get unruly if neglected.

    The talk will focus on responsive websites; however, the same techniques can be applied to native app testing.

    Utilizing a test case generator allows the test conditions (browser, OS and resolution) to live outside the test itself, so a single test can run against every combination without coding for each option explicitly. With the options external to the test, the driver is easily instantiated and the browser window is resized before test execution, as sketched below.
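
    (A minimal sketch of such a factory, assuming a Selenium Grid hub at a hypothetical local URL: the generator supplies the browser, platform and resolution, and the test itself never hardcodes them.)

```java
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;

public class DriverFactory {
    // Builds a driver from externally supplied conditions, so one test can
    // run unchanged across every browser/OS/resolution combination the
    // generator emits.
    public static WebDriver create(String browser, String platform,
                                   int width, int height) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browser);             // e.g. "chrome", "firefox"
        caps.setCapability("platform", platform); // e.g. "WINDOWS", "LINUX"
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), caps); // hypothetical grid hub
        // Emulate the target resolution before the test starts.
        driver.manage().window().setSize(new Dimension(width, height));
        return driver;
    }
}
```

    A parameterized runner (TestNG or JUnit) can then feed every generated combination into the same test method.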

    Having worked as a solo (or two-person team) automation engineer for 4 years across more than 5 projects, these are my tools and techniques for making your automated test suite not only maintainable but adaptable, with minimal overhead, to any device you need to test.

  • Jason Watt - Challenges of the Mobile Cloud

    Jason Watt
    Senior Software Engineer
    Salesforce
    2 years ago
    45 mins
    Talk
    Intermediate

    Creating a mobile app is the new cross-platform problem. The major mobile platforms gear their development toolchains toward individuals and their workstations. But what if you want to introduce a CI solution to this environment? What if your app is launching on more than one platform and there's a team of 20+ developers working on it? What if your tests are more than just Selenium-based?

    This is normally where you would look to the cloud for scale, but mobile presents a ton of challenges in doing so. Come and learn from some of the challenges and pitfalls I've encountered while working toward this goal.