From hours to minutes: How to write test cases for a faster regression suite

Virtual Platform · Sep 11th, 04:00 - 04:45 PM IST · Online Meeting 1 · 48 Interested

Teams that release multiple times a day cannot afford a regression suite that runs into hours. Most of these teams run their entire suites in under 15 minutes. While infrastructure and configuration help bring this number down, the largest gains come from writing test cases built to cope with such a fast-running setup.

In this session, I’ll talk about:

  1. How to identify test cases that need refactoring

  2. How to write atomic test cases

  3. How to use Selenium locators smartly

  4. Using product APIs to switch between user states

  5. Using the right Selenium waits

  6. Parallel execution and queuing


Outline/Structure of the Talk

Intro [5 mins]

Writing faster cases [15 mins]

Ideal test case characteristics and examples of implementing them (sample content shared at the end):

  1. Repeatability

  2. Reusability

  3. Accuracy

  4. Traceability

  5. Atomicity

How to determine cases which need refactoring:
Here I will talk about open-source tools you can use to identify cases that take longer than normal to run.
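As a taste of the idea (pytest, for instance, has a built-in `--durations=N` flag that prints the slowest tests), here is a minimal, dependency-free sketch of flagging refactoring candidates from measured durations. The test names and timings are illustrative:

```python
# Flag tests that exceed a duration threshold, similar in spirit to
# pytest's --durations report. Names and timings below are made up.
def slowest_tests(durations, threshold_s=5.0):
    """Return (name, seconds) pairs over the threshold, slowest first."""
    slow = [(name, secs) for name, secs in durations.items() if secs > threshold_s]
    return sorted(slow, key=lambda pair: pair[1], reverse=True)

measured = {
    "test_signin": 2.1,
    "test_checkout_flow": 41.7,   # clear candidate for refactoring
    "test_search": 1.3,
    "test_add_to_cart": 9.8,
}
for name, secs in slowest_tests(measured):
    print(f"{name}: {secs:.1f}s")
```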

How to use locators smartly

I will present a statistical comparison of locating elements by ID, name, and CSS selector. We’ll compare the speed of each and see which is the best option.
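Selenium expresses these strategies as `(By.ID, ...)`, `(By.NAME, ...)`, and `(By.CSS_SELECTOR, ...)` tuples. As a dependency-free sketch of "use the fastest locator available", here is a preference-ordered fallback; the element attributes are hypothetical:

```python
# Prefer the fastest, most stable locator strategy an element supports.
# The ordering reflects the common finding that ID lookups are quickest;
# in real Selenium code these would be (By.ID, value)-style tuples.
PREFERENCE = ("id", "name", "css")

def pick_locator(attrs):
    """Given the attributes an element exposes, return (strategy, value)."""
    for strategy in PREFERENCE:
        if attrs.get(strategy):
            return (strategy, attrs[strategy])
    raise ValueError("no usable locator")

# Hypothetical element that exposes both an ID and a CSS path: ID wins.
print(pick_locator({"id": "checkout-btn", "css": "div.cart > button"}))
```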

Faster test execution [15 mins]

Using product APIs smartly

The right set of exposed APIs can make functional test cases much lighter. Here, I will talk about how product APIs can set up the right user states, allowing UI tests to run much faster.

For example, if I have already tested the UI flow up to the checkout page for one payment method, I do not need to drive that UI again. I can use the right APIs to recreate the sign-in and checkout state directly, so the only part the test runs through the UI is the new payment method I want to test.

Using the right Selenium waits

I’ll talk about the different types of waits and how to use them in the right places. I’ll also show some open source tools you can use to configure these properly.
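Selenium's explicit waits (`WebDriverWait(driver, timeout).until(...)`) poll a condition instead of sleeping for a fixed time. As a dependency-free sketch of that same polling idea, with an illustrative condition that "appears" on the third poll:

```python
import time

# A minimal polling wait in the spirit of Selenium's WebDriverWait:
# retry a condition until it returns a truthy value or time runs out.
def wait_until(condition, timeout=5.0, poll=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical "element" that becomes available on the third poll.
calls = {"n": 0}
def element_loaded():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(element_loaded, timeout=2.0, poll=0.01))
```

Unlike a hard-coded `time.sleep(5)`, this returns as soon as the condition holds, which is exactly where suites claw back minutes.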

Parallel test execution

Here I’ll talk about how one can increase parallelization across test cases. Furthermore, I’ll discuss how to control Jenkins jobs better by configuring the master and its agents (slaves).


Finally, I’ll also talk a little about how you can use queuing smartly by keeping separate queues for your flaky tests and your stable test cases.
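In practice this is often a runner flag (e.g. pytest-xdist's `pytest -n 4`) plus markers to split the flaky queue out. As an illustrative, dependency-free sketch of the two ideas together, with stand-in test callables:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: run the stable queue with high parallelism and the
# flaky queue separately, so flaky cases never block the main suite.
# The test functions are stand-ins for real cases.
def run_queue(tests, workers):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: (t.__name__, t()), tests))

def test_search():      return "pass"
def test_add_to_cart(): return "pass"
def test_flaky_popup(): return "pass"   # would get retries in practice

stable = [test_search, test_add_to_cart]
flaky = [test_flaky_popup]

results = run_queue(stable, workers=4) + run_queue(flaky, workers=1)
print(results)
```

Keeping the flaky queue narrow (and retried) means a flaky failure re-runs cheaply instead of forcing a re-run of the whole suite.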

Conclusion [5 mins]


{Sample content for atomicity - this will give you an idea of the sort of examples I will present in the talk}

Atomicity in test cases: a test case should ideally test a single functionality. This helps pinpoint exact failures and keeps test cases fast and small. It also reduces flakiness and maintenance effort.


Consider a flow where a user purchases a product from a shopping website. If such a test were done manually, the steps involved would be as follows:

Search for the product
Add to cart
Enter card details and purchase

The above scenario can be split into smaller (atomic) tests as below:
1. Test "SignUp" / Test "SignIn"
2. Test "Search"
3. Test "Add to Cart"
4. Test "Purchase"

To test purchase scenarios only, one can simulate the user's state directly. Then, if there is an issue with the signup functionality, it won’t affect the execution of the other tests.
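The split above can be sketched as follows: each test covers one functionality and seeds its prerequisite state directly instead of driving the earlier UI steps. The `seed_state` helper and the test bodies are illustrative stand-ins:

```python
# Sketch of atomic tests: one functionality per test, preconditions seeded
# rather than re-driven through the UI. All names here are hypothetical.
def seed_state(**state):
    base = {"signed_in": False, "cart": []}
    base.update(state)
    return base

def test_search():
    return "laptop" in ["laptop", "phone"]        # search works on its own

def test_add_to_cart():
    s = seed_state(signed_in=True)                # sign-in seeded, not tested
    s["cart"].append("laptop")
    return s["cart"] == ["laptop"]

def test_purchase():
    s = seed_state(signed_in=True, cart=["laptop"])
    return s["signed_in"] and len(s["cart"]) > 0  # purchase precondition met

print(all([test_search(), test_add_to_cart(), test_purchase()]))
```

Because each test builds its own state, a signup failure surfaces only in the signup test; search, cart, and purchase keep running.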

Learning Outcome

This session will have takeaways that you can start using the next day. From atomic test cases to switching between user states, I will discuss small wins that can make your regression suite run faster and leaner.

Target Audience

QA/QA Managers/Engineering Managers


Submitted 3 years ago

  • Praveen Umanath

    Praveen Umanath - State-of-the-art test setups: How do the best of the best test?

    20 Mins

The best engineering teams release code hundreds of times a day. This is supported by a test setup that is not just fast, but robust and accurate at the same time.

We look at (anonymized) data from millions of tests running on BrowserStack to identify the very best test setups. We also analyze the testing behavior of these companies: how they test, how frequently they test, and how many device-browser-OS combinations they test on. Do they gain speed by running more parallels or by leaner test setups?

Finally, we see how these steps help these teams test faster and release continuously, and how this ties in to the larger engineering strategy.

  • Chitvan Singh

    Chitvan Singh - Are you measuring your QA team's efficiency correctly?

    20 Mins

    'Quality with speed' is something everyone aspires to. But when it comes to QA teams, how do you define quality and how do you measure speed? Every QA team aims to deliver high-quality releases as soon as they can, but how do you define success for them?

In this session, we will talk about easily actionable ways to measure the effectiveness of testing teams: quality, efficiency, and impact. We will look at key testing metrics and performance indicators around defects, test cases, and test economics, and see how these numbers line up with company objectives and other softer aspects of organizational dynamics.