When it comes to most things in life, people tend to think more is better. But does this maxim hold true for automated testing? Should you test every possible browser/OS combination with every functional workflow just because an executive thinks it's a good idea? Does this mean you need to build the biggest Selenium Grid you can to cover as many OS and browser combinations as possible, or perhaps leverage a third-party infrastructure solution?
Testing on as many platforms as possible may seem like the obvious approach to test execution, but it is not always the best one. The best approach is to test strategically, not to test indiscriminately just because you can. Sometimes that means going big and running at a massively parallel scale; other times it does not.
When deciding how much to parallelize your tests, there are several factors to weigh: how well your framework supports parallel execution, how robust your execution environment is, and how much load your non-production environments can handle. All of these factors shape the right degree of parallelization.
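One framework-level consideration worth making concrete is test independence: tests can only fan out across workers safely if they share no mutable state. The sketch below illustrates this idea with Python's standard `concurrent.futures`; the test names and fixture data are purely illustrative, not from any particular framework.

```python
# Minimal sketch: independent tests with no shared mutable state
# can be distributed across workers safely. Names and fixtures
# here are hypothetical, for illustration only.
from concurrent.futures import ThreadPoolExecutor

def test_login():
    # each test builds its own fixture data; nothing is shared
    session = {"user": "alice", "authenticated": True}
    assert session["authenticated"]
    return "test_login: pass"

def test_checkout():
    cart = {"items": ["book"], "total": 12.99}
    assert cart["total"] > 0
    return "test_checkout: pass"

def run_parallel(tests, workers=2):
    """Run each test callable on its own worker thread."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: t(), tests))

if __name__ == "__main__":
    for result in run_parallel([test_login, test_checkout]):
        print(result)
```

The same property is what plugins like pytest-xdist or TestNG's `parallel` setting rely on: if two tests touch the same account, database row, or global fixture, adding workers introduces flakiness rather than speed.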
I will discuss how to optimize test execution parallelization, covering both the considerations and tradeoffs involved and how to implement parallelized testing in common frameworks.
Topics that I will cover include:
- Use of Google Analytics and other site data to drive platform testing needs
- Test structure & framework choice and their impact on running tests in parallel
- Situations when massively parallel testing is appropriate & pitfalls of over-parallelization
- Determining the best approaches and coverage models for unit, smoke, integration, and regression testing
- A brief demonstration of parallelization approaches in several common frameworks, putting theory into action
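As a taste of how the first topic connects to the rest, the sketch below selects a platform matrix from usage data, such as a browser/OS breakdown exported from Google Analytics, and fans the same suite out across the surviving combinations in parallel. The traffic shares, the 5% cutoff, and the function names are all assumptions for illustration, not a prescribed format.

```python
# Hypothetical sketch: use analytics data to decide which
# browser/OS combinations are worth testing, then run them in
# parallel. Traffic shares below are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

USAGE = [
    {"browser": "Chrome",  "os": "Windows", "share": 0.52},
    {"browser": "Safari",  "os": "macOS",   "share": 0.21},
    {"browser": "Firefox", "os": "Windows", "share": 0.09},
    {"browser": "IE8",     "os": "XP",      "share": 0.01},
]

def select_platforms(usage, min_share=0.05):
    """Keep only the combinations that carry enough traffic
    to justify the execution cost."""
    return [u for u in usage if u["share"] >= min_share]

def run_suite(platform):
    # stand-in for dispatching the real suite to a grid node
    return f"suite passed on {platform['browser']}/{platform['os']}"

def run_matrix(platforms, workers=4):
    """Run the suite against each platform on its own worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_suite, platforms))

if __name__ == "__main__":
    targets = select_platforms(USAGE)
    for line in run_matrix(targets):
        print(line)
```

The design choice worth noticing is that the matrix is data-driven: when your analytics show a platform's share dropping below the threshold, it falls out of the matrix automatically instead of lingering in the suite forever.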