Reinforced test selection using Recommender Systems

Problem Statement

One of our enterprise Oracle customers has more than 30,000 UI and API regression tests for end-to-end application integration testing. Depending on the degree of parallelization, a full run of these tests takes roughly 11 to 12 hours.

With a full regression cycle taking that long, the objective was to build a working AI model that identifies the test scripts most likely to fail and schedules them early, alongside the most business-critical automated tests.

Solution

Selecting the Selenium test scripts most likely to detect application defects is difficult because of uncertainty about the impact of committed code changes and because many traceability links between code and automated tests are broken. The design goal was to automatically select and prioritize test scripts in the CI tool so as to minimize the round-trip time between a code commit and developer feedback on failed test scripts.

In our DevOps environment, where new test scripts are created and obsolete ones are deleted constantly, the reinforcement learning method learns to rank error-prone test scripts higher, guided by a reward function and by observing previous test cycles from historical data. Applying these techniques to the extracted data showed that reinforcement learning enables adaptive, automatic test script selection and prioritization in CI and automated regression testing.
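
A minimal sketch of this reward-driven loop, assuming a simple score-per-test agent and a reward that favors failures detected early in a cycle; the test names, reward shape, and learning rate are illustrative, not the production implementation:

```python
# Illustrative sketch: a reward-driven prioritization agent for CI test cycles.
# Test names, the reward shape, and the learning rate are hypothetical.

class TestPrioritizationAgent:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.scores = {}  # learned priority per test script

    def prioritize(self, tests):
        # Rank by learned score; unknown (new) tests go first to gather data.
        return sorted(tests, key=lambda t: self.scores.get(t, float("inf")), reverse=True)

    def observe_cycle(self, ordered_tests, failed_tests):
        # Reward a test for detecting a defect, with a bonus for running early.
        n = len(ordered_tests)
        for position, test in enumerate(ordered_tests):
            reward = (1.0 - position / n) if test in failed_tests else 0.0
            old = self.scores.get(test, 0.0)
            self.scores[test] = old + self.learning_rate * (reward - old)

# Replay historical CI cycles so the agent learns which tests tend to fail.
agent = TestPrioritizationAgent()
history = [
    (["login_test", "checkout_test", "report_test"], {"checkout_test"}),
    (["checkout_test", "login_test", "report_test"], {"checkout_test", "report_test"}),
]
for executed_order, failures in history:
    agent.observe_cycle(executed_order, failures)

print(agent.prioritize(["login_test", "checkout_test", "report_test"]))
# -> ['checkout_test', 'report_test', 'login_test']
```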

As the first step, we created a predictive model that estimates the probability of each test failing on a newly proposed build. Instead of defining the model manually, we trained it on a large data set of test results from historical builds and releases using a standard machine learning technique, gradient-boosted decision trees, with the features listed below.

Features:

  • Code changes based on build metadata
  • Code owner information
  • Historical test runs
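
A minimal sketch of training such a model with scikit-learn's gradient-boosting classifier; the feature encoding and the synthetic data below are illustrative placeholders, not the production feature pipeline:

```python
# Illustrative sketch: training a gradient-boosted model to predict test failure
# from build-metadata, code-owner, and test-history features. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [files_changed_in_build, owner_recent_failure_rate,
#            test_recent_failure_rate, runs_since_last_failure]
X = np.array([
    [12, 0.20, 0.30, 1],
    [ 3, 0.05, 0.00, 40],
    [25, 0.35, 0.50, 2],
    [ 5, 0.10, 0.05, 15],
    [18, 0.25, 0.40, 3],
    [ 2, 0.02, 0.01, 60],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the test failed on that build

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# Estimate the failure probability of a test on a new build.
new_build_features = np.array([[20, 0.30, 0.45, 2]])
failure_probability = model.predict_proba(new_build_features)[0, 1]
print(f"Estimated probability of failure: {failure_probability:.2f}")
```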

With this model, we can analyze a particular code change, find all potentially impacted tests that transitively depend on the modified files, and estimate the probability of each of those tests detecting a regression introduced by the change. Based on those estimates, the system selects the tests that are most likely to fail for that change. The diagram below shows which tests (in blue) would be chosen for a change affecting two files, where the likelihood of each candidate test is represented by a number between zero and one.
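
A minimal sketch of that selection step, assuming a hypothetical file-to-test dependency graph and failure probabilities already produced by the model; the names and the 0.5 threshold are illustrative:

```python
# Illustrative sketch: find tests transitively impacted by changed files,
# then keep those whose estimated failure probability exceeds a threshold.
from collections import deque

# Hypothetical reverse dependency graph: file/module -> artifacts that depend on it.
depends_on_me = {
    "payment.py": ["order_service.py", "checkout_test"],
    "order_service.py": ["checkout_test", "invoice_test"],
    "login.py": ["login_test"],
}

def impacted_tests(changed_files, graph, known_tests):
    # Breadth-first traversal collects everything reachable from the change.
    seen, queue = set(changed_files), deque(changed_files)
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen & known_tests

failure_probability = {"checkout_test": 0.82, "invoice_test": 0.35, "login_test": 0.05}

candidates = impacted_tests({"payment.py"}, depends_on_me, set(failure_probability))
selected = sorted((t for t in candidates if failure_probability[t] >= 0.5),
                  key=failure_probability.get, reverse=True)
print(selected)  # -> ['checkout_test']
```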

In sum, our recommender system, combined with reinforcement learning algorithms, identifies the highest-priority test scripts at runtime: the model moves test scripts up or down from the sequence that was set at the beginning of the cycle.
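
A minimal sketch of such runtime reordering, assuming a simple weighted combination of the predicted failure probability and a business-criticality score; the weights and scores are illustrative:

```python
# Illustrative sketch: reorder the initial test sequence at runtime by combining
# predicted failure probability with business criticality. Weights are hypothetical.
def reorder(initial_sequence, failure_probability, criticality, w_fail=0.7, w_crit=0.3):
    def rank(test):
        return (w_fail * failure_probability.get(test, 0.5)
                + w_crit * criticality.get(test, 0.0))
    return sorted(initial_sequence, key=rank, reverse=True)

initial_sequence = ["report_test", "login_test", "checkout_test"]
failure_probability = {"report_test": 0.10, "login_test": 0.05, "checkout_test": 0.82}
criticality = {"report_test": 0.2, "login_test": 0.9, "checkout_test": 0.8}

print(reorder(initial_sequence, failure_probability, criticality))
# -> ['checkout_test', 'login_test', 'report_test']
```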


Outline/Structure of the Experience Report

1) Common pitfalls in test execution strategy
2) Selenium test traceability with App Source Code (Inspection Bot)
3) History-based test case prioritization schedules (Rule Engines)
4) Introduction to Reinforcement Learning Algorithm
   a) Reward Functions
   b) Prioritization using policy
   c) State to Action (Memory representation)
5) Scheduling using Reinforcement agents
6) Integration with CI
7) Metrics, Results -> Recommender Systems
8) Our present state and future roadmap

Learning Outcome

The learning outcomes would be the following:

  • Implementation of reinforcement learning algorithms in attendees' existing test suites (in any language)
  • Automated collection of test results and how to apply them to machine learning
  • Automated prioritization/exclusion of tests based on their historical information

This can help attendees take their existing projects to the next stage of test automation.

Target Audience

Anyone in the IT industry

Prerequisites for Attendees

None in particular
