Testing a Moving Target: How Do We Test Machine Learning and Adaptive Systems?
Software testing rests on the assumption that for every given input there is a defined, unique output: if I enter A, the system will always return B. But what if software isn't supposed to behave that way? Machine learning and adaptive systems are increasingly used in ecommerce recommendation engines, predictive analytics, big data mining, and a host of other everyday applications. Some of these applications may seem trivial, yet ecommerce sites are increasing their sales by sifting customer data to find relationships between the products people buy, and big data trends are being used to predict the likelihood of failure in networks, automobiles, aircraft, and public transportation systems.
This presentation examines the challenges of testing systems that aren't deterministic, that learn through experience, and that adapt their results based on incoming data. It explains why traditional testing techniques can't be applied to these systems, and looks at strategies for testing them and measuring their quality. Finally, it considers the challenge of determining what constitutes a defect in such a system, and how those defects might be analyzed and diagnosed.
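To make the deterministic premise above concrete: a traditional test asserts one exact output per input, while a learned or noisy system calls for asserting statistical properties over many runs. The sketch below is illustrative only; `noisy_model` is a hypothetical stand-in for a learned system, not code from the presentation.

```python
import random

def deterministic_add(a, b):
    return a + b

# Traditional test: one input, one exact expected output.
assert deterministic_add(2, 3) == 5

# Hypothetical stand-in for a learned system: its output
# varies from run to run around an underlying trend.
def noisy_model(x, rng):
    return 2 * x + rng.gauss(0, 0.1)

# Statistical test: assert a property of many outputs
# (here, that the mean lands within a tolerance) instead
# of exact equality on a single output.
rng = random.Random(42)
predictions = [noisy_model(10, rng) for _ in range(1000)]
mean = sum(predictions) / len(predictions)
assert abs(mean - 20) < 0.05
```

The tolerance and sample size are the tester's quality criteria here; choosing them well is part of what makes testing such systems hard.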
Outline/Structure of the Session
- What are machine learning and adaptive systems?
- How are these systems evaluated?
- Challenges in testing these systems
- What constitutes a bug?
- Summary and conclusions
Learning Outcomes
- What kinds of systems produce nondeterministic results
- Why we can't test these systems using traditional techniques
- How we can assess, measure, and communicate quality for learning and adaptive systems
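One concrete strategy relevant to the outcomes above is metamorphic testing: when no exact expected output exists, assert a relation that must hold between outputs for related inputs. A minimal sketch, with `recommend` as a hypothetical stand-in for a learned recommender:

```python
# Metamorphic testing sketch: we cannot say what the "correct"
# recommendation is, but we can assert a relation between runs.
def recommend(purchases):
    # Hypothetical stand-in for a learned recommender:
    # suggests the most frequently purchased item.
    counts = {}
    for item in purchases:
        counts[item] = counts.get(item, 0) + 1
    return max(counts, key=counts.get)

# Metamorphic relation: permuting the input order
# must not change the recommendation.
history = ["book", "pen", "book", "lamp"]
assert recommend(history) == recommend(list(reversed(history)))
```

A failed metamorphic relation signals a defect even though no single output was ever "wrong" in isolation, which is one answer to the question of what constitutes a bug in these systems.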
Target Audience
Data scientists, machine learning architects and developers, and testers
Prerequisites for Attendees
A basic understanding of machine learning and adaptive systems, the technologies they use, and a high-level understanding of how they are designed and built.