Automating The Known Unknown
A common root cause of failures in end-to-end (e2e) automated tests is unpredictable backend API behaviour, which makes test executions flaky. But if you think about it, you should also be testing the impact of such unpredictable backend API behaviour on the app under test.
The question then remains: how do you implement a deterministic e2e test that validates how the app handles such unpredictable situations?
In this talk, we will take you on the journey in which a team at Jio created a testing ecosystem for running e2e tests against our mobile app using Appium. At relevant points in the flow, we set dynamic, contextual expectations on the Specmatic stub to simulate the desired backend behaviours.
Learning Outcome
- Set up an environment that allows you to focus on your system components while stubbing out external systems
- Test the expected and (known) unexpected scenarios without having to actually change the state of the external system: positive, negative, edge-case, boundary-value, and similar types of test scenarios
- Dynamically control the external API's responses using Specmatic to increase test coverage and shift left
- Use Specmatic with OpenAPI contracts to increase test coverage and catch integration errors before reaching the integration environment
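As a rough sketch of how the dynamic expectations described above can be set mid-test: Specmatic's stub server accepts expectation payloads over HTTP on its `/_specmatic/expectations` endpoint (per Specmatic's documentation). The stub URL, the example path `/account/balance`, and the helper names below are illustrative assumptions, not the team's actual code.

```python
import json
from urllib import request

# Assumed address of a locally running Specmatic stub (specmatic stub api.yaml)
SPECMATIC_STUB_URL = "http://localhost:9000"


def build_expectation(method, path, status, body):
    """Build a Specmatic dynamic-expectation payload: a request matcher
    plus the canned response the stub should return for it."""
    return {
        "http-request": {"method": method, "path": path},
        "http-response": {"status": status, "body": body},
    }


def set_expectation(expectation):
    """POST the expectation to the stub so subsequent matching requests
    from the app under test get the simulated response."""
    req = request.Request(
        SPECMATIC_STUB_URL + "/_specmatic/expectations",
        data=json.dumps(expectation).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)


# Example: just before the Appium step that exercises error handling,
# make the stubbed backend answer with HTTP 503.
flaky_backend = build_expectation(
    "GET", "/account/balance", 503, {"error": "service unavailable"}
)
# set_expectation(flaky_backend)  # requires a running Specmatic stub
```

In an Appium test, a call like `set_expectation(...)` would sit between UI steps, so each scenario deterministically sees the backend behaviour (timeout, error, edge-case payload) it is meant to validate.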
Target Audience
Quality Engineers, Tech Leads
Submitted 1 year ago
People who liked this proposal, also liked:
- Shannon Lee - How AI Can Act as Top Layer of Appium Automation Runs to Analyze Text and Visual Assertions from a Baseline Manual Session
20 Mins
Talk
Beginner
Appium holds a lot of value in the effort to achieve mobile application automation, allowing more granular control than scriptless automation. However, programmatically coding text and visual assertions in Appium scripts can be tedious and time-consuming, and yields script bloat that becomes difficult to maintain as an application changes from release to release. By applying an AI-driven engine on top of Appium automation and providing a baseline session, AI can return text and visual discrepancies far faster than the original programmatic attempt, with less effort and maintenance. This allows quality engineers not only to catch more defects within their automation, but also to automate additional tests more quickly while still gaining the advantages of Appium automation.
- parveen khan - A peek into observability from testers lens
45 Mins
Case Study
Beginner
The term 'Observability' is heard often, yet it is still quite new. But what does it mean?
Is it just another name for monitoring?
In the current era, organizations are building applications with more complex architectures such as blockchain, distributed systems, and microservices. The job of maintaining these systems and ensuring they work as expected has become challenging. Gone are the days when testers could rely on the UI to validate an application; now it is all about what happens under the hood. I worked on a distributed system where no one had any idea of what was going on and why there were production issues. We had some monitoring and logging in place, but we had no clue where, how, and what to look for whenever there was a problem.
Join this session, where I discuss my journey with Observability. I will share how I discovered various insights about my system using this approach, how I learned this technique, and how I implemented it within my engineering team.
- parveen khan - Let's Shift left and right in the microservices world
45 Mins
Case Study
Beginner
In the current era, organizations are building applications with more complex architectures such as blockchain, distributed systems, and microservices. The job of maintaining these systems and ensuring they work as expected has become challenging. Gone are the days when testers could rely on the UI to validate an application; now it is all about what happens under the hood and how far you shift testing. I worked on a team where we followed DevOps and started testing as early as possible by shifting left. But that wasn't enough for the quality of the product or for keeping our users happy. That's where we changed our process and started taking smaller steps into shifting right.
We had some monitoring and logging in place, but we had no clue where, how, and what to look out for whenever there was a problem.
Join this session, where I discuss my journey with the shift-left and shift-right approaches. I will share how these approaches have helped our team.