What to do when tests fail
What to do when your tests fail? Read on...
Functional automation is known to be flaky: a test passes sometimes & fails at other times. The failure can be attributed to multiple factors. We need to find the root cause & then work towards fixing it to increase the reliability of our automated tests.
In this talk, we will not only discuss the causes that lead to test failures, but also talk about preventing, detecting early & fixing these failures for good.
We will discuss some common test failure causes such as locator changes, browser compatibility issues, coding bloopers, etc.
You will learn how to get alerted early about any test failures. We will discuss topics such as running tests on in-development builds to get early feedback, triggering Slack/SMS/email notifications with failure details for immediate redressal, & many others.
You will learn how to prevent failures by building robust locators, handling exceptions, making use of APIs for test data setup, building atomic tests, making use of waits, retrying your failed tests, rebuilding your Jenkins jobs automatically based upon a failure percentage threshold & so on.
By the end of this talk, you will be confident about dealing with your failing tests!
Outline/Structure of the Talk
1. Problem Statement [5 minutes]: We will discuss how flaky tests negatively impact time, money & trust in functional UI test automation. We will give an overview of the possible causes of test failure, how we can fix them at an early stage & how to prevent them in the future.
2. Test failure causes [10 minutes]: We will cover the reasons for test case failures, such as locator changes, browser compatibility issues, coding blunders, test infra issues, failures due to dynamic applications, concurrent tests & issues related to test data.
3. How to reduce flaky tests & prevent undesired test failures [10 minutes]: We will discuss strategies for reducing flaky tests & preventing them by using best practices such as:
- Building robust locators
- Exception handling
- Making use of APIs for test data setup
- Building atomic tests
- Making use of waits
- Retrying your failed tests
- Rebuilding your Jenkins jobs automatically based upon a test failure percentage threshold
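To make the last two practices concrete, here is a minimal Python sketch of a retry wrapper for flaky tests and a rebuild gate for CI. It is an illustration under assumptions, not code from the talk: the retry here only catches `AssertionError`, and the rebuild policy (re-trigger the job only when the failure ratio is small enough to look like flakiness rather than a real regression) and its 20% cutoff are example choices.

```python
import time

def retry(times=3, delay=0.0):
    """Re-run a flaky test function up to `times` attempts before giving up."""
    def wrap(fn):
        def inner(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:  # only retry test-level failures
                    last_error = exc
                    time.sleep(delay)
            raise last_error  # exhausted all attempts: surface the real failure
        return inner
    return wrap

def should_rebuild(failed, total, threshold=0.2):
    """Re-trigger the Jenkins job only when the failure ratio is at or below
    the threshold; above it, the failures likely indicate a real regression."""
    return total > 0 and (failed / total) <= threshold
```

A scheduler or post-build script would call `should_rebuild` with the counts parsed from the test report, and `@retry` would wrap individual test methods (test runners such as TestNG or pytest offer equivalent built-in retry/rerun plugins).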
4. Fail fast. Fix faster. [10 minutes]: This section will cover the following topics:
- Benefits of running tests on in-development builds to identify the impact on existing automation scripts & to find real issues early.
- Triggering Slack/SMS/email notifications with failure details for immediate redressal.
- Benefits of building a visualization of test executions over time.
- Detailed reports to pinpoint the exact failure cause
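As an illustration of the notification idea, here is a small Python sketch that formats a Slack incoming-webhook payload carrying the failure details. The message layout, emoji and example URLs are assumptions for illustration; only the webhook mechanism itself (a JSON POST to a Slack-provided URL) is standard.

```python
import json
import urllib.request

def build_failure_alert(test_name, error, build_url):
    # Compose a Slack incoming-webhook payload with the failure details,
    # so the on-call engineer can jump straight to the failing build.
    return {
        "text": (
            f":red_circle: *{test_name}* failed\n"
            f"> {error}\n"
            f"<{build_url}|Open build>"
        )
    }

def post_alert(webhook_url, payload):
    # POST the payload to the Slack webhook (network call; not run here).
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```

The same payload-building step adapts easily to SMS or email gateways; only the transport function changes.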
5. Learning Outcome [5 minutes]: We will summarize the learnings from the session & discuss the benefits of following these best practices. At the end of the session, you will have a sound understanding of the causes of test failures, their prevention & fixes, along with ways to detect issues early in the cycle, which helps build robust automation aimed at making quality releases.
6. Q&A [5 minutes]
Attendees leaving this session will have a better understanding of some common test failure causes. They will know how to be alerted about failures at the earliest, in a way that helps quickly identify the root cause. Most importantly, they will know how to design trustworthy, robust scripts instead of flaky ones.
Target Audience
Everyone involved in automated testing
Prerequisites for Attendees
A basic understanding of automation testing is good to have.
Submitted 10 months ago
People who liked this proposal also liked:
Tarun Narula / Surbhit Aggarwal - Reporting, Tracking & Analyzing Play Store Reviews using Automation
Tarun Narula, Technical Test Manager, Naukri.com
Surbhit Aggarwal, Lead Testing Analyst, Naukri.com
"Buggy App! Why don't you guys hire some good QA. Your app sucks."
How many of you have written a negative Play Store review? Did you get a response back? In how much time? Was your query resolved?
How would you like it if you got a quick response to your comment (say, within 15 minutes)? Your query is then tracked & you are provided a quick resolution. That would be fabulous, right? Absolutely!
By using some Google & JIRA APIs, we have automated the logging of these reviews into JIRA. For each negative review, a JIRA ticket is created with all the details, such as the author's comment, name, star rating, app version, a link to the original review & device details such as model number, manufacturer, OS version & RAM. These details help in analyzing the issue at hand.
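A sketch of that review-to-ticket mapping might look like the following Python. The review field names and the JIRA payload shape are simplified assumptions for illustration, not the actual Google Play Developer API or JIRA REST schemas, and the 3-star cutoff for "negative" is likewise an example choice.

```python
def review_to_jira_fields(review, project_key="APP"):
    # Convert a negative Play Store review into a JIRA issue-create payload.
    # `review` keys and the "fields" structure are illustrative placeholders.
    if review["star_rating"] > 3:
        return None  # only negative reviews become tickets
    description = "\n".join([
        f"Author: {review['author']}",
        f"Comment: {review['comment']}",
        f"Star rating: {review['star_rating']}",
        f"App version: {review['app_version']}",
        f"Device: {review['device_model']} ({review['manufacturer']}), "
        f"OS {review['os_version']}, RAM {review['ram']}",
        f"Original review: {review['link']}",
    ])
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"Play Store review ({review['star_rating']}*) "
                       f"by {review['author']}",
            "description": description,
        }
    }
```

A periodic job would fetch new reviews, run each through this mapping & POST the non-`None` payloads to JIRA's issue-creation endpoint.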
These reviews are then assigned to the concerned department, which can take action on them & provide a resolution by corroborating the review details with internal system logs, verifying app functionality/crashes on that particular device type & reproducing the use case.
The support team also attributes each review to a review category, which helps us find areas of concern or improvement for the app. We also check whether the issue has been reported through other sources, such as in-app feedback channels, emails or calls, to determine if the problem mentioned in the review is specific to one user or is being faced by others too. All of this helps us provide a quick resolution to the user.
In this talk, you will learn all about how to build an ecosystem for app reviews by reporting, tracking & analyzing them.
As a bonus for being a terrific audience, you will learn how you can reply to a user's review on the Play Store, automagically!
Sahib Babbar / Amrit Pal Singh - An extensible architecture design for cross-platforms/technologies to maximise product automation coverage
Sahib Babbar, Module Lead - Quality Assurance, 3Pillar Global
Amrit Pal Singh, Senior QA, 3Pillar Global
"Are you working on a product where you're struggling to automate the module(s) which you actually think cannot be automated?"
Suppose you have a product running on different browsers, different OSes and/or different platforms, & you have written automation scripts in different technologies. These are the pain points we were going through:
- Test scripts that are automation candidates can't be automated
- Lower automation coverage
- Fewer auto-tests => more manual work
- Fewer auto-tests => more regression time
- No within-sprint automation
- An urge for a quick solution to cater to cross-technology & cross-platform automation scripts
So, based on the above points, we came up with an automation architecture that serves the purpose in the following manner:
- Able to add more automation coverage in another layer of testing.
- Able to make a call from any platform (mobile or web) with any technology, like Java, Swift, C#, etc.
- More automation coverage = less manual effort during regression or smoke testing.
Extensible Services Components:
- Application: the main Spring Boot application, with the following responsibilities:
  - Triggering auto-configuration
  - Component scanning
- Controllers: the REST controllers, with the following responsibilities:
  - Creating RESTful services for CRUD operations
  - Catering all requests from & responses to the external automation frameworks
- Services: the business-logic layer, where CRUD operations can be performed based on the exposed web-service calls. These operations can be performed on the product's existing APIs and/or database using the data-access layer. Key responsibilities:
  - Business use-case based CRUD operations on the product database
  - Business use-case based CRUD operations on the product's existing web services
  - Authenticating against the product internally, based on credentials supplied in the controller params
- DAO: the data-access layer, where CRUD operations can be performed on the product database.
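The described implementation is a Spring Boot (Java) service. Purely to illustrate the layering, here is a Python sketch of the controller/service/DAO split for the "unread a task" operation, with an in-memory set standing in for the product database; all class and method names are illustrative, not the actual codebase.

```python
class TaskDao:
    # Data-access layer: CRUD against the product database (in-memory here).
    def __init__(self):
        self._read_tasks = set()

    def mark_read(self, task_id):
        self._read_tasks.add(task_id)

    def mark_unread(self, task_id):
        self._read_tasks.discard(task_id)

    def is_read(self, task_id):
        return task_id in self._read_tasks


class TaskService:
    # Business-logic layer: wraps the DAO operations the test frameworks need.
    def __init__(self, dao):
        self._dao = dao

    def unread(self, task_id):
        self._dao.mark_unread(task_id)
        return {"taskId": task_id, "read": False}


class TaskController:
    # REST-controller layer: in Spring Boot this would be a @RestController;
    # here it simply dispatches the request to the service & wraps the reply.
    def __init__(self, service):
        self._service = service

    def put_unread(self, task_id):
        return {"status": 200, "body": self._service.unread(task_id)}
```

The key design point is that consumers only ever talk to the controller over HTTP, so any framework in any language can reset product state without touching the database directly.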
Some of the actual problems & their solutions:
Problem #1: Alerts - "Dismiss for Today" flow
- We have to wait 24h to verify that the alerts section is displayed again
- We cannot run the same test on both platforms (web/mobile)
- We cannot run the same test twice in the same day
Solution: Un-suppress the suppressed dialog box by calling the extensible API, so that the automation does not fail on re-run.
Problem #2: Task read status - once a task is selected, it is marked as read & cannot be checked again.
Solution: Unread the task with a simple extensible-API call, with taskId as a parameter.
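From the consumer side, a framework in any language simply issues an HTTP call to the test-support service before re-running the test. Here is a Python sketch of building that request; the base URL, endpoint path and parameter name are illustrative assumptions, not the actual API.

```python
from urllib.parse import urlencode, urljoin

def build_unread_request(base_url, task_id):
    # Build the HTTP request a consumer framework (Java, .NET, Swift, ...)
    # would issue to the test-support service before re-running the
    # "task read status" test. Path & parameter name are placeholders.
    url = urljoin(base_url, "tasks/unread") + "?" + urlencode({"taskId": task_id})
    return "PUT", url
```

Since the contract is plain HTTP, each of the four consumer frameworks can implement this one-liner in its own language without sharing any code.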
Implementation and usage
We have implemented this on a product, & it is utilized by 4 external automation frameworks, as below:
- API Automation Framework (Java)
- Web Automation Framework (.Net)
- Android Mobile Automation Framework (Java)
- iOS Mobile Automation Framework (Swift)
We will walk through the implementation approach as:
- Producer: how we actually produce the HTTP response for a test scenario via an API call requested by the consumers.
- Consumer: how our consumers (the external automation frameworks) get the HTTP response & use it at their end to get the work done.
Time break-up and the speakers:
- Introduction to the case study: 3 min - Sahib
- Problem & solution: 5 min - Amrit
- Architecture diagram walk-through: 5 min - Sahib
- Demo: 5 min - Amrit (running the demo) / Sahib (explaining the demo)
- Q&A: 2 min (both)
Amrit Pal Singh - Is "FLAKINESS" hampering your test automation execution? No worries, "AI (test.ai)" is here.
Amrit Pal Singh, Senior QA, 3Pillar Global
Every time you see flaky scripts, you keep wondering: what should I do? Should I change my locator strategy, or should I use Thread.sleep ("wait a minute, should I really use this?" - a big pause in your mind)? Trust me, flaky scripts are the worst nightmares.
So here I will be sharing my journey of how I used test.ai in my Appium automation scripts & how I converted my flaky scripts to green.
Apart from this, I will also talk about how you can integrate this into your Appium automation framework, & how you can train this AI plugin according to your needs.
As a bonus for those who hate flakiness, I will also talk about some limitations of this plugin & where not to use it in your scripts.