Reporting, Tracking & Analyzing Play Store Reviews Using Automation
"Buggy App! Why don't you guys hire some good QA. Your app sucks."
How many of you have written a negative Play Store review? Did you get a response back? How quickly? Was your query resolved?
How would you feel if you got a quick response to your comment (say, within 15 minutes), your query was then tracked, and you were given a quick resolution? It would be fabulous, right? Absolutely!
Using some Google and JIRA APIs, we have automated the logging of these reviews into JIRA. For each negative review, a JIRA ticket is created with all the details, such as the author's comment, name, star rating, app version, a link to the original review, and device details such as model number, manufacturer, OS version, and RAM. These details help in analyzing the issue at hand.
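As an illustration, the review-to-ticket mapping could look like the sketch below. The `deviceMetadata` field names follow the Google Play Developer API's Reviews resource; the JIRA project key `"APP"` and the helper name are placeholders, not the talk's actual implementation.

```python
# Sketch: map a negative Play Store review (Google Play Developer API
# Reviews resource shape) to a JIRA "create issue" payload.
# Project key "APP" is a placeholder, not the speakers' real project.

def review_to_jira_payload(review, package_name, project_key="APP"):
    comment = review["comments"][0]["userComment"]
    device = comment.get("deviceMetadata", {})
    stars = comment["starRating"]
    description = "\n".join([
        f"Author: {review.get('authorName', 'unknown')}",
        f"Rating: {stars} star(s)",
        f"Comment: {comment['text']}",
        f"App version: {comment.get('appVersionName', 'n/a')}",
        f"Device: {device.get('manufacturer', '?')} "
        f"{device.get('productName', '?')}, "
        f"Android API {comment.get('androidOsVersion', '?')}, "
        f"RAM {device.get('ramMb', '?')} MB",
        f"Review link: https://play.google.com/store/apps/details"
        f"?id={package_name}&reviewId={review['reviewId']}",
    ])
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[Play Store] {stars}-star review {review['reviewId']}",
            "description": description,
        }
    }
```

The resulting dict can be POSTed as-is to JIRA's REST "create issue" endpoint; authentication and the HTTP call are left out of the sketch.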
These reviews are then assigned to the concerned department, which can act on them and provide a resolution by corroborating the review details with internal system logs, verifying app functionality/crashes on that particular device type, and reproducing the use case.
These reviews are also attributed to a review category by the support team, which helps us find areas of concern or improvement for the app. We also check whether the issue has been reported through other sources, such as in-app feedback channels or issues reported via email or calls, to determine whether the problem mentioned in the review is specific to one user or is being faced by other users too. All of this helps us provide a quick resolution to the user.
In this talk you will get to know all about how to build an ecosystem for app reviews by Reporting, Tracking & Analyzing them.
As a bonus for being a terrific audience, you will also learn how to reply to users' reviews on the Play Store, automagically!
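For the curious: the Google Play Developer API (androidpublisher v3) does expose a `reviews.reply` endpoint. A minimal sketch that only builds the request URL and body; authentication and the actual HTTP POST are omitted, and the function name is illustrative:

```python
# Sketch: build the request for the Google Play Developer API's
# reviews.reply endpoint (androidpublisher v3). The caller would POST
# `body` as JSON to `url` with an OAuth2-authorized client.

def build_reply_request(package_name, review_id, reply_text):
    url = (
        "https://androidpublisher.googleapis.com/androidpublisher/v3"
        f"/applications/{package_name}/reviews/{review_id}:reply"
    )
    body = {"replyText": reply_text}
    return url, body
```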
Outline/Structure of the Talk
1. Problem Statement
2. Implementation details and usage
3. Challenges faced and how we resolved them
4. The benefits - for the app, for users, and for the company
Learning Outcome
Attendees leaving this session will have a better understanding of how to build an ecosystem for app reviews by reporting them through JIRA & Play Store APIs and how to analyze & resolve the issue faster by integrating with existing processes & tools.
Target Audience
Anyone working on Android applications
Prerequisites for Attendees
Submitted 2 years ago
People who liked this proposal, also liked:
Tarun Narula (Technical Test Manager, Naukri.com) / Sandeep Yadav (Lead Testing Analyst, Infoedge) - What to do when tests fail
Sold Out!
What to do when your tests fail? Read on...
Functional automation is known to be flaky. A test passes sometimes & fails at other times. The failure can be attributed to multiple factors. We need to find the root cause & then work towards fixing it to increase the reliability of our automated tests.
In this talk, we will not only be discussing causes that lead to a test failure, but we will also talk about prevention, early detection & fixing these failures for good.
We will discuss some common test failure causes such as locator changes, browser compatibility issues, coding bloopers, etc.
You will get to know how you can get alerted early about any test failures. We will be discussing topics such as running tests on under development builds for getting early feedback, triggering slack/SMS/email notifications with failure details for immediate redressal and many others.
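One common way to wire up such notifications is to post a failure summary to a Slack incoming webhook after a run. A minimal, hypothetical payload builder; the job name, build URL, and failure details are placeholders, not the speakers' setup:

```python
# Sketch: build a Slack incoming-webhook payload summarising failed tests.
# `failures` is a list of (test_name, reason) pairs collected from a run.

def build_failure_alert(job_name, build_url, failures):
    lines = [f"*{job_name}*: {len(failures)} test(s) failed", build_url]
    lines += [f"- {name}: {reason}" for name, reason in failures]
    return {"text": "\n".join(lines)}
```

The returned dict would then be POSTed as JSON to the team's webhook URL; the same structure works for email or SMS bodies with trivial changes.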
You will get to know how to prevent failures by building robust locators, exception handling, making use of APIs for test data setup, building atomic tests, making use of waits, retrying your failed tests, rebuilding your Jenkins jobs automatically based upon a failure percentage threshold & so on.
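Several of these ideas, such as retrying failed tests, reduce to small utilities. A minimal sketch of a retry decorator for flaky test functions; the names and retry policy are illustrative, not the speakers' implementation:

```python
import functools
import time

def retry(times=2, delay=0.0):
    """Re-run a flaky test function up to `times` extra attempts,
    pausing `delay` seconds between attempts."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _attempt in range(times + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # a real suite might narrow this
                    last_error = exc
                    time.sleep(delay)
            raise last_error
        return wrapper
    return deco
```

In practice the same idea is usually provided by the test runner itself (e.g. rerun plugins), but a decorator like this makes the mechanism explicit.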
At the end of this talk, you will be confident about how to deal with your failing tests!
Sahib Babbar (Module Lead - Quality Assurance, 3Pillar Global) / Amrit Singh (SDET, HIKE) - An extensible architecture design for cross-platforms/technologies to maximise product automation coverage
Sold Out!
"Are you working for a product where you're struggling to automate the module(s) which you actually think can not be automated?"
Consider a product running on different browsers, different OSes, and/or different platforms, for which you have written automation scripts in different technologies. These are the pain points we face:
Test scripts that are candidates for automation can't be automated
Lower automation coverage
Fewer auto-tests => more manual work
Fewer auto-tests => more regression time
No in-sprint automation
A pressing need for a quick solution that caters to cross-technology and cross-platform automation scripts
Based on the above points, we came up with an automation architecture that serves the purpose in the following ways:
- Able to add more automation coverage in another layer of testing.
- Able to make a call from any platform (mobile or web) with any technology like Java, Swift, C#, etc.
- More automation coverage = less manual effort during regression or smoke testing.
Extensible Services Components:
- Application: the main Spring Boot application. Responsibilities:
  - Triggering auto-configuration
  - Component scanning
- Controllers: the REST controllers. Responsibilities:
  - Creating RESTful services for CRUD operations
  - Routing all requests and responses from the external automation frameworks
- Services: the business-logic layer, where CRUD operations are performed for the extended web-service calls. These operations run against the product's existing APIs and/or database via the data-access layer. Responsibilities:
  - Business use-case CRUD operations on the product database
  - Business use-case CRUD operations on the product's existing web services
  - Authenticating with the product internally, based on credentials supplied in the controller params
- DAO: the data-access layer, where CRUD operations are performed on the product database.
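The talk's stack is Spring Boot (Java); the Python sketch below only mirrors the controller → service → DAO layering described above, with a dict standing in for the product database. All class and field names are illustrative.

```python
# Sketch of the layered design: controller delegates to service,
# service delegates to DAO, DAO owns the (here: in-memory) storage.

class TaskDao:
    """Data-access layer: CRUD against the product database (a dict here)."""
    def __init__(self):
        self._rows = {}

    def save(self, task_id, row):
        self._rows[task_id] = row

    def find(self, task_id):
        return self._rows.get(task_id)

class TaskService:
    """Business-logic layer: use-case operations built on the DAO."""
    def __init__(self, dao):
        self._dao = dao

    def create_task(self, task_id, title):
        self._dao.save(task_id, {"taskId": task_id, "title": title, "read": False})
        return self._dao.find(task_id)

class TaskController:
    """REST-controller stand-in: maps a request dict to a service call."""
    def __init__(self, service):
        self._service = service

    def post_task(self, request):
        return self._service.create_task(request["taskId"], request["title"])
```

The point of the layering is that external automation frameworks only ever talk to the controller, so the service and DAO can change without breaking consumers.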
Some of the actual problems and the solution:
Problem #1: Alerts - "Dismiss for Today" flow
We have to wait 24 hours to verify that the alerts section is displayed again
Cannot run the same test on both platforms (web/mobile)
Cannot run the same test twice on the same day
Solution: un-suppress the dismissed dialog box by calling the extensible service API, so that the automation does not fail on re-run.
Problem #2: Task read status - once a task is selected, it is marked as read and cannot be checked again.
Solution: mark the task unread with a simple extensible API call, with taskId as a parameter.
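A minimal sketch of what the logic behind such a test-support reset endpoint might look like, with an in-memory dict standing in for the product database; the names are illustrative, not the speakers' actual API:

```python
# Sketch: an in-memory stand-in for the "unread task" helper endpoint.
# In the talk's setup this would be a Spring Boot REST controller backed
# by the product database; here a dict plays the database.

class TaskStateService:
    def __init__(self):
        self._read = {}          # taskId -> read flag

    def mark_read(self, task_id):
        self._read[task_id] = True

    def unread(self, task_id):
        # The test-support call resets state so the same test can re-run.
        self._read[task_id] = False
        return {"taskId": task_id, "read": False}

    def is_read(self, task_id):
        return self._read.get(task_id, False)
```

A consumer framework would hit the endpoint with the taskId before each run, restoring the precondition the test depends on.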
Implementation and usage
We have implemented this on a product, where it is utilized by 4 external automation frameworks:
- API Automation Framework (Java)
- Web Automation Framework (.Net)
- Android Mobile Automation Framework (Java)
- iOS Mobile Automation Framework (Swift)
We will walk through the implementation approach from two sides:
- Producer: how we produce the HTTP response for a test scenario via the API call requested by the consumers.
- Consumer: how our consumers (the external automation frameworks) get the HTTP response and use it at their end to get the work done.
Time break-up and the speakers:
- Introduction about the case-study 3min - Sahib
- Problem and solution 5min - Amrit
- Architecture diagram walk-through 5min - Sahib
- Demo 5min - Amrit (running the demo) / Sahib (explaining the demo)
- Q/A 2min (Both)
Amrit Singh (SDET, HIKE) - Is "flakiness" hampering your test automation execution? No worries, AI (test.ai) is here.
Sold Out!
Every time you see flaky scripts, you keep wondering what to do. Should I change my locator strategy, or should I use Thread.sleep ("wait a minute, should I really use this?" - a big pause in your mind)? Trust me, flaky scripts are the worst nightmares.
Here I will be sharing my journey of how I used test.ai in Appium automation scripts and how I turned my flaky scripts green.
Apart from this, I will also talk about how you can integrate it into your Appium automation framework, and how you can train this AI plugin according to your needs.
As a bonus for those who hate flakiness, I will cover some of its limitations and where not to use it in your scripts.