Deciphering the way data is tested: Automate the movement, transformation & visualization of data
What is the quality of data?
Is it good enough to be collected, consumed, and interpreted for business usage?
And how should we use this data?
Many more questions arise when a tester is involved in testing applications with big data, AI, IoT, and analytical solutions.
Ambiguity has always been a key challenge for testers - be it the ambiguous definition of requirements or unstable test environments. But testing a big data workflow adds a completely new level of uncertainty to a tester's life with modern technologies.
Data validation is simply verifying the correctness of data. The big data testing pipeline consists of horizontal workflows where data transformations occur continuously, managing a series of steps that process and transform the data. The obtained result can be settled into a database for analysis (machine learning models, BI reports) or act as an input to other workflows.
This session provides solutions to challenges faced while testing data for an application (with big data, IoT, a mesh of devices, and artificially intelligent algorithms) and with data analytics, such as:
- Lack of technical expertise and coordination
- Heterogeneous data format
- Inadequacy of data anomaly identification
- Huge data sets and a real-time stream of data
- Understanding the data sentiment
- Continuous testing and monitoring
The research employed an open-source solution for the implementation. Apache Kafka was used to gather batch data and streaming data (sensors/logs). Apache Spark Streaming consumed the data from Kafka in real time and carried out the validations in the Spark engine. Further in the workflow, the data was stored in Apache Cassandra and then configured in Logstash and Elasticsearch to generate real-time reports/graphs in Kibana. The proposed tool is generic as well as highly configurable, so it can incorporate any open-source tool you require for streaming, processing, or storing the data. The system includes configuration files where every detail of each dependent tool is appended and can be modified according to your needs.
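As a plain-Python illustration of the kind of record-level validation the Spark engine could apply to each micro-batch, here is a minimal sketch. The field names (`device_id`, `temperature`, `ts`) and value ranges are hypothetical assumptions, not part of the actual tool:

```python
# Hypothetical sketch of record-level validation rules; field names and
# the temperature range are illustrative assumptions.

def validate_record(record: dict) -> list:
    """Return a list of anomaly descriptions for one sensor record."""
    errors = []
    if not record.get("device_id"):
        errors.append("missing device_id")
    temp = record.get("temperature")
    if temp is None or not (-40.0 <= temp <= 125.0):
        errors.append("temperature outside expected sensor range")
    if record.get("ts") is None:
        errors.append("missing timestamp")
    return errors

def split_batch(batch: list) -> tuple:
    """Partition a micro-batch into clean records and quarantined records."""
    clean = [r for r in batch if not validate_record(r)]
    dirty = [r for r in batch if validate_record(r)]
    return clean, dirty
```

In the real pipeline, such checks would run inside a Spark Streaming job, with clean records written onward to Cassandra and anomalies routed to the reporting layer.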
This solution aims to analyze various key performance indicators (KPIs) for big data, such as data health check, downtime, time-to-market, throughput, and response time. The tool can be considered a pluggable solution that can efficiently drive big data testing and uplift data quality for further usage.
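To make the KPI idea concrete, a small sketch of how a few of the named metrics could be computed from validation results. The metric names follow the abstract; the exact formulas and inputs are assumptions for illustration:

```python
# Illustrative KPI computation over one monitoring window.
# Inputs and rounding choices are assumptions, not the tool's actual API.

def compute_kpis(total, failed, window_seconds, latencies_ms):
    """Derive data health, throughput, and average response time."""
    health = (total - failed) / total * 100 if total else 100.0
    throughput = total / window_seconds  # records per second
    avg_latency = sum(latencies_ms) / len(latencies_ms) if latencies_ms else 0.0
    return {
        "data_health_pct": round(health, 2),
        "throughput_rps": round(throughput, 2),
        "avg_response_ms": round(avg_latency, 2),
    }
```

In the described architecture, such figures would be pushed into Elasticsearch so Kibana can chart them in real time.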
Attend this session to understand the basic needs of future application testing:
- Understanding of data and the importance of data quality
- Why automation is an essential strategy for data testing
- Vertical continuous flow and horizontal flow of data in the pipeline
- Potential solution demo with an implemented use case for real-time experience
- Generic code shared with attendees for enhancement
- KPI considerations for data validation
Outline/Structure of the Demonstration
1. Introduction - 2 min
2. Problem Statement - 2 min
3. Proposed Solution - 3 min
4. Design & Architecture - 5 min
5. Implementation - 20 min
6. Exceptions handling - 3 min
7. KPI considerations - 5 min
8. Q&A - 5 min
- Challenges in the current technology trend
- Proposed solution and exception handling
- KPIs and metrics
Developers & testers
Prerequisites for Attendees
People who liked this proposal, also liked:
Ragavan Ambighananthan - Why cross browser and device platforms are ripe for disruption? (Principal Software Engineer in Test, Expedia Group)
Goal: To scale desktop/mobile web/app test automation in a way that meets the demands of good software development design patterns like Shift Left while remaining cost-effective.
Problem Statement: Current cross browser/device platforms are not built to handle, in a cost-efficient way, the real scalability that software development design patterns require.
Expensive parallel connection limit: Most, if not all, cross browser platforms offer their services based on the number of parallel connections.
Shift Left and Scalability: The problem with the current approach followed by these platforms is that it is not aligned with software development best practices like Shift Left. With Shift Left, the automation suite runs for every commit in a branch of a project, so the number of tests running at any point in time is significantly high. With this being repeated by many teams, within their own CI/CD pipelines, across an organisation, the demand for parallel connections increases exponentially. The cost to support this using the current cross browser and cross device approach is astronomical.
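A back-of-envelope sketch of why the demand grows so fast under Shift Left. All the numbers here are illustrative assumptions, not figures from the proposal:

```python
# Worst-case concurrent session demand when every active branch of every
# team triggers its CI suite at the same time. Inputs are illustrative.

def peak_parallel_sessions(teams, branches_per_team, parallel_tests_per_run):
    """Upper bound on simultaneous browser/device sessions needed."""
    return teams * branches_per_team * parallel_tests_per_run

# e.g. 40 teams, 5 active branches each, 10 parallel tests per run
# already demands 2000 simultaneous sessions from the platform.
```

Pricing that many parallel connections on a per-connection basis is what makes the current model astronomical at organisation scale.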
Restricted Real Device Usage: Most cross browser and cross device platforms primarily support real devices rather than emulators or simulators. With real devices, there is a restriction on the number of concurrent tests you can run, based on device types and tiers, even if you have purchased more parallel connections. This is due to the limited number of real devices a platform has and the need to share them with thousands of customers on demand.
Data Center vs Cloud: Fourth, most of these platforms rely on data centres instead of the cloud, hence their ability to dynamically scale desktop/mobile automation infrastructure is very limited.
Desktop is still king of conversion: Fifth, as much as we would like to think that mobile web has matured, the conversion rate on mobile is the lowest compared to desktop/tablet, as per Akamai. This could be due to many reasons, including performance and usability. It also means that, since conversion is higher on desktop, testing on desktop browsers is still by far more important.
In App Browser Testing: Sixth, the In App Browser is a new trend: you might have tested your mobile web on different browsers, but it will probably mostly be viewed in an In App Browser like Facebook's (when you open a site in Facebook, it opens within Facebook's realm instead of in a standalone browser). Even though In App Browsers use Chrome or Safari under the hood, many users complain that their site is broken when viewed via an In App Browser. Companies like Facebook and LinkedIn want to keep you within their realm so they can track you, so your mobile web site's experience should also be tested in these In App Browsers. Facebook In App Browser usage showed a 42% increase from 2018 to 2019, as per Akamai. This means you have to test your mobile site in In App Browsers as well.
Emerging Country Specific Browsers: The revenue share for international eCommerce companies has traditionally been very high (more than 50%) from the U.S., but this is changing, and it is now normal for ~50-60% of a company's revenue to come from non-US markets. This is another reason to look at local browser usage habits. The Chromium-based Cốc Cốc browser is used by 25 million people in Vietnam. UC Browser and QQ Browser are numbers 2 and 3 in China, and UC Browser is number 2 in India. Yandex is the number 3 browser in the Russian Federation. Just these 4 countries alone have around 2.5 billion people. To summarise the problems with the current cross browser/device platforms: a) expensive parallel connections, b) limited scalability that is not suitable for good SDLC design patterns, c) real device restrictions, d) data centre limitations, e) new use cases increasing the scope and frequency of testing, f) new region-specific browsers.
New Opportunities - Evolution of Technologies: Now let us look at what has changed in terms of technology that could take us away from the above problems. a) AWS mac1.metal (Mac mini computers) - AWS is, for the first time, providing scalable Mac minis. Even though auto scaling is not supported yet, this can be used as a scalable solution to run OS X Safari and the iOS Simulator at scale for automation. b) Many companies provide a "Mac Mini Cloud", including Apple Xcode Cloud (beta) for device testing. c) With AWS bare metal instances, you can scale Android Emulators to any limit. d) With legacy IE being discontinued in June 2022, there is one less browser to worry about. e) Proprietary solutions like MacStadium Orka, which allow running OS X as a container in K8s, are bound to change the game.
Solution: Going by the Mobile Test Pyramid, the bottom layer uses a desktop browser to emulate mobile devices' breakpoints to test the responsiveness of pages; an example would be the Chrome Emulator. A scalable solution for this can be implemented using cloud providers like AWS combined with K8s. The second layer of the Mobile Test Pyramid uses the Android Emulator/iOS Simulator; again, with AWS/GCP/Azure and other OS X cloud providers, the iOS Simulator/OS X Safari/Android Emulator can be run at scale. Most mobile web use cases can be tested on emulators/simulators and implemented at scale using cloud providers; mobile apps may have some exceptions. For mobile web testing, there is no need to test Bluetooth, GPS, battery drain, calling, etc. For the top layer of the pyramid, real devices can be used from cross browser platforms for sanity cases, thus keeping the cost down.
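The bottom pyramid layer can be sketched as a capabilities payload for a Remote WebDriver session. The `mobileEmulation` option is a standard ChromeDriver capability; the device name and the idea of pointing it at a self-hosted grid are illustrative assumptions:

```python
# Sketch of the bottom Mobile Test Pyramid layer: desktop Chrome emulating
# a mobile breakpoint. Device name is a placeholder; "mobileEmulation" is a
# real ChromeDriver experimental option.

def chrome_mobile_emulation_caps(device_name: str) -> dict:
    """Build W3C-style capabilities for Chrome's built-in mobile emulation."""
    return {
        "browserName": "chrome",
        "goog:chromeOptions": {
            "mobileEmulation": {"deviceName": device_name},
        },
    }
```

Such capabilities would be handed to a Remote WebDriver pointed at a scalable grid (e.g. Selenium Grid on K8s) rather than at a metered real-device cloud, which is exactly where the cost saving comes from.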
Wim Selles - Swiping your way through Appium (Sr. Solutions Architect, Sauce Labs)
Mobile applications are becoming more and more important in our daily lives. From ordering clothes to grocery shopping, the services available via an app are increasing rapidly and users expect a seamless experience. This means that the automation focus is shifting more towards mobile devices.
But did you know that there is a huge difference between interacting with a desktop browser and a mobile app? And that difference is just a few tiny hand motions! With desktop browser automation we mainly focus on using our mouse, but on devices we use our fingers to execute all different kinds of gestures, like swipe, scroll, pinch/zoom, and many more. Did you also know that automating mobile gestures is one of the most overlooked features in mobile automation? The most common reason for this could be that we don't know how to do it, or that it just seems too difficult.
During this presentation, we will focus on how to mimic mobile gestures with Appium for Android and iOS. With a sample app we will explore and understand different gestures including how to scroll, swipe, and pinch/zoom and then create cross-platform and cross-device gestures. By the end of this presentation, you’ll learn how to improve the user experience of your mobile applications by adding gestures to your automation scripts.
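In the spirit of the cross-device gestures the talk describes, a small sketch of how swipe coordinates can be derived from the screen size so the same gesture works on any device. The Appium driver and W3C action plumbing are deliberately omitted; this hypothetical helper only computes the points:

```python
# Cross-device swipe helper (sketch): derive start/end points from screen
# dimensions instead of hard-coding pixels. edge_ratio is an assumption.

def swipe_points(width, height, direction, edge_ratio=0.2):
    """Return (start_x, start_y, end_x, end_y) for a full-screen swipe."""
    cx, cy = width // 2, height // 2
    near_y, far_y = int(height * edge_ratio), int(height * (1 - edge_ratio))
    near_x, far_x = int(width * edge_ratio), int(width * (1 - edge_ratio))
    if direction == "up":        # finger moves bottom -> top
        return cx, far_y, cx, near_y
    if direction == "down":      # finger moves top -> bottom
        return cx, near_y, cx, far_y
    if direction == "left":      # finger moves right -> left
        return far_x, cy, near_x, cy
    if direction == "right":     # finger moves left -> right
        return near_x, cy, far_x, cy
    raise ValueError(f"unknown direction: {direction}")
```

Because the points are proportional to the screen, the same call works on a small Android phone and a large iPad, which is the essence of cross-platform, cross-device gestures.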
Dmytro Budym - Mobile automation infrastructure from scratch (Software Development Engineer in Test, airSlate)
Mobile automation is very challenging, from selecting a testing framework to preparing the infrastructure.
Where will you run the tests? On an emulator or a real device, a cloud platform or a local machine?
Today I want to show how to build Android and iOS emulator clusters to run tests with Appium.
- for Android we will use Selenoid, which automatically runs containers with emulators
- for iOS we will use Selenium Grid and connect Appium servers on Macs as nodes
So now you can forget about passing UDIDs to your tests. Just have one entry point per platform: point your remote driver at the host and run the tests.
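The "one entry point per platform" idea can be sketched as a tiny lookup that tests call instead of carrying device UDIDs. The host names here are placeholder assumptions, not real infrastructure:

```python
# One entry point per platform (sketch): tests resolve a hub URL instead of
# passing device UDIDs. Host names are hypothetical placeholders.

HUBS = {
    "android": "http://selenoid.internal:4444/wd/hub",   # Selenoid cluster
    "ios": "http://selenium-grid.internal:4444/wd/hub",  # Grid with Mac nodes
}

def hub_url(platform: str) -> str:
    """Resolve the single remote entry point for a platform."""
    try:
        return HUBS[platform.lower()]
    except KeyError:
        raise ValueError(f"unsupported platform: {platform}")
```

A test would then construct its remote Appium driver against `hub_url("android")` or `hub_url("ios")`, and the cluster decides which emulator or simulator actually serves the session.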