
Rajni Singh
Specialises In
Rajni has worked in nearly every software development role in her career: dev, test, DevOps, security, performance, and program management. She is proficient in building automation frameworks for high-quality services, in performance engineering, and in developing strategy for emerging technologies such as blockchain, conversational AI, IoT, and AR/VR. She is passionate not only about maximizing efficiency in her technical skills and strategies, but also about sharing best practices with colleagues and the tech world at large.
Deciphering the way data is tested: Automate the movement, transformation & visualization of data
45 Mins
Demonstration
Beginner
What is the quality of data?
Is it good enough to be collected, consumed, and interpreted for business usage?
And how should we use this data?
These and many more questions arise when a tester is involved in testing applications built on big data, AI, IoT, and analytics solutions.
Ambiguity has always been a key challenge for testers - be it with ambiguous requirement definitions or with unstable test environments. But testing data and big data workflows adds a completely new level of uncertainty to a tester’s life with modern technologies.
Data validation is simply verifying the correctness of data. The big data testing pipeline consists of horizontal workflows where data transformations occur continuously, managing a series of steps that process and transform the data. The result can be stored in a database for analysis (machine learning models, BI reports) or act as an input to other workflows.
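To make the validation idea concrete, here is a minimal sketch of record-level rules in Python; the record type, field names, and thresholds are invented for illustration, and in the real pipeline such rules run inside the streaming engine rather than per record in plain Python:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative record shape; field names are made up for this sketch.
@dataclass
class SensorReading:
    sensor_id: Optional[str]
    temperature: Optional[float]

def validate(reading: SensorReading) -> List[str]:
    """Return a list of rule violations; an empty list means the record is valid."""
    errors = []
    if not reading.sensor_id:
        errors.append("missing sensor_id")            # completeness check
    if reading.temperature is None:
        errors.append("missing temperature")
    elif not -40.0 <= reading.temperature <= 125.0:   # assumed plausible range
        errors.append("temperature out of range")     # plausibility check
    return errors

print(validate(SensorReading("s-01", 22.5)))   # [] -> valid
print(validate(SensorReading(None, 999.0)))    # two violations
```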
This session provides a solution to the challenges faced while testing data for an application (with big data, IoT, a mesh of devices, artificially intelligent algorithms) and with data analytics, such as:
- Lack of technical expertise and coordination
- Heterogeneous data formats
- Inadequacy of data anomaly identification
- Huge data sets and a real-time stream of data
- Understanding the data sentiment
- Continuous testing and monitoring
The research employed an open-source solution for the implementation. Apache Kafka was used to gather batch data and streaming data (sensor/log data). Apache Spark Streaming consumed the data from Kafka in real time and carried out the validations in the Spark engine. Further along the workflow, the data was stored in Apache Cassandra and then shipped through Logstash into Elasticsearch to generate real-time reports/graphs in Kibana. The proposed tool is generic as well as highly configurable, so it can incorporate any open-source tool you need for streaming, processing, or storing the data. The system includes configuration files where every detail of each dependent tool is recorded and can be modified as needed.
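As a rough illustration of the validation stage, here is a minimal PySpark Structured Streaming sketch; the broker address, topic name, and payload schema are assumptions for this example, and a console sink stands in for the Cassandra write performed by the actual tool:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Requires the Kafka connector package, e.g.:
# spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 validate_stream.py
spark = SparkSession.builder.appName("stream-validation").getOrCreate()

# Assumed payload schema for this sketch.
schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("temperature", DoubleType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
       .option("subscribe", "sensor-readings")               # assumed topic
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("r"))
          .select("r.*"))

# Validation rules: completeness and plausibility.
is_valid = col("sensor_id").isNotNull() & col("temperature").between(-40.0, 125.0)
valid = parsed.filter(is_valid)

# In the full pipeline the valid stream is written to Cassandra via the
# spark-cassandra-connector; the console sink keeps this sketch self-contained.
query = valid.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```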
This solution aims to analyze various key performance indicators (KPIs) for big data, such as data health checks, downtime, and time-to-market, as well as throughput and response time. The tool can be considered a pluggable solution that can efficiently drive big data testing and uplift data quality for further usage.
Attend this session to understand the basic needs of future application testing:
- Understanding of data and the importance of data quality
- Why automation is an essential strategy for data testing
- Vertical continuous flow for data and the horizontal flow of data in the pipeline
- Potential solution demo with an implemented use case for real-time experience
- Generic code will be shared with attendees for enhancement
- KPIs to consider for data validation (see the sketch below)
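As a taste of the KPI side, here is a tiny sketch of how a few of these indicators might be computed from one validation window; the function name and inputs are invented for illustration:

```python
def data_quality_kpis(total, valid, window_seconds, latencies_ms):
    """Compute a few illustrative KPIs from one validation window."""
    return {
        "data_health_pct": 100.0 * valid / total if total else 0.0,  # share of records passing validation
        "throughput_rps": total / window_seconds,                    # records processed per second
        "avg_response_ms": sum(latencies_ms) / len(latencies_ms),    # mean end-to-end latency
    }

print(data_quality_kpis(total=10_000, valid=9_850, window_seconds=60,
                        latencies_ms=[12, 15, 11, 20]))
```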
When a QA met reality technology: A strategy to test a reality-based application
45 Mins
Tutorial
Intermediate
Reality technologies - augmented reality, virtual reality, and mixed reality - are amongst the top technologies that businesses are looking forward to, and it is a phenomenal task to find an expert to test apps built on these technologies.
There are a lot of platforms and tools available for reality-based application development, but what we lack is a standardized strategy for testing reality-based applications. The way testing is performed on these applications differs a lot from “traditional” testing - and can even pose health hazards - because of the multi-dimensional environments and simulations that use vision, hearing, and touch to interact with the artificial world.
Given the immersive component of these technologies, lab testing and automation are largely ineffective. A robust testing strategy - one that identifies the right parameters to test, makes the shift from traditional testing to immersive testing, and captures best practices - is the need of the hour.
After testing numerous clients’ reality-based applications and our own internal applications at our CoE (center of excellence), we observed that it is impossible to anticipate and cover all the situations that end-users can experience with a device and an application.
Traditional testing strategies are not working well for reality-based applications – testers face new challenges in their daily work, while their experience and structured methods are becoming (to a certain degree) ineffective.
With the world around us leveraging reality-based technology, the current situation offers an opportunity to upskill testers so they can test immersive technology effectively.
Although testing new technologies and applications is always exciting, seeing their own strategies and tools fail can be frustrating even for well-seasoned testers. I aim to provide testers with a better understanding and help them apply their critical thinking to deal with uncertainty in their test objects - and, in addition, to understand the business needs, the opportunities to optimize and innovate, and how to develop quality products. The fact is that the realm of reality technology is very different, and it therefore needs specialists to test it.
End-to-end testing strategies for the intelligently connected hybrid world of IoT
45 Mins
Tutorial
Intermediate
The intelligent mesh of devices is nothing but millions of users, use cases, apps, and devices connected together to support applications built on the Internet of Things. Around the world today there are thousands of use cases, millions of apps, billions of users, and trillions of things; when you consider QA and testing across these interconnected intelligent devices, the scope becomes very wide, as verification and validation apply at every interface and grow as the mesh grows.
I will talk about the challenges faced during IoT application testing and how it can go wrong; how an IoT test lab is set up to test all these areas thoroughly despite the challenges; and the solutions that overcome them. Important aspects include continuous integration in a hybrid environment, testing with multiple devices and millions of use cases, improving the existing conventional methods with intelligent automation, and, most importantly, scalability and security.
Although testing emerging technologies and applications is always exciting, seeing their own strategies and tools fail in IoT testing can be frustrating even for well-seasoned testers. I aim to provide testers with a better understanding of connected systems like smart cities and the connected enterprise, and help them apply their critical thinking to deal with uncertainty in their test objects.
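As one example of what such a lab exercises, here is a minimal sketch that simulates a small fleet of devices publishing telemetry over MQTT with paho-mqtt; the broker address, topic, and payload fields are assumptions for illustration, and a real scale test would run many such publishers in parallel:

```python
import json
import random
import paho.mqtt.publish as publish

# Hypothetical broker and topic for the lab setup; adjust to your environment.
BROKER = "localhost"
TOPIC = "lab/telemetry"

# Simulate a fleet of ten devices, each emitting twenty telemetry messages.
messages = [
    (TOPIC,
     json.dumps({"device_id": f"sensor-{d:02d}",
                 "temperature": round(random.uniform(15.0, 35.0), 2)}),
     1,      # QoS 1: at-least-once delivery
     False)  # do not retain
    for d in range(10) for _ in range(20)
]

publish.multiple(messages, hostname=BROKER)
print(f"published {len(messages)} simulated readings")
```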
Testing uncertainty for a chatbot
45 Mins
Demonstration
Intermediate
Uncertainty has always been a key challenge for testers - be it with the ambiguous definition of requirements or with unstable test environments. But testing a chatbot adds a completely new level of uncertainty to a tester’s life. There are a lot of platforms and tools available for chatbot development, but what we lack is a standardized chatbot testing strategy. The way testing is performed on chatbots differs a lot from the “traditional” testing (for example of an app or web portal) due to the apparent randomness of a conversation with a chatbot.
Testing numerous clients’ chatbots and our own chatbot, we found that it is impossible to anticipate and cover all the situations that can happen during a conversation with a chatbot. As we introduced learning components to the chatbot (AI / machine learning, intent training), the chatbot evolved and changed its behavior compared to previous test runs. This increases the need for regression tests and complicates them at the same time. There is no limitation on user input - any user can type anything to a chatbot - so functionality, security, performance, and exception handling need to be robust. Key areas for testing the chatbot were the conversational flow and the natural language processing model, as well as onboarding, personality, navigation, error management, and the speed and accuracy of the given answers. Chatting with the chatbot, we learned the importance of real-time feedback in order to collect data about unexpected behavior and invalid responses.
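To give a flavor of what an automated regression test for the NLP model can look like, here is a minimal pytest sketch against a Rasa-style NLU parse endpoint; the URL, utterances, intent names, and confidence threshold are assumptions for illustration:

```python
import pytest
import requests

BOT_URL = "http://localhost:5005/model/parse"  # Rasa-style NLU endpoint (assumed)

# Each case pairs a user utterance with the intent the model is expected to return.
CASES = [
    ("hi there", "greet"),
    ("I want to book a flight to Delhi", "book_flight"),
    ("asdkjh qwe!!", "nlu_fallback"),  # garbage input should land in the fallback intent
]

@pytest.mark.parametrize("utterance,expected_intent", CASES)
def test_intent_classification(utterance, expected_intent):
    resp = requests.post(BOT_URL, json={"text": utterance}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["intent"]
    assert result["name"] == expected_intent
    # Guard against a model that is technically right but barely confident
    # (the 0.7 threshold is an assumed project convention).
    assert result["confidence"] >= 0.7
```

Because a learning chatbot’s behavior drifts between runs, asserting on intent and confidence rather than exact reply text keeps such regression suites stable.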
I will talk about the challenges faced during chatbot testing and how it can go wrong. We will address these challenges and suggest how they can be mitigated by different chatbot testing strategies. I will share our experience with commercial tools for chatbot testing, as well as using our own advanced automation framework with open source tools.
Traditional testing strategies are not working well for chatbots – testers face new challenges in their daily work, while their experience and structured methods are becoming (to a certain degree) ineffective. Although testing new technologies and applications is always exciting, seeing their own strategies and tools fail in chatbot testing can be frustrating even for well-seasoned testers. I aim to provide testers with a better understanding of chatbots and help them apply their critical thinking to deal with uncertainty in their test objects.