Member for 8 months
Rajni has worked in nearly every software development role in her career: dev, test, DevOps, security, performance, and program management. She is proficient in developing automation frameworks for high-quality services, in performance engineering, and in developing strategies for emerging technologies such as blockchain, conversational AI, IoT, and AR/VR. She is passionate not only about maximizing efficiency in her technical skills and strategies but also about sharing best practices with colleagues and the tech world at large.
When QA meets reality technology: A strategy for testing reality-based applications
Reality technologies, such as augmented reality, virtual reality, and mixed reality, are among the top technologies that businesses are looking forward to, and finding an expert to test applications built on them is a phenomenal task.
There are a lot of platforms and tools available for reality-based application development, but what we lack is a standardized testing strategy for reality-based applications. The way testing is performed on these applications differs greatly from "traditional" testing, and can even be a health hazard for testers, because of the multi-dimensional environments and simulations that use vision, hearing, and touch to interact with the artificial world.
Given the immersive component of these technologies, lab testing and conventional automation are largely ineffective. A robust testing strategy is therefore required: understanding the current parameters for testing, making the shift from traditional testing to immersive testing, and establishing best practices for testing are the need of the hour.
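One concrete parameter an immersive-testing strategy can check is the frame-time budget, since sustained slow frames in VR are a well-known cause of motion sickness. The sketch below is a minimal, hypothetical example: it assumes frame times have already been recorded from the device, and the trace values are illustrative sample data, not real output from any headset.

```python
# Minimal sketch: checking the frame-time budget of an immersive app from
# a recorded trace. Many VR headsets target 90 FPS, so frames slower than
# ~11.1 ms count as "dropped" and degrade comfort. The trace below is
# illustrative sample data, not real device output.

FRAME_BUDGET_MS = 1000.0 / 90.0  # ~11.1 ms per frame at 90 FPS

def dropped_frame_ratio(frame_times_ms):
    """Return the fraction of frames that exceeded the frame budget."""
    if not frame_times_ms:
        return 0.0
    dropped = sum(1 for t in frame_times_ms if t > FRAME_BUDGET_MS)
    return dropped / len(frame_times_ms)

# Illustrative trace: mostly on budget, with two slow frames.
trace = [10.8, 10.9, 11.0, 15.2, 10.7, 22.4, 10.9, 11.0, 10.8, 10.9]
ratio = dropped_frame_ratio(trace)
print(f"dropped frames: {ratio:.0%}")
assert ratio < 0.25, "too many dropped frames for a comfortable experience"
```

A check like this can run on recorded sessions even when full lab automation of the immersive experience itself is impractical.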
After testing numerous clients' reality-based applications and our own internal applications at our CoE (center of excellence), we observed that it is impossible to anticipate and cover every situation an end user can experience with a device and an application.
Traditional testing strategies are not working well for reality-based applications – testers face new challenges in their daily work, while their experience and structured methods are becoming (to a certain degree) ineffective.
I will cover how the world around us is leveraging reality-based technology, the current state of testing, and the opportunity to upskill testers to test immersive technology effectively.
Although testing new technologies and applications is always exciting, seeing their own strategies and tools fail can be frustrating even for well-seasoned testers. I aim to give testers a better understanding and help them apply their critical thinking to deal with uncertainty in their test objects, while also understanding the business need for opportunities to optimize, innovate, and develop quality products. The fact is that the realm of reality technology is very different, and it therefore needs specialists to test these technologies.
End-to-end testing strategies for the intelligently connected hybrid world of IoT
The intelligent mesh of devices connects millions of users, use cases, apps, and devices to support applications built on the Internet of Things. Around the world today there are thousands of use cases, millions of apps, billions of users, and trillions of things; the scope of QA and testing for these interconnected intelligent devices is therefore very wide, as verification and validation apply at every interface and keep growing as the mesh grows.
I will talk about the challenges faced during IoT application testing and how it can go wrong, how an IoT test lab is set up to test all these areas thoroughly despite those challenges, and what the solutions to overcome them are. I will also cover important aspects such as continuous integration in a hybrid environment, testing with multiple devices and millions of use cases, improving the existing conventional methods with intelligent automation, and, most importantly, scalability and security.
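One way to exercise verification logic at IoT scale without a full physical lab is to simulate fleets of virtual devices against a broker stand-in. The sketch below is a minimal, self-contained illustration under that assumption: `BrokerStub` and `simulate_devices` are hypothetical names, standing in for a real MQTT-style broker and device firmware, not any actual IoT platform API.

```python
# Minimal sketch of scale testing with simulated devices: a thousand
# virtual devices publish telemetry to an in-memory broker stub so the
# verification logic can be exercised at scale. BrokerStub and
# simulate_devices are illustrative stand-ins, not a real IoT platform.

from collections import defaultdict

class BrokerStub:
    """In-memory stand-in for an MQTT-style message broker."""
    def __init__(self):
        self.messages = defaultdict(list)  # topic -> list of payloads

    def publish(self, topic, payload):
        self.messages[topic].append(payload)

def simulate_devices(broker, count):
    """Each virtual device publishes one telemetry reading to its topic."""
    for device_id in range(count):
        broker.publish(f"telemetry/device-{device_id}",
                       {"device": device_id, "temp_c": 20 + device_id % 5})

broker = BrokerStub()
simulate_devices(broker, 1000)

# Verification: every device reported, and every payload is well formed.
assert len(broker.messages) == 1000
assert all("temp_c" in msgs[0] for msgs in broker.messages.values())
```

The same pattern scales the device count up or swaps the stub for a real broker client when moving from the lab into a hybrid environment.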
Although testing emerging technologies and applications is always exciting, seeing their own strategies and tools fail in IoT testing can be frustrating even for well-seasoned testers. I aim to give testers a better understanding of connected systems such as smart cities and the connected enterprise, and help them apply their critical thinking to deal with uncertainty in their test objects.
Testing uncertainty for a chatbot
Uncertainty has always been a key challenge for testers - be it with the ambiguous definition of requirements or with unstable test environments. But testing a chatbot adds a completely new level of uncertainty to a tester’s life. There are a lot of platforms and tools available for chatbot development, but what we lack is a standardized chatbot testing strategy. The way testing is performed on chatbots differs a lot from the “traditional” testing (for example of an app or web portal) due to the apparent randomness of a conversation with a chatbot.
Testing numerous clients' chatbots and our own chatbot, we learned that it is impossible to anticipate and cover all the situations that can occur during a conversation with a chatbot. As we introduced learning components to the chatbot (AI / machine learning, intent training), the chatbot evolved and changed its behavior compared to previous test runs. This increases the need for regression tests and complicates them at the same time. There is no limit on user input: any user can type anything to a chatbot, so functionality, security, performance, and exception handling need to be robust. Key areas for testing the chatbot were the conversational flow and the natural language processing model, as well as onboarding, personality, navigation, error management, and the speed and accuracy of the given answers. Chatting with the chatbot, we learned the importance of real-time feedback for collecting data about unexpected behavior and invalid responses.
I will talk about the challenges faced during chatbot testing and how it can go wrong. We will address these challenges and suggest how they can be mitigated by different chatbot testing strategies. I will share our experience with commercial tools for chatbot testing, as well as using our own advanced automation framework with open source tools.
Traditional testing strategies are not working well for chatbots – testers face new challenges in their daily work, while their experience and structured methods are becoming (to a certain degree) ineffective. Although testing new technologies and applications is always exciting, seeing their own strategies and tools fail in chatbot testing can be frustrating even for well-seasoned testers. I aim to give testers a better understanding of chatbots and help them apply their critical thinking to deal with uncertainty in their test objects.