Uncertainty has always been a key challenge for testers, whether it stems from ambiguously defined requirements or from unstable test environments. Testing a chatbot, however, adds a completely new level of uncertainty to a tester's life. Many platforms and tools are available for chatbot development, but what we lack is a standardized chatbot testing strategy. Testing a chatbot differs greatly from "traditional" testing (of an app or web portal, for example) because of the apparent randomness of a conversation with a chatbot.

Testing numerous clients' chatbots as well as our own, we learned that it is impossible to anticipate and cover every situation that can arise in a conversation with a chatbot. As we introduced learning components to the chatbot (AI / machine learning, intent training), the chatbot evolved and changed its behavior compared to previous test runs. This increases the need for regression tests and complicates them at the same time. There is no limit on user input: anyone can type anything to a chatbot, so functionality, security, performance and exception handling need to be robust. Key areas for chatbot testing were the conversational flow and the natural language processing (NLP) model, as well as onboarding, personality, navigation, error management, and the speed and accuracy of the given answers. Chatting with the chatbot, we learned the importance of real-time feedback for collecting data about unexpected behavior and invalid responses.

I will talk about the challenges we faced during chatbot testing and how it can go wrong. We will address these challenges and suggest how they can be mitigated by different chatbot testing strategies. I will share our experience with commercial tools for chatbot testing, as well as with our own advanced automation framework built on open-source tools.
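To make the idea of conversation-level regression testing concrete, here is a minimal sketch in Python. The bot client is a hypothetical stub standing in for the real chatbot API (in practice it would wrap an HTTP or WebSocket call to the bot under test, and tools such as Botium express the same idea as convo files); the intents, utterances and replies are illustrative only.

```python
# Minimal sketch of a conversation-level regression test for a chatbot.
# StubBotClient is a hypothetical stand-in for the real chatbot API.

class StubBotClient:
    """Toy keyword-based bot used only to illustrate the test harness."""
    INTENTS = {
        "hi": ("greeting", "Hello! How can I help you?"),
        "opening hours": ("hours", "We are open 9am-5pm, Monday to Friday."),
    }

    def send(self, utterance: str) -> dict:
        for keyword, (intent, reply) in self.INTENTS.items():
            if keyword in utterance.lower():
                return {"intent": intent, "text": reply}
        # Unexpected input must be handled gracefully, never crash.
        return {"intent": "fallback", "text": "Sorry, I didn't understand that."}


def run_conversation_test(client, steps):
    """Drive a scripted conversation. Each step is
    (user utterance, expected intent); returns the list of mismatches."""
    failures = []
    for utterance, expected_intent in steps:
        response = client.send(utterance)
        if response["intent"] != expected_intent:
            failures.append((utterance, expected_intent, response["intent"]))
    return failures


if __name__ == "__main__":
    steps = [
        ("Hi there", "greeting"),
        ("What are your opening hours?", "hours"),
        ("asdfgh", "fallback"),  # garbage input should hit the fallback intent
    ]
    print("failures:", run_conversation_test(StubBotClient(), steps))
```

Because the expected intent (rather than the exact reply text) is asserted, the same script stays useful as a regression test even while the NLP model retrains and the wording of answers evolves.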

Traditional testing strategies do not work well for chatbots: testers face new challenges in their daily work, while their experience and structured methods become (to a certain degree) ineffective. Although testing new technology and applications is always exciting, seeing their own strategies and tools fail in chatbot testing can be frustrating even for well-seasoned testers. I aim to give testers a better understanding of chatbots and help them apply their critical thinking to deal with uncertainty in their test objects.


Outline/Structure of the Demonstration

  1. The current demand for conversational AI and the importance of testing
  2. Challenges of conversational AI testing
  3. Challenges during actual implementation and testing
  4. Why testing uncertainty in chatbots needs new testing strategies
  5. Scope of testing for chatbots
  6. Test automation demo for chatbot testing with open-source and commercial tools
  7. KPIs, measures and standards
  8. Advanced security implementation testing
  9. Takeaway: testers gain a new view on testing uncertainty and are ready to create a well-defined chatbot testing strategy for effective end-to-end testing of chatbots







Q&A: 5 min

Learning Outcome

At the end of the session, attendees will learn about:

1. An end-to-end test automation framework for testing conversational AI and the NLP model

2. A utility to auto-generate test cases as conversations using an open-source tool

3. Which KPIs and standards to follow

4. Advanced security measures to ensure chatbot security

5. Domain-specific, reusable conversational AI test case generation
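The auto-generation of test cases as conversations (outcome 2 above) can be sketched as expanding utterance templates per intent. The template format and intent names below are hypothetical, invented for illustration; open-source tools such as Botium Crawler offer comparable functionality out of the box.

```python
# Minimal sketch: expand utterance templates into (utterance, expected intent)
# test steps. INTENT_TEMPLATES is a hypothetical format for illustration.
import itertools

INTENT_TEMPLATES = {
    "book_flight": {
        "templates": ["I want to fly to {city}", "Book a flight to {city}"],
        "fillers": {"city": ["Berlin", "Pune"]},
    },
}


def generate_conversations(intent_templates):
    """Expand every template with every combination of filler values,
    producing (utterance, expected_intent) pairs for a test runner."""
    cases = []
    for intent, spec in intent_templates.items():
        keys = sorted(spec["fillers"])
        for template in spec["templates"]:
            for values in itertools.product(*(spec["fillers"][k] for k in keys)):
                utterance = template.format(**dict(zip(keys, values)))
                cases.append((utterance, intent))
    return cases


if __name__ == "__main__":
    for utterance, intent in generate_conversations(INTENT_TEMPLATES):
        print(f"{intent}: {utterance}")  # 2 templates x 2 cities = 4 test cases
```

Because the templates live alongside the domain vocabulary, the same definitions can be reused across projects in that domain, which is what makes the generated test cases domain-specific and reusable.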

Target Audience

QA / test automation engineers who want an end-to-end test automation framework for a chatbot; developers who want to understand advanced security measures; project managers who want to understand the scope of chatbot testing.

Prerequisites for Attendees

There are no prerequisites for this session.


Submitted 2 years ago
