Online · Nov 19th, 11:30 AM - 01:00 PM IST · Zoom

The Test Automation Pyramid is not a new concept.


The top of the pyramid holds our UI / end-to-end functional tests, which simulate end-user behavior and interactions with the product-under-test.

While automation helps validate the functionality of your product, aspects of UX validation can only be seen and captured by the human eye, and are hence mostly a manual activity. This is an area where AI & ML can truly help.

With everyone wanting to be Agile and make quick releases, the look & feel / UX validation, which is typically a manual, slow, and error-prone activity, quickly becomes a huge bottleneck.

In addition, any UX-related issues that crop up cause huge brand-value and revenue loss, may lead to social trolling, and worse, dilute your user base.

In this hands-on workshop, using numerous examples, we will explore:

  • Why Automated Visual Validation is an essential part of your Test Strategy
  • How Visual AI increases the coverage of your functional testing, while reducing code and increasing the stability of your automated tests
  • Potential solutions / options for Automated Visual Testing, with pros & cons of each
  • How an AI-powered tool, Applitools Eyes, can solve this problem
  • Hands-on look at Applitools Visual AI and how to get started using it

Outline/Structure of the Workshop

  • Importance of Visual Testing – 5 min
    • Domains
    • Platforms
  • The 3 challenges of Visual Testing – 2 min
  • Visual Testing Techniques – 3 min
    • Pixel comparison
    • AI-based comparison
  • Getting started with Applitools Visual AI – 45 min
    • Run your basic Selenium / Appium test – without Visual AI – 10 min
    • Integrate Applitools Visual AI in your automation – 5 min
    • Run your Selenium / Appium test with Applitools Visual AI – 10 min
    • Customising your visual validation – fluent APIs – 15 min
    • Custom reports with Visual AI results – 5 min
    • Q&A & Debugging
  • Scaling your Automation – Applitools Ultrafast Grid – 15 min
    • Limitations of current scaling / cross-browser testing approach – 5 min
    • Introduction to Applitools Ultrafast Grid – 5 min
    • Demo – 5 min
  • Advanced features – 10 min
    • A/B testing
    • Contrast Advisor
    • Insights
  • Q&A – 10 min
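The naive pixel-comparison technique mentioned in the outline above can be sketched minimally. This is an illustrative, hypothetical example, not Applitools code; it is the brittle baseline that AI-based comparison improves on:

```java
import java.awt.image.BufferedImage;

public class PixelDiff {
    // Naive pixel-by-pixel comparison: two screenshots are "equal"
    // only if every single pixel matches exactly.
    public static boolean identical(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BufferedImage baseline = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage current = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        System.out.println(identical(baseline, current)); // true: both all-black
        current.setRGB(0, 0, 0xFFFFFF);                   // one white pixel
        System.out.println(identical(baseline, current)); // false
    }
}
```

Even a one-pixel rendering difference (anti-aliasing, font smoothing) fails such a comparison, which is why pixel-diffing alone tends to be noisy across browsers and devices.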

Learning Outcome

  • Examine the importance and need of Visual Testing
  • Use Visual AI to increase the coverage of your functional testing, while reducing code and increasing the stability of your automated tests
  • Execute Visual Testing using Applitools Visual AI as part of your Functional Test Automation (Web / Mobile-Web / Native Apps), and CI pipelines
  • Practise different Applitools Visual AI capabilities to customise the automated tests based on the context of your application

Target Audience

Developers, QA, SDET, Automation Engineers

Prerequisites for Attendees

  • This is a hands-on workshop and requires participants to run automated tests using Selenium WebDriver / Appium.
  • Sample code and setup instructions will be provided before the workshop to help get started.
  • If you have your own test framework, you may use that instead of the sample provided for the workshop.

Machine setup instructions


Please run the following commands on your laptops to ensure connectivity to the Applitools server.

The response status code for each of these commands should be 2xx / 3xx.

Instructions for Windows OS:

Run the following commands in a PowerShell window and note the response status code:

If you get an error in the console / terminal window with a message such as FORBIDDEN / ACCESS DENIED / PROXY ERROR / etc., then try the same commands providing the proxy details:

NOTE: Based on your network configuration, the -ProxyCredential parameter may need to be specified


Instructions for Linux / macOS:

Run the following commands in a terminal window and note the response status code:

If you get an error in the console / terminal window with a message such as FORBIDDEN / ACCESS DENIED / PROXY ERROR / etc., then try the same commands providing the proxy details:

NOTE: Based on your network configuration, the -U / --proxy-user parameter may need to be specified
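If neither PowerShell nor curl is convenient, the same reachability check can be sketched in Java, the workshop's stack. The endpoint below is an assumption; verify the exact URLs with the organisers:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectivityCheck {
    // A 2xx / 3xx status code means the server is reachable.
    public static boolean isAcceptable(int status) {
        return status >= 200 && status < 400;
    }

    public static void main(String[] args) {
        // Hypothetical endpoint -- use the exact URLs shared by the organisers.
        String endpoint = "https://eyesapi.applitools.com";
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setInstanceFollowRedirects(false);
            int status = conn.getResponseCode();
            System.out.println(endpoint + " -> " + status
                    + (isAcceptable(status) ? " (OK)" : " (blocked -- add proxy details)"));
        } catch (Exception e) {
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```

A 2xx / 3xx response printed here corresponds to the success criterion above; anything else suggests proxy or firewall configuration is needed.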



If you are still getting an error response, then you will need to get the following URLs whitelisted on your network:


Machine Setup:

After you have ensured connectivity from your laptop to the Applitools server, follow the steps below to get your machine set up:

These steps are for Selenium-Java based Test Automation. If you are using any other combination, please contact [email protected] with specific details.

  1. Install JDK 1.8 or JDK 11 
  2. Based on the browser of your choice, download the corresponding browser driver for WebDriver
  3. Clone this git repo (https://github.com/anandbagmar/getting-started-with-visualtesting) on your laptop
  4. Open the cloned project in your IDE as a Maven project. This will automatically download all the dependencies
  5. Once all dependencies are downloaded, run the Selenium_HelloWorld_Base test directly from the IDE


Submitted 2 years ago

  • 45 Mins
    Case Study

    Very often we work on a code-base that was written by others, some time ago. This code-base could be product code, or Test Automation code.

    As the product's life increases, evolution of the code-base is a natural process. However, various catalysts speed up this evolution:

    • More features / tests to be added, including increased complexity
    • People writing the code evolve - their learning, skillset
    • Delivery pressure means correct implementation decisions may not always be taken; in other words, short-cuts may have been taken, leading to spaghetti code / architecture

    People move on to different roles, new people join the team. Each has different opinions, perspectives and experiences.

    I am sure there are more reasons you can think of.

    Regardless, the challenge for a new person who starts working on such a complex code-base is enormous - as the person needs to start delivering "value".

    In this session, I will share various examples and experiences from being in such situations, and the factors I looked at when enhancing the code-base to decide whether to refactor or rewrite the code-under-consideration, so as to move forward faster while working towards the long-term vision.

    Though I will focus on various examples of Test Automation, this session is applicable for any role that writes / maintains code of any nature.

  • Deepak Koul

    Deepak Koul - Taking biases into account : Why retrospectives promise more and deliver less

    20 Mins
    Experience Report

    Sprint retrospectives were designed to make the process of software development empirical. An approach where you can make mistakes but also reflect and learn from those mistakes.

    They are possibly the ‘A’ in Deming’s Wheel (Plan-Do-Check-Adjust), which served as the origin of iterative development methods. Unfortunately, that is not how modern retrospectives work. They are rife with boredom, failure to admit mistakes, and lack of follow-up even when two or three action items are identified.

    My interest in organizational behaviour and keen research links each of these problems to a cognitive bias.


    In this talk, I will list all of the biases that make retrospectives ineffective and ways in which we can mitigate them.

    For example, recency bias is the tendency to focus on the most recent time period instead of the entire time period. Having retrospectives at the end of a sprint, or maybe once a month, makes people forget most of the problems they faced or the mistakes they made early in the sprint.

    But how do we fix this?

    A radical idea: how about a custom field called “Lessons learned” on every ticket you work on? Everybody records their observations per ticket during the sprint, instead of waiting for the final retrospective.

    We can call them micro-retrospectives spread across the entire cycle that can be the fodder for the actual retro meeting.

    There are also other biases like sunk cost fallacy and halo effect that I am going to discuss in this session.

  • Vedavalli Kanala

    Vedavalli Kanala / Priyank Gupta - Your role is superfluous: Software delivery with skills-based, self organising teams

    45 Mins

    Traditional software delivery teams are layered with roles like user experience, project management, business analysts, developers, QAs, DevOps, etc. With the translation of business problems at multiple steps, each role induces a fitment drift in the devised solution. As part of this talk, we would like to present our experience from the last 7 years building products for clients delivered by teams with ZERO roles (no PM, BA, QA, DevOps). It presents the argument and evidence of why the notion of needing depth in every skill, every single time, is overkill, and how a small team of people who dabble in both code and product thinking can deliver solutions that are faster and better with minimum fitment drift. We outline the practices and rituals required to establish, operationalise and sustain skill-based teams. We also intend to discuss delivery objectives for software teams, and how teams that organise themselves around business objectives deliver better products compared to ones set up with superfluous roles for analysis, testing, management, etc.

  • 45 Mins

    Have you heard of “flaky tests”?

    There are a lot of articles, blog posts, podcasts, and conference talks about what “flaky tests” are and how to avoid them.

    Some of the ideas proposed are:

    • Automatically rerun failed tests a couple of times, and hope they pass
    • Automatically retry certain operations in the test (ex: retry click / checking for visibility of elements, etc.) and hope the test can proceed
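The first idea, blind reruns, can be sketched minimally (a hypothetical illustration of the anti-pattern, not recommended practice):

```java
import java.util.concurrent.Callable;

public class NaiveRetry {
    // The "rerun and hope" anti-pattern: blindly re-attempt a flaky step.
    // This hides the root cause (timing, data, locators) instead of fixing it.
    public static <T> T retry(Callable<T> step, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return step.call();
            } catch (Exception e) {
                last = e; // swallow the failure and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // A step that fails twice before "passing" -- typical flaky behaviour.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("flaky!");
            return "passed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The retry masks the flakiness: the test eventually "passes", but the underlying timing, data, or locator problem remains undiagnosed.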

    Unfortunately, I do not agree with the above ideas, and I would term these as anti-patterns for fixing flaky / intermittent tests.

    We need to understand the reasons for flaky / intermittent tests. Some of these reasons include:

    • timing issues (e.g. pages taking time to load)
    • network issues
    • browser-timing issues (different across browsers / devices)
    • data-related issues (dynamic, changing, validity, etc.)
    • poor locator strategy (e.g. brittle, hard-wired XPaths / locators)
    • environment issues
    • actual issue in the product-under-test

    In this session, with the help of demos, we will look at the following techniques you can use to reduce / eliminate the flakiness of your test execution:

    • Reduce number of UI tests
    • Use Visual Assertions instead of Functional Assertions
    • Remove external dependencies via Intelligent Virtualization

    The demo will be done using the following, and sample code will be shared with the participants

  • Pranjal Bathia

    Pranjal Bathia - Process and data flows - way to succeed in large scale initiatives !

    20 Mins

    Designing and planning for enterprise-scale initiatives is a tedious process, especially when the organization is big with 20k employees. Having different verticals to run different business functions like IT, marketing, finance, sales, engineering, etc. poses an additional challenge. Preparing for change that affects the whole organization impacts hundreds of systems and business processes.

    Fortunately, I had an opportunity to be a part of such an initiative. I would like to share how drafting data flow and process diagrams as per value streams helped us to articulate the current picture of different systems and processes. How it helped in identifying business-critical pain points and proposing solutions for those with ease. 

    In this session, you will also get a behind-the-scenes glimpse of Archimate and how I used it effectively in an enterprise-wide initiative.