"All tests in today's automated regression run have been marked as Untested. What happened?"

"No notifications are being sent for test runs on the channel"

"I pulled latest code, and the framework dependency shows compilation error"

"What does this new method in the framework do?"

How often do you hear such things within your team?

As Quality champions, we need to walk the talk. When we expect our developers to write quality code, write unit tests, and build features without introducing bugs, the onus is on us as test engineers to do the same. With almost every test engineering team writing automated tests to check the functionality of their products and services, it becomes very important to ensure that the test automation framework and the test scripts are bug-free and follow good software engineering standards.

It cannot be stressed enough that test automation code should be as good as production code. To build a production-quality test automation framework and scripts, a number of steps can be taken at two levels:

1. Code & System Level

2. Process & People Level

Our test engineering team went through a transition from having randomly and unexpectedly failing test runs to having greater confidence in the quality of the tests. Learn from this case study of our journey to ensure that end-to-end UI automated tests are built with quality in mind. We will also see demonstrations of some of the use cases.

Outline/Structure of the Case Study

  1. What is product quality? (1 minute)
  2. Goals of automated tests (1 minute)
  3. Challenges we faced earlier in building and running "productive" tests - tests that deliver value (1 minute)
  4. What is test automation quality? (1 minute)
  5. End-to-end UI test automation at Carousell - An overview (1 minute)
  6. Fixing challenges to build confidence in the quality of tests
    1. Solutions we implemented at code & system level
      1. Automated checks on Pull Requests with a focus on unit tests for the test framework (10 minutes)
      2. Separating framework from test code and maintaining framework as a separate entity (5 minutes)
      3. CI pipeline practices (5 minutes)
      4. Test environment for tests (2 minutes)
      5. Convenient test results reporting for faster debugging of test failures (2 minutes)
    2. Solutions we implemented at people & process level
      1. Code reviews (2 minutes)
      2. Good and secure coding principles (2 minutes)
      3. Pull Request checklist (2 minutes)
      4. Team communication and planning (2 minutes)
  7. Key takeaways and learnings (2 minutes)
  8. Q&A (5 minutes)
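
To give a flavour of item 6.1.1 above (unit tests for the test framework itself), here is a minimal sketch in Java. The `LocatorBuilder` helper and its checks are hypothetical, dependency-free stand-ins for a real framework class and its JUnit tests; the point is that framework utilities get the same unit-level safety net as production code.

```java
// Hypothetical framework helper: builds a CSS selector from a test id.
// Names here are illustrative, not an actual framework API.
public class LocatorBuilder {
    // Normalizes a raw id into a stable data-testid CSS selector.
    public static String byTestId(String rawId) {
        if (rawId == null || rawId.trim().isEmpty()) {
            throw new IllegalArgumentException("test id must not be blank");
        }
        return "[data-testid='" + rawId.trim() + "']";
    }
}
```

A pull-request check that runs such tests catches framework regressions before they surface as mysteriously failing UI test runs.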

Learning Outcome

You will learn:

  1. The importance of quality while building or maintaining an end-to-end test automation framework, and why your test code should be treated like production code.
  2. The various challenges and aspects of test quality to keep in mind while writing end-to-end UI tests and a test framework (especially for those who have just started their journey in writing automated tests).
  3. Tools and practices that will help you ensure the quality and reliability of your test code.

Target Audience

Any developer or test engineer who writes end-to-end UI automated tests. Our practices, techniques and tooling are built around a Java-based UI test automation framework, but the principles can be applied to any other UI test automation stack.

Prerequisites for Attendees

  • Knowledge of working with a test automation framework
Submitted 2 months ago

Public Feedback

  • Deepti Tomar  ~  3 weeks ago

    Hello Abhijeet,

    Thanks for your proposal!
    Could you please update the Outline/Structure section of your proposal with a time-wise breakup of how you plan to use 45 mins for the topics you've highlighted?

    And, to help the program committee understand your presentation style, can you provide a link to your past recording or record a small 1-2 mins trailer of your talk and share the link to the same? 

    Thanks!

    • Abhijeet Vaikar  ~  2 weeks ago

      Hello Deepti,

      Thank you for taking time to review my proposal :)

      1. Yes I will update the outline/structure by today.

      2. Yes I will share a link to one of my previous talks. 

      • Deepti Tomar  ~  2 weeks ago

        Hello Abhijeet,

        Sure, Thanks for your message! We look forward to the updates.

        Thanks!

        • Abhijeet Vaikar  ~  2 weeks ago

          Hello Deepti,

          I have updated the outline structure with time slots per section. IMO, to do full justice to the session I may require more than 45 minutes, but for a case study there was only an option of a maximum of 45 minutes. What do you advise?

          I have added a link to preview videos of my previous talk as well.

          Thanks!

          • Pallavi R Sharma  ~  1 week ago

            Hi Abhijeet

            Since the talk is aimed at intermediate knowledge holders, I would suggest the following time breakdown of 10 mins:

            1. What is product quality? (2 minutes)
            2. Goals of automated tests (2 minutes)
            3. Challenges faced earlier in building and running "productive" tests - tests that deliver value (2 minutes)
            4. What is test automation quality? (2 minutes)
            5. End-to-end UI test automation at Carousell - An overview (2 minutes)

            to be reconsidered if possible at your end; please aim to finish the talk within the duration while also providing time for questions and answers.

            I hope you will look at it and consider the suggestion; let me know if any help is required here.

            • Abhijeet Vaikar  ~  1 week ago

              Hello Pallavi,

              Just to confirm, do you mean that instead of spending 10 minutes on those points, I can consider shortening the time there and substituting it for other sections like Q&A?

               

              Thanks,

              Abhijeet

              • Pallavi R Sharma  ~  1 day ago

                The earlier breakdown was exceeding 45 mins; the one you have now, I believe, fits the 45-min window.

                This is what the committee was looking at: the time needed to be reconsidered so as not to exceed 45 mins.

            • Abhijeet Vaikar  ~  2 days ago

              Hello Pallavi,

              I have updated the breakdown accordingly. 

              Thanks for your inputs!

  • Pallavi R Sharma  ~  1 month ago

    Hi Abhijeet

    The Google slides you have shared require access, for which I have sent a request; please provide it. I have a few questions on your proposal:

    a. How is your talk different from one that discusses best practices at the code level in terms of design patterns and handling objects, data, logs and results while implementing a Selenium framework?

    b. Will the end-to-end test automation framework you applied at your organization be showcased to the audience? Will the code be made available to the public later, or is it proprietary to your org?

    c. Is the code your team built now used widely across projects in your org?

    Thanks

    Pallavi

    • Abhijeet Vaikar  ~  3 weeks ago

      To add on to my previous reply, one of the key practices that I want to share in this conference is the practice of writing unit tests for your test framework. This is something that we religiously follow at my organisation.

      • Pallavi R Sharma  ~  3 weeks ago

        I understand that, and this is a very impressive practice. Great slides even! All the best with your proposal Abhijeet. 

    • Abhijeet Vaikar  ~  3 weeks ago

      Hello Pallavi,

      Thank you for your inputs!

      I have updated the link to my presentation to another source so that you can check without permissions. Can you please check again?

      For your questions: the talk walks through a case study of the processes, practices & tooling we established at Carousell to improve the quality & reliability of the e2e test automation framework and scripts in a CI-driven environment. So it covers much more than design patterns and log files.

      Happy to discuss more!

      • Pallavi R Sharma  ~  3 weeks ago

        Thanks for providing access to the slides. 


  • Liked Shweta Sharma

    Shweta Sharma / Nikita Jain - Accessibility testing 101

    45 Mins
    Talk
    Beginner

    "This world is such a beautiful place to live in." If you can read the first sentence without any screen readers or assistance, you're privileged. As technologists, shouldn't we be more empathetic towards differently-abled people and make all parts of our websites accessible to them? In my humble opinion, the true power of technology shows when it reaches people of all kinds with different physical or psychological challenges. We are not only legally bound to provide accessibility; it should also be considered our moral responsibility.

    As testers, we have a wonderful opportunity to contribute to accessibility by ensuring that the site is accessible in many different ways. Although it is impossible to identify all the accessibility issues that exist in the world, we are lucky enough to still understand a majority of them. With this understanding, many measures have been taken to make your site accessible. But don't forget: we are QA engineers. We have to ensure that the site is accessible per the standards set by WCAG 2.0 (AA) by testing for accessibility using various tools and techniques.

  • Liked Rajdeep varma

    Rajdeep varma - The Joy Of Green Builds - Running Tests Smartly

    45 Mins
    Talk
    Intermediate

    So you have got a few UI tests and they are running in parallel, great! However, life will not be so sweet once 'a few' turns into 'a lot'. We grew from a few to 1500 UI tests (although we are not particularly proud of this number, there are situations and reasons).

    We started with a simple parallel distribution of tests 3 years ago. As the test count increased, the failure count and run time increased along with the number of flaky tests. Mobile tests had their own challenges (e.g. devices dropping off, random Wi-Fi issues, etc.). To keep up with this, we created a queue-and-workers based solution which could distribute the tests more efficiently (https://github.com/badoo/parallel_cucumber). Over time, we made more improvements, in particular:

    • Segregating failures based on infrastructure issues and re-queuing the tests
    • Rescuing tests to another device if a device/emulator malfunctions
    • Repeating a single test on 100s of workers in parallel to detect flakiness
    • Repeating a test if a known network issue occurs
    • Terminating the build early if more than a certain number of tests have failed
    • Health-checking each device before each test to ensure reliability
    • Muting a test if its failure is known, and highlighting outdated mutes once the related task is fixed

    In this talk, I will cover the initial challenges of running UI tests in parallel (Selenium and Appium), how we approached the queue-based solution and its continuous improvement, and finally how attendees can use it at their workplace or create their own solution based on our learnings.
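
The core of the queue-and-workers idea can be sketched in a few lines of plain Java (an illustration of the distribution model only, not the parallel_cucumber project linked above, which is a Ruby implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: tests sit in one shared queue; each worker (a device or browser
// session) pulls the next test when it becomes free, so faster workers
// naturally pick up more tests than slower ones.
public class TestQueueRunner {
    public static List<String> run(List<String> tests, int workers) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(tests);
        Queue<String> finished = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                String test;
                while ((test = queue.poll()) != null) {
                    // A real runner would execute the test on a device here,
                    // and re-queue it on an infrastructure failure.
                    finished.add(test);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new ArrayList<>(finished);
    }
}
```

Compared with statically splitting the suite into fixed chunks, the shared queue self-balances: a slow or flaky device simply takes fewer tests.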

  • Liked Rabimba Karanjai

    Rabimba Karanjai - Testing Web Mixed Reality Applications: What you need to know for VR and AR

    Rabimba Karanjai, Researcher, Mozilla  ~  1 month ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    There were already over 200 million users consuming VR applications by 2018. And with Google and Mozilla pushing WebXR capabilities in the browser, and vendors like BBC, Amnesty International, Universal, Disney and Lenskart adopting them on their websites, we will soon see a huge rise in demand for WebVR and Mixed Reality applications.

    But how do you test them in scale? How do you define "smooth" as opposed to just responsive?

    In this talk I will go over some key details of the WebXR specification, the work that Mozilla, Google and the W3C Immersive Web Group are doing, the differences between testing a regular web page and a Mixed Reality-enabled one, what to watch for, and how you can automate it.

  • Liked Krishnan Mahadevan

    Krishnan Mahadevan - My experiments with Grid

    45 Mins
    Tutorial
    Intermediate

    Everyone starts off with a simple grid setup which involves a hub and one or more nodes.

    This traditional setup is a good start, but the moment one gets serious with the Selenium Grid and decides to house their own grid for local executions, that is when the issues start.

    My experience with the Selenium Grid over the past couple of years has introduced me to some of the most prevalent problems with maintaining an in-house Selenium Grid:

    • Nodes get unhooked randomly due to network glitches.
    • Nodes introduce false failures due to memory leaks.
    • Selenium Grid running out of capacity.
    • Nodes require OS upgrades/patches etc.
    • Needing to deal with auto-upgrades by browsers (especially Chrome and Firefox)

    I managed to fix some of these issues by building a "Self-Healing" Grid wherein the nodes automatically get restarted after they have serviced "n" tests. But that still didn't solve many of the other problems.
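
The restart-after-n bookkeeping behind such a "Self-Healing" Grid can be sketched as a tiny class. This is illustrative only; in a real grid the logic would hang off a custom proxy or node servlet, and the names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "self-healing" bookkeeping: count tests serviced per node and
// report when a node is due for a restart. Illustrative names only.
public class NodeRecycler {
    private final int maxTestsPerNode;
    private final Map<String, Integer> serviced = new HashMap<>();

    public NodeRecycler(int maxTestsPerNode) {
        this.maxTestsPerNode = maxTestsPerNode;
    }

    // Called after a node finishes a test; returns true when the node
    // should be restarted, resetting its counter.
    public boolean recordTestAndCheck(String nodeId) {
        int count = serviced.merge(nodeId, 1, Integer::sum);
        if (count >= maxTestsPerNode) {
            serviced.put(nodeId, 0); // restart scheduled, start counting afresh
            return true;
        }
        return false;
    }
}
```

Recycling nodes on a fixed cadence sidesteps slow leaks (browser memory, orphaned driver processes) before they cause false failures.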

    That was when I wondered: what if there were an on-demand Selenium Grid?

    What if the Grid could do the following?

    • The Grid auto scales itself in terms of the nodes based on the current load.
    • The Grid does not require a lot of infrastructure to support it.
    • The Grid can plug itself into some of the cloud providers or leverage a solution such as Docker so that the nodes can be spun and shutdown at will.

    That was how the idea of "Just Ask" an on-demand grid was born.

    Just-Ask is an on-demand grid. It has no nodes attached to it.

    It’s designed to spin off nodes on demand, run tests against the newly spun-off node, and clean up the node after the test runs to completion. The node can be backed by anything: it could be Docker, or a VM running on any of the popular clouds.

    The session aspires to walk the audience through my experiments with the Selenium Grid, my learnings about Selenium Grid internals, and how I used all of that knowledge to build my own on-demand Selenium Grid. What better avenue to share these learnings than a Selenium Conference?

    The session will introduce the audience to the grid internals and their concepts such as

    • What is a Selenium Remote Proxy? What is it used for? What can you do with it?
    • What is a Hub (or) Node level Servlet? When would you need one?
    • All of this followed by a quick demo on "Just Ask", the on-demand grid that I have built and open sourced here: https://github.com/rationaleEmotions/just-ask

  • Liked Amit Rawat

    Amit Rawat - Is Puppeteer better than Selenium

    45 Mins
    Demonstration
    Intermediate

    Puppeteer is a Node.js library (developed by the Google Chrome team) to control Chrome and Firefox, and it is getting a lot of traction recently because of its amazing capabilities. It has already become so popular that it has 50K+ stars on GitHub against Selenium's 15K+ stars.

    At the last Google I/O event, this tool's capabilities were showcased, and it has been perceived as the next-generation web test automation tool.

    Is Puppeteer better than Selenium? The answer is 'No', and I will cover 'why' in detail during this talk. I will show some live examples to demonstrate that Selenium can also do all those advance things which Puppeteer promises to do.

  • Liked Babu Narayanan Manickam

    Babu Narayanan Manickam - Deep Learning Based Selenium Test Failure-triage Classification Systems

    45 Mins
    Talk
    Intermediate

    Problem Statement:

    While running thousands of automated test scripts on every nightly test schedule, we see a mix of passing and failing test results. The problem begins when there is a heap of failed tests: we were caught in the test-automation trap, unable to complete the test-failure triage from a preceding automated test run before the next testable build was released.

    Deep Learning Model:

    The classification was achieved by introducing Machine Learning in the initial months, followed by Deep Learning algorithms, into our Selenium and Appium automation tests. Our major classification was based on the failed test cases: Script, Data, Environment, and Application Under Test, which internally had hundreds of sub-classifications.

    To overcome this problem, we started to build and train an AI using Deep Learning, which simulates a human categorizing the test result suite. Based on the failure of each test, the AI model predicts an outcome through an API, categorizes it, and prioritizes it on a scale of 0 to 1. Based on the prediction, the algorithm takes appropriate actions on the failed test cases, such as re-running the test or running it with different capabilities. We kick-started this by gathering a historical data set of 1.6 million records, collected over a 12-month period, including the behavior of the test case execution and the resulting suite.

    This Deep Learning-based algorithm can break down new defects by category, and a classification score is given on a scale of 0-1. We’ve also established a cutoff threshold based on its accuracy, and we group the failed test cases based on similarity. Classification of the test cases is done at high granularity for sophisticated analysis, and our statistical report states that classification accuracy has increased to 87% over a year. The system is built on feedback-adapting models: for each right classification it is rewarded, and for a wrong one a penalty is given. Whenever it receives a penalty, the system automatically enhances itself for the next execution.

    The algorithm has a powerful model for detecting false-positive test results calculated using the snapshot comparisons, test steps count, script execution time and the log messages. Also, the model has been built with other features like – duplicate failure detection, re-try algorithms and defect logging API, etc.

    The entire classification system has been packaged and deployed in the cloud where it can be used as a REST service. The application has been built with its own reinforcement learning where it uses the classification score to enhance itself and this is programmed to perform in an inconclusive range.

    In sum, this deep learning solution can assist all Selenium testers in classifying their test results in no time, can assist in taking next steps automatically, and allows the team to focus its efforts on new test failures.

    Link: https://github.com/testleaf-software/reinforced-selenium-test-execution

  • Liked Lavanya Mohan

    Lavanya Mohan / Priyank Shah - Analytics - Insights from unsaid customer feedback

    45 Mins
    Talk
    Beginner

    Are we investing our efforts in building things that actually matter? Is the new feature that we rolled out adding value to the customer? Is the new release doing better than the previous releases? How do we get answers to these questions and more? Analytics is our answer!

    Analytics information helps not just the business teams but also QAs, Devs, PMs and other members of the project in multiple different ways. It could help us uncover some critical issues, it could help us understand customer sentiments better, it could help us get a broader picture of how the customer actually uses the product and whether it was how we intended it to be, it can help us get ideas about what small or large changes the customers are looking for without them having to explicitly tell us.

    Analytics is important information to us. So, it is also critical that the information is correct. That means analytics information produced also needs to be tested and validated.

    This talk is intended to understand the testing of analytics events and why they are important to us.

    In this talk, we will cover our experience of how analytics information helped us understand our customers better and invest our time in building the right things. We will also cover how we validated it to ensure that the data that we were seeing was actually correct. In addition to this, we will also briefly cover some details about other sources of information that can be looked at if we are working in a mobile world.

    Please note: We’re open to tune the proposal based on feedback

  • Liked Smita Mishra

    Smita Mishra - Vision Boards - Project your goals

    Smita Mishra, CEO, QAZone Infosystems  ~  1 month ago
    Sold Out!
    20 Mins
    Talk
    Intermediate

    How do teams share their understanding of common goals? It is either audio or visual. Recording each talk and storing them (tagged) is not the most effective way to share common knowledge. Sketching is not new to agile teams; we are taking it a step forward in the form of Vision Boards. A Vision Board is a creative visualization of your goals. While our focus in this talk remains on how teams could use the board, individuals use these to turn their life goals into reality. There are pictures or sketches of what they want, all pasted together on one board, so they constantly remind themselves of their ultimate goals in the bigger scheme of things. These goals may not be achievable with one task; they may need a series of tasks which do not directly seem connected with the goal. But these captured visualizations are very good indicators of what success means to one.

    We used Vision Boards to visualize our customer experience, their reactions and expected patterns of use for our application. This board single-handedly kept all our teams aligned, and as many changes happened, the teams knew their true north when discussing how to design the screens and which features to build first. Our already agile teams were constantly looking at the short-term goals of prioritized features, but the vision board helped them reduce chaos and clutter and saved a lot of time in understanding the overall requirement. It also served as the basis for user stories.

  • Liked Gaurav Singh

    Gaurav Singh - How to build an automation framework with selenium : patterns and practices

    Gaurav Singh, Product Engineer, Go-Jek  ~  3 months ago
    Sold Out!
    45 Mins
    Talk
    Beginner

    With an ever-increasing number of businesses being conducted on the web, the need to write automated tests for an app's UI is something that can never be ignored. As you all know, Selenium provides an API that enables us to do this quite effectively.

    However, when tasked with setting up the automation framework, there are a lot of questions that arise in the minds of aspiring test developers regardless of what level they are in their career.

    Some of these questions are:

    1. How does one actually go about the business of building a robust and effective automation framework on top of selenium?
    2. What are the elementary building blocks to include in the framework that an aspiring automation developer should know of?
    3. How should we model our tests? XUnit style vs BDD?
    4. Are there good practices, sensible design patterns and abstractions that we can follow in our code?
    5. What are some of the anti-patterns/common mistakes we should avoid?

    A lot of literature, documentation and blogs exists on these topics on the web already.

    In this talk, however, I will combine this existing knowledge with my years of experience in building automation frameworks, break down these elements, and walk you through exactly the sort of decisions, considerations and practices that you can apply while starting to implement or improve the UI automation for your team.

    Hope to see you there!

  • Liked Syam Sasi

    Syam Sasi - When ansible meets selenium grid - Story of building a stable local iOS simulator farm

    Syam Sasi, Quality Engineer, Carousell  ~  2 months ago
    Sold Out!
    45 Mins
    Case Study
    Beginner

    Running parallel automation tests for iOS apps is always tricky, since iOS simulators need Apple hardware. There are many Docker-based solutions available for building a local device lab, but they fail when it comes to Apple iOS simulators.

    The mighty Selenium Grid is a good choice for the above problem, but configuring the grid and nodes is a tedious task if you want to scale up the infrastructure.

    In this talk, I will explain:

    • How Ansible scripts helped to set up the Selenium Grid and node configuration in 60 seconds.
    • How to customise the Selenium Grid to make it more stable.

  • Liked Vijay Ravindran

    Vijay Ravindran - Automation on Unity Engine applications - Ways to automate Unity Game Engine applications using Unity Test Runner and AutoPlay, Selenium tools.

    Vijay Ravindran, Sr. QAE, Trimble  ~  2 months ago
    Sold Out!
    45 Mins
    Tutorial
    Beginner

    The Unity 3D game engine is used for game development and enterprise application development on multiple platforms, compatible across devices. It is an excellent cross-platform development tool, especially used for next-gen technologies like augmented and virtual reality applications.

    While everyone is familiar with tools like Appium and Selenium for mobile application automation, it is quite a big challenge, and uncertain, when it comes to automating a mobile application built with the Unity engine. Here we will discuss solutions for automating Unity-built applications using the following methods:

    1. Using Unity Test Runner - which ships with Unity, using the [UnityTest] attribute
    2. Using AutoPlay, Selenium - Similar to web testing with inspector and web driver protocol support.

    Method 1 : Unity Test Runner

    • The Unity Test Runner is a tool that tests your code in both Edit mode and Play mode, and also on target platforms such as Standalone, Android, or iOS.
    • The Unity Test Runner uses a Unity integration of the NUnit library, which is an open-source unit testing library for .Net languages.
    • [UnityTest Attribute] - Addition to the standard NUnit library for the Unity Test Runner.

    Method 2: Using AutoPlay and Selenium

    • Inspect game scene
    • Manage game on real devices (install / uninstall, start / stop, etc)
    • Run Selenium tests (with all base selenium actions like click, getText, swipe, get elements property, etc)
    • Write test on any programming language (Java, C#, python, etc)

    Looking forward to meeting you all at SeleniumConf 2020!

  • Liked Martin Schneider

    Martin Schneider / Prabhagharan D K - Building and scaling a virtual Android and iOS device lab

    45 Mins
    Case Study
    Intermediate

    Virtual mobile devices (emulators/simulators) are a cost-effective and straightforward alternative to testing on physical devices. We showcase how to set up and scale an Android emulator farm using Appium, Docker and SQS, and how it fits into our larger testing and quality strategy.
    Maintaining physical test devices for mobile automation can be expensive and time-consuming. On top of the initial investment, you need to consider maintenance cost, replacement devices and efforts for manual scaling. On the other side of the spectrum, cloud providers take care of these restrictions, but their services can come at a hefty price tag, especially when your use-case requires a large number of devices. We present a middle path and demonstrate how to use virtual devices to build a reliable and scalable in-house device lab using Docker and Appium.

  • Liked Kushan Amarasiri

    Kushan Amarasiri - Making test automation with Selenium awesome with xPath Generator

    45 Mins
    Demonstration
    Beginner

    XPath Generator is a free and open-source API developed in Java. It helps test automation enthusiasts capture XPaths and other Selenium locators for any given web URL. It generates an optimised XPath and shows how it was derived.
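
The kind of derivation such a tool performs can be sketched as follows. This is a simplified illustration under assumed rules, not the XPath Generator API itself: prefer a unique attribute like id, otherwise chain the remaining attributes as predicates.

```java
import java.util.Map;

// Simplified sketch of deriving an XPath from a tag name and its attributes.
// Illustrative only: prefers the (usually unique) id attribute, otherwise
// chains all attributes as predicates.
public class XPathSketch {
    public static String derive(String tag, Map<String, String> attrs) {
        if (attrs.containsKey("id")) {
            // id is typically unique, so it yields the shortest stable XPath.
            return "//" + tag + "[@id='" + attrs.get("id") + "']";
        }
        StringBuilder xpath = new StringBuilder("//" + tag);
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            xpath.append("[@").append(e.getKey()).append("='")
                 .append(e.getValue()).append("']");
        }
        return xpath.toString();
    }
}
```

A real generator would additionally verify against the live DOM that the derived expression matches exactly one element.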

  • Liked Khanh Do

    Khanh Do - Leveraging Artificial Intelligence to create self-healing tests

    Khanh Do, QA Architect, Kobiton  ~  3 months ago
    Sold Out!
    45 Mins
    Tutorial
    Intermediate

    A key requirement for successful test automation is to get past the brittle or fragile nature of test scripts. Any Selenium (or Appium) developer has encountered the dreaded "NoSuchElement Exception". A locator that worked yesterday may fail today. What's a test engineer to do?

    Fortunately the field of AI provides promising solutions and allows for the creation of self-healing tests. Tests that can find elements across all environments. Tests that can learn from "human-in-the-loop" intervention and work perfectly thereafter. Imagine automated tests that "just work"!

    This session will look at how to apply the latest in AI and Machine Learning technologies to improve your test scripts. With the plethora of new open source AI libraries made available by companies such as Google, the ability to leverage AI in your applications is more accessible than ever.

    This session will be a primer on AI technologies and how they can be utilized for perfect test automation.

  • Liked Praveen Umanath

    Praveen Umanath - State-of-the-art test setups: How do the best of the best test?

    20 Mins
    Talk
    Intermediate

    The best engineering teams are releasing code hundreds of times in a day. This is supported by a test setup that is not just fast, but robust and accurate at the same time.

    We look at data (anonymized) from millions of tests running on BrowserStack, to figure out the very best test setups. We also analyze the testing behavior of these companies—how do they test, how frequently do they test, how many device-browser-OS combinations do they test on. Do they gain speed by running more parallels or leaner test setups?

    Finally, we see how these steps help these teams to test faster and release continuously, and how it ties-in to the larger engineering strategy.

  • Liked Shi Ling Tai

    Shi Ling Tai - Accessibility and Testability - two sides of the same coin

    Shi Ling Tai, CEO, UI-licious  ~  2 months ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    What makes a UI testable?

    If you've ever struggled with testing icon buttons, sliders or datepickers, you know what I mean. Those things are a pain to test, and the tests you've written are probably not very maintainable either.

    Here's an argument for making UI more accessible: it means your UI is more testable too. Let me show you what I mean.

  • Liked Shi Ling Tai

    Shi Ling Tai - Start with the scariest feature - how to prioritise what to test

    Shi Ling Tai, CEO, UI-licious  ~  2 months ago
    Sold Out!
    20 Mins
    Talk
    Beginner

    It can be intimidating for inexperienced teams embarking on their test automation journey for an existing code base. There is so much to test, and so many ways to test. I often see teams stuck debating where to start, what tools to use, and best practices:

    "We should start from unit tests"

    "No, integration tests are better!"

    "Should we use tool A or tool B?"

    I see this play out all the time, and I've been there before. And the worst that could happen is decision paralysis and inaction.

    The bigger question really is "What to test?".

    My rule of thumb is "Start with the scariest code". I'll share with you my framework for evaluating the ROI of writing a test for a feature and prioritising what to test.