
Pre-Conf Workshop

Thu, Jun 28
Timezone: Asia/Kolkata (IST)
09:30

    Registration - 30 mins

10:00

Day 1

Fri, Jun 29
08:30

    Registration - 30 mins

10:00

    Welcome Address - 15 mins

10:15

    Coffee/Tea Break - 15 mins

10:30
11:30
12:30
  • Manoj Chiruvella - Smart Test Failure Analysis with ELK (Elasticsearch, Logstash & Kibana)

    12:30 - 12:50 PM IST | Kalinga Hall 2 | 93 Interested

    In the contemporary test automation world, we run thousands of tests every day. Even though most of our test cases are reliable and stable, debugging failures consumes a lot of time if not handled appropriately. This is an effort to reduce failure analysis and come up with a probable root cause for each and every failure.

    In this talk, we will explore all the approaches we have taken to drastically reduce the time spent debugging failures, letting you concentrate on adding new tests. We will also talk about the approaches that initially did not yield results for us, and why. We will look into building a failure dashboard, driven by pattern-based classification of logs (both server and automation-tool logs), with reporting.
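
    As a rough illustration of the pattern-based classification the talk describes, here is a minimal sketch in Java, assuming a set of hand-curated failure signatures (the patterns and labels below are illustrative, not the speaker's actual rules):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    // Sketch: match a failed test's log text against known signatures to
    // suggest a probable root cause; unmatched failures fall through to
    // manual triage (and become candidates for new patterns).
    public class FailureClassifier {

        private static final Map<Pattern, String> SIGNATURES = new LinkedHashMap<>();
        static {
            SIGNATURES.put(Pattern.compile("NoSuchElementException|ElementNotVisible"),
                    "Locator/UI change");
            SIGNATURES.put(Pattern.compile("TimeoutException|Read timed out"),
                    "Environment slowness");
            SIGNATURES.put(Pattern.compile("HTTP 5\\d\\d|Internal Server Error"),
                    "Server-side failure");
        }

        public static String classify(String failureLog) {
            for (Map.Entry<Pattern, String> entry : SIGNATURES.entrySet()) {
                if (entry.getKey().matcher(failureLog).find()) {
                    return entry.getValue();
                }
            }
            return "Unclassified";
        }
    }

    Each label, together with the raw log line, would then be indexed into Elasticsearch (for example via Logstash) so that Kibana can aggregate failures by probable root cause.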

  • 12:30 - 12:50 PM IST | Lalit 1 | 70 Interested

    Build, deploy, and test is a cycle we are all familiar with, commonly termed continuous integration in our humongous world of software products.

    Each build has new code commits, fresh changes in UI/functionality, and sometimes even altered legacy code to accommodate changes, yet we always execute the same old regression tests with the same old stale data. Isn’t it unfair?

    Not arguing that regression must run on the same static data, but what if your test data were equally well framed to match your regression-testing requirements, yet unique in each run, with about 40% better chances of finding new bugs?

    Eyes wide open?

    Yes, you heard it. At my workplace we ran an experiment that led to the creation of “Continuous Test Data Generators”, and today we execute 1000+ test cases on fresh data; it has helped find regression bugs as well as bugs in new functionality, with very little or no change to scripts or data drivers.

    In this session, I will be showcasing:

    — How and why you need continuously generated fresh test data for your daily, nightly, or even smoke tests, and how to create such generators in a few simple steps.

    — How this test data helps find bugs and keeps your test environment fresh and lively with new data, so it resembles the production data you never get to see in test environments.

    CTDG is a conglomeration of automated test data generation and back-end data injection, combined to achieve much more speed and accuracy.

    The test data generated is goal-oriented and path-wise, so none of it is raw data; only some human intervention is needed with respect to the application under test.
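
    To make the idea concrete, here is a minimal sketch, in Java, of what such a generator might look like; the field names and value ranges are illustrative assumptions, not the speaker's actual implementation:

    import java.time.LocalDate;
    import java.util.UUID;
    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: every call yields a structurally valid but unique record, so
    // each regression run executes on fresh data instead of a stale sheet.
    public class CustomerDataGenerator {

        public record Customer(String id, String email, LocalDate dateOfBirth, int creditLimit) {}

        public static Customer freshCustomer() {
            String unique = UUID.randomUUID().toString().substring(0, 8);
            ThreadLocalRandom rnd = ThreadLocalRandom.current();
            return new Customer(
                    "CUST-" + unique,
                    "user+" + unique + "@example.com",               // unique yet valid email
                    LocalDate.now().minusYears(rnd.nextInt(18, 80)), // always an adult customer
                    rnd.nextInt(1_000, 50_000));                     // plausible credit limit
        }
    }

    A data-driven test would call freshCustomer() instead of reading a static spreadsheet row, so every nightly run exercises the same paths with new values.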

13:00

    Lunch Break @ Pool Side - 60 mins

14:00
15:00
15:45

    Coffee/Tea Break - 15 mins

16:00
  • Srinivasan Sekar / Sai Krishna - Code Once Test Anywhere: On Demand Private Appium Device Cloud using ATD

    04:00 - 04:45 PM IST | Kalinga Hall 2 | 109 Interested

    Mobile test automation is becoming increasingly important. Almost all web applications are responsive these days, and it is essential to test how an application works across devices; the same is true of native applications. At the same time, the range of devices and custom OS versions is vast. This makes it hard for a tester to manually trigger automated tests across a list of devices to get device coverage and quick results for every feature in development.

    We came up with a solution for executing tests in a distributed or parallel fashion across remote devices from anywhere in the network, using Appium Test Distribution (ATD). The same framework is officially used by Appium members for beta testing Appium.

    USPs of ATD over other market solutions:

    • Device Cloud:
      • Set up devices anywhere within a network; ATD executes remotely without a Grid
      • Never worry about device location in the network
    • Plug and Play:
      • Connect your Android/iOS devices or emulators/simulators and just execute tests
    • Multiple Test Runners:
      • TestNG and Cucumber
    • Parallel Test Execution:
      • Runs across all connected iOS and Android real devices and simulators/emulators
    • Test Coverage:
      • Parallel (run the entire suite across all devices, which gives device coverage)
      • Distribute (run tests across devices to get faster feedback)
    • Device Management:
      • Manage devices remotely using Device Manager
    • Reporting:
      • Detailed crash logs from Android and iOS
      • Appium server logs
      • Screenshots on failure and on-demand video logs
      • Reporting trends across multiple builds
    • Manual access to remote devices - OpenSTF support

    Who loves/uses ATD?

    ThoughtWorks, CeX, Jio, TravelStart, M800, Reward Gateway, and many more.
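
    ATD's own configuration is best taken from its documentation, but the underlying idea can be sketched in a few lines of Java with the standard Appium client: each worker thread binds to a different remote Appium server/device, so one suite fans out across the device cloud. The URL, device UDID, and app path below are placeholders, and newer Appium clients may require the "appium:" capability prefix:

    import io.appium.java_client.android.AndroidDriver;
    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;

    // Sketch: create a session against one specific remote device. A runner
    // (like ATD) would call this per thread with a different server/UDID.
    public class RemoteDeviceSession {

        public static AndroidDriver createSession(String appiumUrl, String udid) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("udid", udid);            // bind to one device
            caps.setCapability("automationName", "UiAutomator2");
            caps.setCapability("app", "/apps/demo.apk"); // hypothetical app path
            return new AndroidDriver(new URL(appiumUrl), caps);
        }
    }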

  • Irfan Ahmad - Testing as a Chat: Test anything, from anywhere, as a chat conversation

    04:00 - 04:45 PM IST | Lalit 1 | 76 Interested

    Recently, the DevOps movement has given rise to a need to maintain visibility across teams. How will your testing team keep pace with this change in future?

    Chat-based testing essentially extends ChatOps to testing: it keeps the people involved in software development more connected and facilitates conversation-driven development. It allows you to aggregate information about processes, discussions, QA, and testing. This flow improves the delivery of project-status information to all members of the team. It also lets you present, demonstrate, and reproduce an issue to other teams, so it can be fixed before it reaches users or customers, with minimal disruption to the delivery pipeline.
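
    As a minimal sketch of the feedback half of this loop, the snippet below (Java 11+) posts a test-run summary to a chat channel through an incoming webhook; the webhook URL and JSON shape are illustrative assumptions, since most chat tools accept a similar payload:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: push a run summary into team chat so testing status lives in
    // the same conversation as development and operations.
    public class ChatNotifier {

        private static final String WEBHOOK = "https://chat.example.com/hooks/test-reports";

        public static void postSummary(int passed, int failed) throws Exception {
            String body = String.format(
                    "{\"text\": \"Regression finished: %d passed, %d failed\"}", passed, failed);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(WEBHOOK))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Chat responded: " + response.statusCode());
        }
    }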

  • Lightning Talks

    04:00 - 04:45 PM IST | Lalit 3 | 28 Interested
17:00
18:00
19:00
19:45

    Evening Reception, Dinner and Networking @ Grand Ball Room - 105 mins

Day 2

Sat, Jun 30
08:30

    Registration - 30 mins

09:00
  • Maaret Pyhajarvi - Intersection of Automation and Exploratory Testing

    09:00 - 09:45 AM IST | Kalinga Hall 2 | 154 Interested

    I’m using exploratory testing to design which tests I leave behind as automated. Creating automation forces me to explore details in a natural way. When an automated test fails, it is an invitation to explore. The two sides of testing, automation and exploration, complement each other, intertwining considerations of what is best for today and for the future.

    For great testing bringing value now as well as when we are not around, we need to be great at testing - uncovering relevant information - and programming - building maintainable test systems. At the core of all this is learning. With our industry doubling in size every five years, half of us have less than five years of experience. We all start somewhere on our learning journey.

    In this talk, we look at the skills-focused path to better testing at the intersection of automation and exploratory testing. We can arrive at the intersection by enhancing our individual skills or our collaboration skills. What could you do to become one of those sought-after testers who work well at the intersection, giving up the false dichotomy?

10:00

    Welcome Address - 15 mins

10:15

    Coffee/Tea Break - 15 mins

10:30
  • 10:30 - 11:15 AM IST | Kalinga Hall 2 | 144 Interested

    When it comes to most things in life, people tend to think more is better. But does this maxim hold true for automated testing? Should you test every possible browser/OS combination with every functional workflow because an executive thinks it’s a good idea? Does this mean you need to build the biggest Selenium grid you can, to test on as many OS and browser combinations as possible? Or maybe even leverage a third-party infrastructure solution?

    Testing on as many platforms as possible may not always be the best approach to test execution, even though it may seem that way at first. The best approach is to test strategically, not to test indiscriminately just because you can. Sometimes this means going big and testing at a massively parallel scale; other times it does not.

    When considering how much to parallelize your tests, there are many things to think about, including how well your framework supports parallel execution, how robust your execution environment is, and how much load your non-prod environments can handle. All of these factors will impact how to parallelize your tests.

    I will discuss how to determine the optimal degree of test-execution parallelization, both in terms of the considerations and tradeoffs to make and in terms of how to implement parallelized testing in common frameworks.

    Topics that I will cover include:

    • Use of Google Analytics and other site data to drive platform testing needs
    • Test structure & framework choice and their impact running tests in parallel
    • Situations when massively parallel testing is appropriate & pitfalls of over-parallelization
    • Determining best approaches and coverage models for unit, smoke, integration, and regression testing.
    • A brief demonstration of parallelization approaches in several common frameworks, taking theory and putting it into action
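
    As one concrete example of the knobs involved, here is a minimal TestNG sketch in Java: the thread count is an explicit parameter you size to your grid capacity and to the load your non-prod environment can absorb (the test class name is a placeholder):

    import java.util.List;
    import org.testng.TestNG;
    import org.testng.xml.XmlClass;
    import org.testng.xml.XmlSuite;
    import org.testng.xml.XmlTest;

    // Sketch: build a parallel suite programmatically instead of editing
    // testng.xml, making parallelism a tunable, reviewable setting.
    public class ParallelSuiteRunner {

        public static void main(String[] args) {
            XmlSuite suite = new XmlSuite();
            suite.setName("RegressionSuite");
            suite.setParallel(XmlSuite.ParallelMode.METHODS); // run @Test methods concurrently
            suite.setThreadCount(8);                          // size to environment capacity

            XmlTest test = new XmlTest(suite);
            test.setName("ChromeTests");
            test.setXmlClasses(List.of(new XmlClass("com.example.tests.CheckoutTest")));

            TestNG testng = new TestNG();
            testng.setXmlSuites(List.of(suite));
            testng.run();
        }
    }
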
  • Jim Holmes - Experience Report: Changing Testing Culture in a Ginormous Company

    10:30 - 11:15 AM IST | Lalit 1 | 51 Interested

    How do you change culture, mindset, and skills in a global organization entrenched in practices that were outdated 20 years ago?

    One small, frustrating step at a time.

    In this talk I'll share my experiences working at a Fortune 10 company, where I helped small teams of testers on three different continents dramatically change how they helped their projects deliver value to the company. I'll talk about dealing with people (NOT RESOURCES!), helping teams improve their technical skills, getting non-technical testers comfortable with writing automation code, navigating corporate bureaucracy and fiefdoms, and, most importantly, how to get advocates at levels that can actually help you drive change.

    This talk will be full of abject failures we suffered, but will also highlight some of the amazing changes we saw over a three-year period.

    Slides: https://speakerdeck.com/jimholmes/changing-testing-culture-in-a-ginormous-organization

  • Mike Lyles - Visual Testing: It’s Not What You Look At, It’s What You See

    10:30 AM - 12:00 PM IST | Lalit 3 | 57 Interested

    How many times have you driven all the way home, only to realize you don’t remember anything from the drive? Your mind was in a different place, and you were driving on autopilot. Or maybe you walk out to your garage and get in your car every day and are so used to the surroundings that you don’t notice something has been taken or moved. When our eyes are this familiar with the things we see every day, our brains are tricked into believing that nothing has changed.

    In the popular US TV show “Brain Games”, many exercises ask you, the audience, to pay attention and focus on what is happening. That simple focused attention gets the majority of people in trouble, because focusing on a specific area or activity prevents the audience from seeing things going on around them. This “inattentional blindness” causes key details to be missed. Your brain is the most complex tool you will ever have in your possession. However, a highly complex tool must be used appropriately and to its full potential.

    In the testing profession, such focused concentration, leading to “inattentional blindness”, can be detrimental to the success of the product being delivered. As testers, we must find a way to constantly challenge our visual images and stop our brains from assuming there are no changes that could impact the quality of the product. It is critical to be aware of the entire surroundings of the testing activity and to be able to recognize and call out changes that are easily overlooked without attention to detail.

    Mike Lyles will challenge the audience to literally “think outside the box”. The audience will be given specific exercises showing how the human mind sometimes overlooks details when they seem visually insignificant or unrelated. We will examine how testers can become better prepared for such oversights and discuss strategies you can use immediately in your organizations. The key to eliminating the risk of oversight and missed problems is learning to identify the areas where you may originally have neglected a focused effort.

11:30
12:30
13:00

    Lunch Break @ Pool Side - 60 mins

14:00
  • Kushma Thapa - Running Automation Tests Twice as Fast: How we got rid of the Selenium Grid dependency

    02:00 - 02:45 PM IST | Kalinga Hall 2 | 91 Interested

    With the rapid scaling of our health-care application, in terms of both the number of users and the functionality involved, as well as a migration from a legacy system, increasing the coverage of automated tests for both systems seemed vital. Running a large number of test suites on a daily basis against multiple browsers was also critical.

    As part of this transition, the existing popular setup of a continuous integration server with a grid machine and individual node machines posed issues in terms of cost as well as performance. The growth in automated tests increased the demand for node machines distributed across the same or different networks. While the network latency of a limited number of node machines could be ignored, it eventually caused a significant reduction in performance.

    Being part of a large enterprise, changing the architecture was not a feasible option. In this talk we will discuss how Selenium Grid was removed from the equation so that automated tests ran twice as fast, and how the approach can be applied to enterprise-level requirements.
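
    The talk will cover the speakers' own solution; as a generic illustration of one grid-free pattern, each test thread can own a local browser via ThreadLocal, removing the hub/node network hop entirely:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Sketch: a driver per thread with no RemoteWebDriver round trip through
    // a grid hub; parallelism comes from the test runner's thread pool.
    public class DriverPerThread {

        private static final ThreadLocal<WebDriver> DRIVER =
                ThreadLocal.withInitial(ChromeDriver::new);

        public static WebDriver get() {
            return DRIVER.get();
        }

        public static void quit() {
            DRIVER.get().quit();
            DRIVER.remove(); // let the thread start a fresh session later
        }
    }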

  • Smita Mishra - Test What sells more - UX

    02:00 - 02:45 PM IST | Lalit 1 | 36 Interested

    For so many years, testers have focused on functionality, ensuring the applications are working properly, stable, and reliable. However, in today’s world, with so many competing applications, products, and software packages, it is critical that testers also examine the UI/UX element of each deliverable. If your organization is not building UI/UX testing into your test planning, then you are increasing the risk that your product may be left behind by competitors. And if you have not experienced the art and craft of UI/UX test planning and execution, then this workshop will help you and your organization learn the proper methods to do so.

    There are very few known techniques that can accurately and consistently shape a good User Interface (UI) or User Experience (UX). While most of the companies are spending a lot of time and energy deciding the colors and bars on the screen, frankly beauty comes second. It’s also a known fact that users resist change. So how can you test for the acceptability of these changes in a way that’s beneficial to your company in terms of revenue, inbound marketing, and customer acquisition, without offending customers to the point that they make a massive exodus and go to your competitor?

    The key to success is making customers happy and pleased with the product while ensuring they never feel foolish or confused. In this workshop, we will go through case studies of real-world apps and stories of evolving UI and UX, and observe how that impacts the user experience for better or worse. We will look at building a UX testing strategy and implementing UX testing techniques. We will also look at popular tools used to perform UX testing and how best to use them in each phase, not just of testing but of UX design overall. Some of the tools to be discussed are TestFlight, Heap, HubSpot, eye-tracking tools, A/B testing, and screen-recording tools.

  • Marcus Merrell - Break Up the Monolith: Testing Microservices

    02:00 - 02:45 PM IST | Lalit 3 | 99 Interested

    Microservices is more than a buzzword: it’s an industry-wide tidal wave. Companies are spending millions to break up monoliths and spin up microservices, but they usually only involve QA at the very end. This talk centers around real-world experiences, posing questions you can ask your developers/product people, and offering solutions for you to help make your service more discoverable, more testable, and easier to release.

    In this session, we'll cover:

    • How micro is micro?
    • Documentation & Contracts
    • Versioning API Endpoints
    • Cross-team communication/collaboration
    • Definition of Done
    • Feature Flagging
    • Testing Pyramid
    • When to Get Selenium Involved
    • The Story of 13 Systems—The "Screenplay"
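
    To ground the "Documentation & Contracts" point above, here is a minimal sketch of a consumer-side contract check in Java 11+; the endpoint URL and field names are illustrative assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: assert only what this consumer depends on - the versioned
    // endpoint stays reachable and keeps the fields we actually read.
    public class OrderContractCheck {

        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/v2/orders/42"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() != 200) {
                throw new IllegalStateException("v2 endpoint no longer reachable");
            }
            if (!response.body().contains("\"orderId\"") || !response.body().contains("\"status\"")) {
                throw new IllegalStateException("contract broken: expected field missing");
            }
        }
    }
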
14:45

    Coffee/Tea Break - 15 mins

15:00
  • 03:00 - 03:45 PM IST | Kalinga Hall 2 | 69 Interested

    It's a common scenario - you are starting a new job as a test engineer, excited to improve your new organization's testing culture, eager to instill confidence in the software, and BAM! instead of writing new tests, you are tasked with fixing legacy ones! If this sounds familiar, this talk is for you. I'll cover strategies for taking inventory of flaky tests and setting goals to address flakiness across a test suite. In addition, you will learn to get to the bottom of what's causing flakiness in your tests, and to communicate with other engineering teams about those tests. Although everyone's application has its own quirks, and every organization has its own workflows, this talk aims to give advice that anyone can use in their own tests and when dealing with their fellow engineers. By the end of this talk, you will feel confident enough to debug your own flaky tests, and get to the fun part - writing new ones!

    Here are some alternate presentation titles, for your amusement:

    • Keep The Flakiness for Your Croissant: How to Un-Flake Your Automated Test
    • Bake The Flake Out of Your Tasty Test Cake
    • Flake It Off, Flake It Off: How to Un-Flake Your Flaky Test (T-Swift theme)
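
    One simple, widely used tool for taking that inventory is a TestNG retry analyzer: a test that fails once but passes on retry gets flagged as a flaky candidate instead of silently going green. A minimal sketch (retries here are a measuring stick, not the fix):

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    // Sketch: retry a failed test once and log it, so pass-on-retry tests
    // surface for triage while consistent failures stay red as real bugs.
    public class FlakinessDetector implements IRetryAnalyzer {

        private int attempts = 0;
        private static final int MAX_RETRIES = 1;

        @Override
        public boolean retry(ITestResult result) {
            if (attempts < MAX_RETRIES) {
                attempts++;
                System.out.println("FLAKY CANDIDATE: " + result.getName()
                        + " (retry " + attempts + ")");
                return true;
            }
            return false;
        }
    }

    Attach it with @Test(retryAnalyzer = FlakinessDetector.class) and mine the log for "FLAKY CANDIDATE" entries when setting de-flaking goals.
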
  • Diego Molina - The Holy Trinity of UI Testing

    03:00 - 03:45 PM IST | Lalit 1 | 59 Interested

    Sometimes it is hard to know what to test in a web application, and the first step before testing is defining what we want to test. This may sound trivial, but in reality it is often not done properly. We tend to overlook the obvious, and we test without knowing what we want to accomplish.

    What do we want to achieve? Validate user behaviour? Check whether the page design is responsive on different devices? Or perhaps verify that our web application looks the way we expect?

    When we know the purpose of our test, we can start planning, coding, executing and improving our tests. But most importantly, we will know what approach we can use to develop the test.

    Functional, layout and visual testing are the three pillars of the UI testing trinity. We can use these approaches to develop focused tests, tests that are asserting a specific aspect of our web application.

    But how can we identify which approach to use? When should we combine them? There is an information overload presenting a huge variety of tools that can help us test with any of these approaches. Sadly, this flood of information makes us focus more on the tools than on the testing strategy.

    The intention of this talk is to break down the process of identifying how to develop a focused test and, more importantly, to understand when it makes sense to combine functional testing with layout or visual testing, and what to consider before using layout or visual testing.

    The talk will then go deeper through scenarios and code examples that show how to create layout and visual tests. It will also discuss scenarios where a functional test is not enough, or where a visual test is better than a layout test. This talk’s main goal is to offer a different perspective when testing a web application through the UI testing trinity.
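
    To illustrate the distinction, here is a minimal sketch of a focused layout test in Java with Selenium: it asserts geometry rather than behaviour. The URL and locators are illustrative assumptions:

    import org.openqa.selenium.By;
    import org.openqa.selenium.Rectangle;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Sketch: a layout contract - the logo and the navigation bar share the
    // same top edge. A purely functional test would never catch this.
    public class HeaderLayoutTest {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com");
                Rectangle logo = driver.findElement(By.id("logo")).getRect();
                Rectangle nav = driver.findElement(By.id("main-nav")).getRect();

                if (logo.getY() != nav.getY()) {
                    throw new AssertionError("Header misaligned: logo.y=" + logo.getY()
                            + " nav.y=" + nav.getY());
                }
            } finally {
                driver.quit();
            }
        }
    }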

    If you are interested in how to integrate layout or visual testing to your current workflow, you should attend this talk!

    Note: Thanks to the feedback I got after presenting this topic at SauceCon 2018, I have been able to make nice improvements to the content that will be helpful for attendees.

  • Sneha Viswalingam - Building The Blocks of Trust In Automation

    03:00 - 03:45 PM IST | Lalit 3 | 82 Interested

    Having followed best practices set from previous Selenium Conference talks, my team was able to shift from flaky tests to stable and reliable automated tests. During that time, I learned the importance of building trust in the test suite to unite the team as a whole. Once trust was established in the automated tests, it became crucial to the overall software development lifecycle. It has been an interesting journey to gain the confidence of the organization and have them believe that the automation effort has their backs.

    In this talk I will cover the following topics:

    1. Strategies that I used to make the tests reliable
    2. Explaining why it was important to train the manual testers to write feature files and thus help expand the automation suite using their subject matter expertise
    3. Presenting ways to improve visibility in automation to reinforce trust

    By implementing these steps in my organization, I have built trust not only within the test suite but into the team as a whole.

16:00
  • Bhupesh Pant - Client-side health and uptime monitoring tool with Selenium WebDriver (Synthetic Monitoring)

    04:00 - 04:45 PM IST | Kalinga Hall 2 | 97 Interested

    Application monitoring is an essential part of a healthy application. Nowadays most applications are moving toward cloud infrastructure, and a reliable application aspires to 100% uptime. Tools like New Relic, AppNeta, and AppDynamics are great for server-side monitoring.

    But most of the time we ignore client-side application health checks. This session is dedicated to the need for, and implementation of, client-side application monitoring. Very few tools on the market can run 24x7 against a production web application and report issues without any manual intervention.

    To achieve client-side monitoring, I developed a dashboard-style monitoring tool using Java Spring, AngularJS, and MongoDB, which has been in continuous use for more than a year.

    The tool shows the real-time status of the application's different panels. Whenever a panel goes down due to a back-end or front-end failure, an email is triggered with an application screenshot and a proper error message. The dashboard monitors the application 24x7 and can calculate the uptime of each individual web panel and of the complete application.

    Here, Selenium WebDriver crawls the production application, gathers browser network-tab information, and submits it to the dashboard.
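
    The speaker's tool itself is not shown here, but the core of such a probe can be sketched in a few lines of Java with Selenium; the URL, locator, and alerting hook are illustrative assumptions:

    import java.io.File;
    import org.openqa.selenium.By;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Sketch: load a production panel, verify a key element renders, and
    // capture a screenshot on failure for the alert email. A scheduler
    // would run this probe every few minutes, 24x7.
    public class PanelHealthCheck {

        public static boolean probe(String url, By keyElement) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get(url);
                driver.findElement(keyElement); // throws if the panel failed to render
                return true;
            } catch (Exception failure) {
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                System.err.println("Panel DOWN: " + url + ", screenshot at " + shot.getPath());
                return false; // dashboard marks the panel red and triggers the email
            } finally {
                driver.quit();
            }
        }
    }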

  • Dmitry Vinnik - Mobile Visual Testing: Uphill Battle Of Mobile Visual Regression

    04:00 - 04:45 PM IST | Lalit 1 | 52 Interested

    There are many types of testing companies need to perform to have confidence in their product: security testing, integration testing, system testing, performance testing, and more. Often, mobile developers focus on ensuring that the main end-to-end flows of their applications work, relying on frameworks like Appium or Robotium. In the mobile domain, however, visual testing is essential, as mobile devices differ drastically in capabilities, display dimensions, and even operating systems.

    Visual regression testing targets specific areas of visual concepts like layouts, responsive design, graphics, and CSS. Because modern mobile applications are built as hybrid and native applications, there is no way to scale this sort of testing using manual resources; hence, Visual test automation should be a crucial piece of the testing stack.

    In this talk, the audience will learn about major Visual Testing Frameworks targeting both responsive web applications, and native mobile applications.
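
    At the heart of any such framework sits a screenshot comparison. As a minimal, framework-free sketch in plain Java: diff a fresh screenshot (for example from Appium's getScreenshotAs) against a per-device baseline and fail when too many pixels changed. The threshold and file paths are illustrative assumptions:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Sketch: ratio of changed pixels between baseline and current capture;
    // a small tolerance absorbs anti-aliasing noise, clocks, and carriers.
    public class ScreenshotDiff {

        public static double diffRatio(File baseline, File current) throws Exception {
            BufferedImage a = ImageIO.read(baseline);
            BufferedImage b = ImageIO.read(current);
            if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
                return 1.0; // a dimension change is a full visual regression
            }
            long changed = 0;
            for (int x = 0; x < a.getWidth(); x++) {
                for (int y = 0; y < a.getHeight(); y++) {
                    if (a.getRGB(x, y) != b.getRGB(x, y)) changed++;
                }
            }
            return (double) changed / ((long) a.getWidth() * a.getHeight());
        }

        public static void main(String[] args) throws Exception {
            double ratio = diffRatio(new File("baseline/pixel6.png"), new File("run/pixel6.png"));
            if (ratio > 0.01) { // tolerate 1% pixel noise
                throw new AssertionError("Visual regression: " + (ratio * 100) + "% of pixels changed");
            }
        }
    }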

  • Lavanya Mohan / Priyank Shah - Your own MAD Lab for Mobile Test Automation

    04:00 - 04:45 PM IST | Lalit 3 | 41 Interested

    Why would you want to build your own mobile device lab for test automation? Isn’t it difficult to maintain, and expensive? Yes it is! But we (Anand Bagmar, Priyank Shah, and Lavanya Mohan) still had to, and this one-time setup activity had a huge ROI.

    From our experience working on an OTT (over-the-top entertainment) content-rich product with a presence in various regions and a large customer base, we have learned how to build quality in-house and make testing repeatable.

    We will cover why we chose to build the Mobile Automation Devices lab (MAD Lab) in-house, how we chose the device and OS combinations, our experiments and learnings, and more.

    More details about MAD Lab can be found in blog posts by Anand:
    https://essenceoftesting.blogspot.com/search/label/madlab

17:45

    Closing Talk - 15 mins
