
Selenium Conf 2015 Day 1

Wed, Sep 9
Timezone: Asia/Kolkata (IST)
09:40
10:30

    Tea/Coffee Break - 15 mins

10:45
11:35
  • Oren Rubin - Why Building Record/Playback Tools Is So Hard

    11:35 AM - 12:20 PM IST, Grand Ballroom 1

    Almost every manual QA engineer started with Selenium IDE, or at least tried it at some point, yet its retention rate is close to zero.

    In this talk we will describe the challenges that Test Automation Developers face. We'll categorise the challenges, name them, and look at the skillset required to overcome them.

    We'll compare the different Record/Playback tools and see where they excel, and where we must resort to traditional solutions (e.g. general-purpose programming languages and the Page Object design pattern).

    This talk will discuss topics such as code reuse, component isolation, separation of concerns, APIs, and where test fragility stems from and how to overcome it.

  • 11:35 AM - 12:20 PM IST, Pavilion East/West

    The key objective of organizations is to deliver value through the products and services they offer. To achieve this, they need to ship their offerings as quickly as possible, and at good quality!

    For these organizations to understand the quality and health of their products at a quick glance, a team of people typically scrambles to manually collect and collate the information needed to get a sense of the quality of the products they support.

    So in a fast-moving environment, where CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury, how can teams decide whether the product is ready to be deployed to the next environment?

    Test Automation across all layers of the Test Pyramid (be it Selenium-based UI tests, xUnit-based unit tests, Performance Tests, etc.) is one of the first building blocks for ensuring the team gets quick feedback on the health of the product under test.

    The next set of questions is:
    • How can you collate this information in a meaningful fashion to determine: yes, my code is ready to be promoted from one environment to the next?
    • How can you know if the product is ready to go 'live'?
    • What is the health of your product portfolio at any point in time?
    • Can you identify patterns and quickly analyse test results, helping root-cause analysis of issues that have occurred over time, so you can make better decisions about the quality of your product(s)?

    The current set of tools is limited and fails to give a holistic picture of quality and health across the life-cycle of the products.

    The solution: TTA (Test Trend Analyzer)

    TTA is an open source product that becomes the source of information giving you real-time, visual insights into the health of the product portfolio using Test Automation results, in the form of Trends, Comparative Analysis, Failure Analysis and Functional Performance Benchmarking. This allows teams to make product deployment decisions using actual data points instead of 'gut feel'.

12:20

    Lunch - 70 mins

13:30
  • 01:30 - 02:15 PM IST, Grand Ballroom 1
    • The `docker-selenium` project packages Selenium Grid as Docker containers (https://github.com/seleniumhq/docker-selenium).
      To me this means I don't have to build any Selenium infrastructure machines; I just run the images provided by the docker-selenium project (https://hub.docker.com/r/selenium/).
    • I don't have to install the Selenium jar, Java, browsers and other runtime dependencies. They are already baked into a Docker image, and I can run them either as a Selenium Grid with a hub and nodes, or as standalone Selenium, on any Docker-enabled VM.

    In this talk/demo/case study I will show you how you can use the `docker-selenium` project to build several pipelines, starting from your local dev box, moving to a public cloud for quick tests, and finally to a stable private cloud for your team.
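    As a taste of that workflow, here is a minimal sketch of pointing a plain Java WebDriver test at a dockerized Grid. The docker commands in the comment use the real docker-selenium image names; the test URL and class name are illustrative assumptions.

    ```java
    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class DockerGridSmokeTest {
        public static void main(String[] args) throws Exception {
            // Assumes a hub and one Chrome node are already running, e.g.:
            //   docker run -d -p 4444:4444 --name selenium-hub selenium/hub
            //   docker run -d --link selenium-hub:hub selenium/node-chrome
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://localhost:4444/wd/hub"),
                    DesiredCapabilities.chrome());
            try {
                driver.get("http://docs.seleniumhq.org/"); // any page under test
                System.out.println("Title: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }
    ```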


  • 01:30 - 02:15 PM IST, Pavilion East/West

    The JSON Wire Protocol (JSONWP) is the version of the WebDriver spec currently implemented by all the Selenium clients. It defines an HTTP API that models the basic objects of web automation: sessions, elements, and so on. The JSON Wire Protocol is the magic that powers Selenium's client/server architecture, enables services like Selenium Grid or Sauce Labs to work, and gives you the ability to write your test scripts in any language.
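    For illustration, here is how a few ordinary Java client calls map onto JSONWP HTTP requests. The endpoint paths in the comments come from the wire protocol spec; the URLs and locator are made-up examples.

    ```java
    import java.net.URL;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class WireProtocolDemo {
        public static void main(String[] args) throws Exception {
            WebDriver driver = new RemoteWebDriver(              // POST   /session
                    new URL("http://localhost:4444/wd/hub"),
                    DesiredCapabilities.firefox());
            driver.get("http://example.com/login");              // POST   /session/{id}/url
            WebElement button = driver.findElement(By.id("go")); // POST   /session/{id}/element
            button.click();                                      // POST   /session/{id}/element/{elementId}/click
            driver.quit();                                       // DELETE /session/{id}
        }
    }
    ```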

    The JSONWP has served Selenium faithfully for a number of years, but the future of automated testing lies beyond the borders of the web browser. Mobile automation is an essential ingredient in any build, and tools like Appium or Selendroid have made it possible to run tests against mobile apps using the JSONWP. The JSONWP's current incarnation isn't enough to automate all the new behaviors that mobile apps support, however. Complex gestures, multiple device orientations, airplane mode, and the ability to use both native and web contexts, for example, are all essential to mobile automation.

    For this reason the leaders of the Selenium project, in concert with other Selenium-based projects like Appium and Selendroid, met to discuss the future of the JSONWP. We've been working on its next version, called the "Mobile JSON Wire Protocol" (MJSONWP). Appium and Selendroid already implement much of the MJSONWP spec. In this talk I'll dive into the specifics of the MJSONWP extensions, how they relate to the original JSONWP, and how the Selenium clients have begun to implement them.

    Finally, I will talk about the future of the MJSONWP and how it relates to the current and future versions of the WebDriver spec. I'll share how you can help with the creation of the MJSONWP and discuss issues with the authors of the new spec before the API is set in stone. We need the help of everyone who's involved in mobile automation to come up with the best and most future-proof version of the MJSONWP. Ultimately, your understanding of how Selenium works will be improved, and you'll have a much better handle on how projects like Appium and Selenium work together to make sure you have the best automation methods available.
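    As a concrete taste of one MJSONWP extension that Appium already implements, context switching lets a single session move between an app's native UI and its embedded web views. The sketch below is an assumption of typical usage, not from the talk; the endpoint paths in the comments come from the draft spec, and the webview name is hypothetical.

    ```java
    import java.util.Set;

    import io.appium.java_client.AppiumDriver;

    public class ContextSwitchSketch {
        // 'driver' is an already-started Appium session.
        static void automateWebview(AppiumDriver driver) {
            Set<String> contexts = driver.getContextHandles(); // GET  /session/{id}/contexts
            System.out.println("Available contexts: " + contexts);
            driver.context("WEBVIEW_1");                       // POST /session/{id}/context
            // ...drive the web content with ordinary WebDriver calls...
            driver.context("NATIVE_APP");                      // back to the native side
        }
    }
    ```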

14:20
15:05

    Tea/Coffee Break - 25 mins

15:30
16:20
17:10

Selenium Conf 2015 Day 2

Thu, Sep 10
09:40
10:30

    Tea/Coffee Break - 15 mins

10:45
  • Rémi - Mobile end to end testing at scale: stable, useful, easy. Pick three.

    10:45 - 11:30 AM IST, Grand Ballroom 1

    This talk is about how Facebook turned a great idea with a terrible track record into a great tool for thousands of developers.

    The promise of E2E testing — complex, real-world test scenarios from the point of view of an end user — is appealing.
    Many attempts have been made over the years at automating large parts of companies' and developers' testing and release processes, yet most of these efforts ended in bitter, hard-learned lessons about the inherent challenges of the whole approach.

    My work at Facebook over the last two years has been making mobile end to end testing at scale a reality.
    When others said it couldn't be done, or fell by the wayside, we relentlessly pushed forward, solving problems deemed intractable, and finding new, untold vistas of horror before us.

    We've come a long way: E2E testing is now an integral part of Facebook's mobile development and release process.
    We'll cover the challenges we faced, and how we chose to solve them or make them irrelevant.

  • Adam Carmi - Advanced Automated Visual Testing With Selenium

    10:45 - 11:30 AM IST, Pavilion East/West

    Automated visual testing is a major emerging trend in the dev / test community. In this talk you will learn what visual testing is and why it should be automated. We will take a deep dive into some of the technological challenges involved with visual test automation and show how modern tools address them. We will review available Selenium-based open-source and commercial visual testing tools, demo cutting edge technologies that enable running cross browser and cross device visual tests at large scale, and show how visual test automation fits in the development / deployment lifecycle.

    If you don't know what visual testing is, if you think that Sikuli is a visual test automation tool, if you are already automating your visual tests and want to learn more about what else is out there, if you are on your way to implementing Continuous Deployment, or if you are just interested in seeing how cool image processing algorithms can be, this talk is for you!
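    To make the idea concrete, here is a deliberately naive baseline-comparison sketch using plain Selenium and javax.imageio; it is not any particular vendor's tool. Real visual testing tools layer perceptual diffing, layout awareness and baseline management on top of something like this.

    ```java
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.File;

    import javax.imageio.ImageIO;

    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    public class NaiveVisualCheck {
        // Returns true if the current page matches the stored baseline pixel-for-pixel.
        static boolean matchesBaseline(WebDriver driver, File baselineFile) throws Exception {
            byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
            BufferedImage actual = ImageIO.read(new ByteArrayInputStream(png));
            BufferedImage baseline = ImageIO.read(baselineFile);
            if (actual.getWidth() != baseline.getWidth()
                    || actual.getHeight() != baseline.getHeight()) {
                return false; // different dimensions cannot match
            }
            for (int y = 0; y < actual.getHeight(); y++) {
                for (int x = 0; x < actual.getWidth(); x++) {
                    if (actual.getRGB(x, y) != baseline.getRGB(x, y)) {
                        return false; // first differing pixel fails the check
                    }
                }
            }
            return true;
        }
    }
    ```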

11:35
  • 11:35 AM - 12:20 PM IST, Grand Ballroom 1

    Appium, often dubbed "Selenium for mobile", is at heart a web server written in Node.js. Its architecture is modular, which means that it is composed of many small, independently maintained and tested modules. Testing Appium is challenging but clearly very important, since thousands of users depend on it for their testing. Appium also has all the usual challenges of a large open source project, for example ensuring consistency of JavaScript code style across hundreds of contributors. It's important to have high-quality, readable code.

    I will be discussing approaches to and strategies for testing these kinds of large, modular applications. On the Appium team, we use a combination of unit, functional, and integration tests. Modern services like GitHub, Travis CI, and Sauce Labs make it possible for large open source projects to be tested thoroughly, keeping the code and the app at high quality. I will also discuss the use of tools like JSLint and Gulp, which help prevent code style issues.

    Testing the tool which is used for testing is clearly very important. This talk aims to showcase how testing should be approached for large, modular projects which have many collaborators.
  • 11:35 AM - 12:20 PM IST, Pavilion East/West

    If a test fails in the woods and no one is there to see it, does anyone care? Does anyone even notice? What happens when failing tests become the norm and you can't see the wood for the trees?


    After watching last year's Allure Report presentation I was inspired. Selenium tests (and automation tests in general) are often poorly understood by the team as a whole. Reports and emails go unread, with failing tests becoming an expected outcome rather than a glaring red flag. We looked at what Allure brought to the table and, from that base, created a dashboard designed to:

    • Display the results of test runs in a way that is useful to managers, testers and the rest of the development team, including tools to filter out specific test runs and view the overall trend of the test run results.
    • Make debugging tests easier by grouping errors, displaying the history of test results, filtering tests and offering visual comparison of test runs.
    • Help mitigate the problems flaky tests cause with test run result reporting (say that three times fast).
    • Help with our mobile device certification process by providing an easy view to compare test runs across devices.

    Since its creation the dashboard has been used and praised by everyone from managers to developers, with our full suite of tests, from unit to integration to Selenium and Appium, stored on the dashboard. We've managed to:

    • Decrease the time taken to debug test cases.
    • Increase the visibility of all our test suites, with managers having a better idea of how our Selenium test suite is progressing and testers better understanding the coverage of unit tests.
    • Focus the organization on quality.

    We are working with legal at present to have this project open sourced and available to all prior to Selenium Conf 2015.

12:20

    Lunch - 70 mins

13:30
  • 01:30 - 02:15 PM IST, Grand Ballroom 1

    Selenium Conference 2014 in Bangalore was the first time I got a chance to attend this event. During the event I was exposed to lots of brilliant ideas and experiences shared in various talks.

    Over the last year I have had the opportunity to try some of the ideas that resonated within the Automation Community of our organisation. As part of this talk I would like to share our experiences, which might help participants get it right the first time.

    A. Getting the right test pyramid: The idea of having Unit Tests, Integration Tests and GUI Tests in the right proportion makes perfect sense. In this part of the talk I will take you through our efforts to beef up the Integration Tests. It was a two-pronged strategy: first, we automated backend API tests using BDD and some Python modules; second, we added integration tests for frontend modules built with ReactJS, using Chai, Mocha, Sinon, React TestUtils, PhantomJS, etc.

    B. Appium for Mobile Automation: Our test infrastructure didn't have the capability to support Mobile Automation. To keep up with the Mobile First approach and the requirement of a rich user experience on mobile for our service, we needed Appium.

    In this part of the talk I will share our experiences enabling existing tests to run on mobile devices using Appium, with iOS/Android emulators, real devices and SauceLabs.
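    For reference, enabling an existing WebDriver test to run on mobile typically amounts to pointing the same RemoteWebDriver at an Appium server with mobile capabilities. This is a minimal sketch with illustrative values, not the speakers' actual setup; 4723 is Appium's default port.

    ```java
    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class MobileWebSmokeTest {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");        // or "iOS"
            caps.setCapability("deviceName", "Android Emulator"); // emulator or a real device
            caps.setCapability("browserName", "Chrome");          // mobile browser to automate
            // Swap the URL for a SauceLabs endpoint to run the same
            // test against their cloud instead of a local Appium server.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://127.0.0.1:4723/wd/hub"), caps);
            driver.get("http://example.com/");
            System.out.println(driver.getTitle());
            driver.quit();
        }
    }
    ```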

    C. Using a third-party infrastructure service instead of a local grid: The idea of not maintaining test infrastructure, combined with the growing number of tests on web and mobile, made us look in this direction. We chose SauceLabs as our preferred infrastructure service. This part of the talk will cover how we went about trying SauceLabs, the challenges we faced, and some pros and cons of using SauceLabs.

  • Rajesh Sarangapani / Prabhu Epuri - Visualizing Real User Experience Using Integrated Open Source Stack (Selenium + Jmeter + Appium + Visualization tools)

    01:30 - 02:15 PM IST, Pavilion East/West

    The traditional approach to performance testing does not include client-side processing time (i.e. DOM content load, page render, JavaScript execution, etc.) as part of response times; performance tests have always been conducted to stress the server, which is why tools like Jmeter have been so popular. With the increasing complexity of client-side architectures (web, browser, mobile), it has become important to understand the real user experience. Commercial tools have started to provide features that give insight into the real user experience after the bytes are transferred to the client. With the ability to call Selenium scripts via Jmeter, conducting real-user-experience tests with an open source stack has opened up new avenues. This approach:

    • Provides page load times similar to the On Load time of real browsers
    • Generates a HAR file (see the sketch after this list) with the following statistics:
      • Details and a summary of request times and content types
      • A waterfall chart with page download time breakdown statistics, such as DNS resolution time, connection time, SSL handshake time, request send time, wait time and receive time
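    The abstract doesn't name the component that records the HAR; one common open source approach (an assumption here, not from the talk) is to route the Selenium-driven browser through BrowserMob Proxy:

    ```java
    import java.io.File;

    import net.lightbody.bmp.BrowserMobProxy;
    import net.lightbody.bmp.BrowserMobProxyServer;
    import net.lightbody.bmp.client.ClientUtil;
    import net.lightbody.bmp.core.har.Har;

    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class HarCaptureSketch {
        public static void main(String[] args) throws Exception {
            BrowserMobProxy proxy = new BrowserMobProxyServer();
            proxy.start(0); // bind to any free port

            // Route the browser's traffic through the capturing proxy.
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability(CapabilityType.PROXY, seleniumProxy);
            WebDriver driver = new FirefoxDriver(caps);

            proxy.newHar("homepage");          // start recording
            driver.get("http://example.com/"); // hypothetical page under test
            Har har = proxy.getHar();          // timings, content types, waterfall data
            har.writeTo(new File("homepage.har"));

            driver.quit();
            proxy.stop();
        }
    }
    ```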

    Integrating these open source tools lets us provide the same insights that commercial off-the-shelf tools offer. At Gallop we have implemented this for multiple clients, giving them insight into various client-side bottlenecks, which helped us deliver a greater value proposition.

14:20
  • Priyanka Gupta / Sarah Eisen - Automation Alchemy On a Mass Scale: Turning Costly Manual Tests Into Automation Gold

    02:20 - 03:05 PM IST, Grand Ballroom 1

    Do you want to hear a story about overcoming obstacles and achieving seemingly unattainable goals at a massive scale? Well, we have one to tell - it's a true story, and like all good stories, it teaches us some valuable lessons. We have gone through the ups and downs of this tale and come out better and smarter. We would love to share those experiences and lessons with everyone.

    The story starts with a mission: automate 5000 hours of manual tests for our enterprise product. Like many other product-based companies, we had one big monolithic application to test. The mission was to be accomplished with the resources available - no new magical dream team; we had to work with what we had: QA analysts with no technical background, a very small automation team, and a huge offshore manual testing group. Go figure! There was another twist - we had to accomplish our mission without dropping the current level of support for testing our enterprise application, including regression and new feature tests. Doesn't it all sound very familiar?


    This presentation will cover all aspects of our journey from beginning to end. We went through a lot of ups and downs, and every single decision we made taught us a great deal. It is those experiences that we want to share with everyone.

    • We created a tool that wrapped the Selenium API in order to make it easy for non-developers to write tests. The tests were written in a Domain Specific Language that made Selenium API calls with some application-specific logic added in (a hypothetical sketch of the idea follows this list).
    • We needed to build our own execution framework to support our growing automated test base. The framework offered many customized features and was able to sustain 60,000 hours of tests running every single day.
    • We wrote our own best practices and worked closely with the QA team to make sure everyone wrote high quality tests.
    • The results from the tests needed to be displayed in a way that made sense. We created several different dashboards for that purpose and had many different views of the test suite performance, including a heat map to highlight problem areas.
    • Elasticsearch and Kibana were instrumental in helping us parse through the massive volume of test results and make sense of them, giving us metrics in different forms.
    • The daily environment setup for this execution was also massive: 100 or so slaves and several SUTs for every codeline, with support for 3 codelines, meant that we needed a big lab setup.
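    The DSL itself isn't published, but the wrapping idea looks something like the sketch below: each DSL keyword maps to a method that hides raw Selenium calls behind domain vocabulary. All names here are hypothetical.

    ```java
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical keyword library: a QA analyst writes "login as admin" in the
    // DSL, and the runner dispatches it to a method like loginAs below.
    public class LoginKeywords {
        private final WebDriver driver;

        public LoginKeywords(WebDriver driver) {
            this.driver = driver;
        }

        // DSL: login as <user> with password <password>
        public void loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("submit")).click();
        }

        // DSL: verify logged in as <user>
        public boolean isLoggedInAs(String user) {
            return driver.findElement(By.id("welcome")).getText().contains(user);
        }
    }
    ```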


    We successfully completed the mission of automating the manual test behemoth and gained a rich understanding of test automation at scale along the way.

  • Selena Phillips - UXD Legos AKA How To Design and Build the Page Object API of Your Dreams

    02:20 - 03:05 PM IST, Pavilion East/West

    Wouldn't it be great if building a page object was basically like putting together a bunch of legos? What if all you had to do was choose components from an API library of ready-to-use models of menus, paginators, chooser dialogs, data tables and so on? We could all stop reinventing the wheel over and over again. Every component of a given type would have the same basic interaction interface, resulting in a consistent look and feel for all your page objects. And, because the generic abstractions can be thoroughly unit tested and performance tuned, you can spend more time testing your application in new and exciting ways rather than debugging routine interactions with that menu, that dialog, or that paginator.

    Wouldn't it be even better if that page object API had an interface for dynamically specifying a configuration depending on the browser type, browser version and operating system, so you wouldn't have to write a test setup script for each scenario? Your page object would know, for instance, to click that pesky button using a JavaScript workaround for Firefox 31 on Windows 7, because WebElement.click() fails silently for just that combination of environmental factors. Your page object would also know that it needs twice as much time to load on Internet Explorer, and all you have to do is specify the configuration profile id for your test scenario. We've all spent more time than we'd like working around environmental snafus instead of finding bona fide application defects.

    What if you run into a variation on one of the generic models that is common enough that it should be another option in the API library? It would be great if the API had a cookie cutter approach to adding that new component, wouldn't it?

    That cookie-cutter process is as follows:

    1) Identify the basic interface and interaction model for the component

    2) Identify the minimal state that must be specified in order to construct the component

    3) Identify the component's place in the inheritance hierarchy of other abstract components

    4) Develop Java interfaces to define the contract for the component type, the contract for a state bean that specifies the state necessary to construct it, and the contract for a fluent builder to build and instantiate the component (a minimal sketch follows).
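    As an illustration of those steps, a paginator component might be specified and built as below. All names are hypothetical, not the speaker's actual API; step 3 (placing the component in an inheritance hierarchy) is elided for brevity.

    ```java
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class PaginatorExample {

        // Step 1: the component's interaction contract.
        public interface Paginator {
            void goToPage(int page);
            int currentPage();
        }

        // Step 2: the minimal state needed to construct one.
        public interface PaginatorState {
            WebDriver driver();
            By root();
        }

        // Step 4: a fluent builder that accumulates state and instantiates the component.
        public static final class PaginatorBuilder implements PaginatorState {
            private WebDriver driver;
            private By root;

            public WebDriver driver() { return driver; }
            public By root() { return root; }

            public PaginatorBuilder withDriver(WebDriver d) { this.driver = d; return this; }
            public PaginatorBuilder withRoot(By r) { this.root = r; return this; }

            public Paginator build() {
                final PaginatorState state = this;
                return new Paginator() {
                    public void goToPage(int page) {
                        state.driver().findElement(state.root())
                             .findElement(By.linkText(String.valueOf(page))).click();
                    }
                    public int currentPage() {
                        return Integer.parseInt(state.driver().findElement(state.root())
                             .findElement(By.className("active")).getText());
                    }
                };
            }
        }
    }
    ```

    Usage then reads fluently, e.g. new PaginatorExample.PaginatorBuilder().withDriver(driver).withRoot(By.id("results-pager")).build().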


15:05

    Beer Tasting - 55 mins

16:00
16:50
17:40

    Selenium Committers Q&A Panel - 45 mins
