Appium, often dubbed "Selenium for mobile", is at heart a web server written in Node.js. Its architecture is modular, which means that it is composed of many small, independently maintained and tested modules. Testing Appium is challenging, but clearly very important, since thousands of users depend on it for their testing. Appium also has all the usual challenges of a large open source project, for example ensuring consistency of JavaScript code style across hundreds of contributors. It's important to have high-quality and readable code.

 
I will be discussing approaches to and strategies for testing these kinds of large, modular applications. On the Appium team, we use a combination of unit, functional, and integration tests. Modern services like GitHub, Travis CI, and Sauce Labs make it possible for large open source projects to be tested thoroughly, keeping the code and the app at high quality. I will also discuss the use of tools like JSLint and Gulp, which help prevent code style issues.
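
To give a flavour of what this looks like in practice, here is a minimal sketch of a Gulp-based lint-and-test pipeline of the kind the talk will discuss; the task names, file globs and plugin choices are illustrative assumptions, not Appium's actual build configuration.

// gulpfile.js -- illustrative sketch only, not Appium's real build setup
var gulp = require('gulp');
var jshint = require('gulp-jshint');   // JSLint-style static analysis
var mocha = require('gulp-mocha');     // Mocha test runner

// Enforce a consistent JavaScript style across all contributors.
gulp.task('lint', function () {
  return gulp.src(['lib/**/*.js', 'test/**/*.js'])
    .pipe(jshint())
    .pipe(jshint.reporter('default'))
    .pipe(jshint.reporter('fail'));    // fail the build on style violations
});

// Fast unit tests that run on every commit (e.g. via Travis CI).
gulp.task('unit-test', ['lint'], function () {
  return gulp.src('test/unit/**/*-specs.js', { read: false })
    .pipe(mocha({ reporter: 'spec' }));
});

gulp.task('default', ['unit-test']);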
 
Testing the tool which is used for testing is clearly very important. This talk aims to showcase how testing should be approached for large, modular projects which have many collaborators.
 
 

Outline/structure of the Session

  • Introduction
  • Modular architecture of Appium (briefly) 
  • Challenges and solutions for testing Appium
  • Live demo 
  • Question and answers

Learning Outcome

Attendees will have an opportunity to learn about various libraries and developer tools which can be used for testing large, modular projects.

Target Audience

Anyone who is interested in knowing how to test large, modular software

Submitted 1 year ago


  • Anand Bagmar
    1 year ago
    Sold Out!
    45 mins
    Demonstration
    Intermediate

    The key objective of organizations is to provide and derive value from the products and services they offer. To achieve this, they need to be able to deliver their offerings as quickly as possible, and at good quality!

    In order for these organizations to understand the quality and health of their products at a quick glance, a team of people typically scrambles to collate the information needed to get a sense of the quality of the products they support. All of this is done manually.

    So in a fast-moving environment, where CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury, how can teams decide whether or not the product is ready to be deployed to the next environment?

    Test Automation across all layers of the Test Pyramid (be it Selenium-based UI tests, or, xUnit based unit tests, or, Performance Tests, etc.) is one of the first building blocks to ensure the team gets quick feedback into the health of the product-under-test. 

    The next set of questions is:
        •    How can you collate this information in a meaningful fashion to determine - yes, my code is ready to be promoted from one environment to the next?
        •    How can you know if the product is ready to go 'live'?
        •    What is the health of your product portfolio at any point in time?
        •    Can you identify patterns and quickly analyse test results to aid root-cause analysis of issues over a period of time, and so make better decisions to improve the quality of your product(s)?

    The current set of tools is limited and fails to give a holistic picture of quality and health across the life cycle of the products.

    The solution - TTA - Test Trend Analyzer

    TTA is an open source product that becomes the source of information giving you real-time, visual insights into the health of the product portfolio using test automation results, in the form of trends, comparative analysis, failure analysis and functional performance benchmarking. This allows teams to make decisions about promoting the product to the next level using actual data points, instead of 'gut-feel' based decisions.

  • Automation Alchemy On a Mass Scale: Turning Costly Manual Tests Into Automation Gold

    Priyanka Gupta
    Sarah Eisen
    1 year ago
    Sold Out!
    45 mins
    Talk
    Beginner

    Do you want to hear a story about overcoming obstacles and achieving seemingly unattainable goals at a massive scale? Well, we have one to tell - it’s a true story, and like all good stories, teaches us some valuable lessons. We have gone through the ups and downs of this tale and come out better and smarter. We would love to share those experiences and learning with everyone.

    The story starts with a mission...automate 5000 hours of manual tests for our enterprise product. Like many other product based companies, we had one big monolithic application to test. The mission was to be accomplished with the resources available - no new magical dream team, we had to work with the resources we had - QA analysts with no technical background, a very small automation team, and a huge offshore manual testing group. Go figure! There was another twist - we had to accomplish our mission without dropping the current level of support for testing our enterprise application, including regression and new feature tests. Doesn't it all sound very familiar?

     

    This presentation will cover all aspects of our journey from the beginning to the end. We went through a lot of ups and downs, and every single decision we made taught us a great deal. It is those experiences that we want to share with everyone.

    • We created a tool that wrapped the Selenium API in order to make it easy for non-developers to write tests. The tests were written in a Domain Specific Language that made Selenium API calls with some application-specific logic added in (a rough sketch of the idea follows this list).
    • We needed to build our own execution framework to support our growing automated test base. The framework offered many customized features and was able to sustain 60,000 hours of tests running every single day.
    • We wrote our own best practices and worked closely with the QA team to make sure everyone wrote high quality tests.
    • The results from the tests needed to be displayed in a way that made sense. We created several different dashboards for that purpose and had many different views of the test suite performance, including a heat map to highlight problem areas.
    • Elasticsearch and Kibana were instrumental in helping us parse through the massive volume of test results and make sense of them, giving us metrics in different forms.
    • Daily environment setup for this execution was also massive - 100 or so slaves and several SUTs for every codeline, with support for 3 codelines meant that we needed a big lab setup.
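
    To make the DSL idea from the first bullet concrete, here is a rough sketch of what such a wrapper can look like using the Node selenium-webdriver bindings; the domain vocabulary (loginAs, addItemToCart), locators and URLs are hypothetical and are not the speakers' actual tool.

    // Hypothetical domain-specific layer over the Selenium API (Node selenium-webdriver)
    var webdriver = require('selenium-webdriver');
    var By = webdriver.By;
    var until = webdriver.until;

    function ShopDsl(driver, baseUrl) {
      this.driver = driver;
      this.baseUrl = baseUrl;
    }

    // Domain vocabulary instead of raw locators and clicks.
    ShopDsl.prototype.loginAs = function (user, password) {
      var d = this.driver;
      return d.get(this.baseUrl + '/login')
        .then(function () { return d.findElement(By.name('username')).sendKeys(user); })
        .then(function () { return d.findElement(By.name('password')).sendKeys(password); })
        .then(function () { return d.findElement(By.css('button[type=submit]')).click(); })
        .then(function () { return d.wait(until.elementLocated(By.id('account-menu')), 5000); });
    };

    ShopDsl.prototype.addItemToCart = function (itemName) {
      var d = this.driver;
      return d.findElement(By.linkText(itemName)).click()
        .then(function () { return d.findElement(By.id('add-to-cart')).click(); });
    };

    // A "test" written against the DSL rather than against Selenium directly:
    //   var dsl = new ShopDsl(driver, 'https://shop.example.com');
    //   dsl.loginAs('qa-analyst', 'secret').then(function () {
    //     return dsl.addItemToCart('Blue Widget');
    //   });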


    We successfully completed the mission of automating the manual test behemoth and gained a rich understanding of test automation at scale along the way.

  • Mobile end to end testing at scale: stable, useful, easy. Pick three.

    Rémi
    1 year ago
    Sold Out!
    45 mins
    Talk
    Beginner

    This talk is about how Facebook turned a great idea with a terrible track record into a great tool for thousands of developers.

    The promise of E2E testing — complex, real-world test scenarios from the point of view of an end user — is appealing.
    Many attempts have been made over the years at automating large parts of companies' and developers' testing and release processes, yet most of these efforts ended up in bitter and hard learned lessons about the inherent challenges of the whole approach.

    My work at Facebook over the last two years has been making mobile end to end testing at scale a reality.
    When others said it couldn't be done, or fell by the wayside, we relentlessly pushed forward, solving problems deemed intractable, and finding new, untold vistas of horror before us.

    We've come a long way: E2E testing is now an integral part of Facebook's mobile development and release process.
    We'll cover the challenges we faced, and how we chose to solve them or make them irrelevant.

  • The Mobile JSON Wire Protocol

    Jonathan Lipps
    1 year ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    The JSON Wire Protocol (JSONWP) is the version of the WebDriver spec currently implemented by all the Selenium clients. It defines an HTTP API that models the basic objects of web automation---sessions, elements, etc... The JSON Wire Protocol is the magic that powers Selenium's client/server architecture, enables services like Selenium Grid or Sauce Labs to work, and gives you the ability to write your test scripts in any language.
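
    To make the "HTTP API" point concrete, this is roughly the wire traffic a client produces for a trivial script, issued here with plain Node; the endpoints and payload shapes follow the JSON Wire Protocol, while the IDs and the response handling are simplified for illustration.

    // Rough sketch of raw JSON Wire Protocol traffic (values are illustrative).
    var http = require('http');

    function jsonwp(method, path, body, cb) {
      var req = http.request(
        { host: 'localhost', port: 4444, path: '/wd/hub' + path, method: method,
          headers: { 'Content-Type': 'application/json; charset=utf-8' } },
        function (res) {
          var data = '';
          res.on('data', function (chunk) { data += chunk; });
          res.on('end', function () { cb(data ? JSON.parse(data) : {}); });
        });
      if (body) req.write(JSON.stringify(body));
      req.end();
    }

    // POST /session -> start a session and get back a sessionId
    jsonwp('POST', '/session', { desiredCapabilities: { browserName: 'firefox' } }, function (res) {
      var sessionId = res.sessionId;
      // POST /session/:id/element -> find an element, get back an element reference
      jsonwp('POST', '/session/' + sessionId + '/element',
        { using: 'css selector', value: '#login' }, function (found) {
          var elementId = found.value.ELEMENT;
          // POST /session/:id/element/:elementId/click -> click it
          jsonwp('POST', '/session/' + sessionId + '/element/' + elementId + '/click', {}, function () {
            // DELETE /session/:id -> end the session
            jsonwp('DELETE', '/session/' + sessionId, null, function () {});
          });
        });
    });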

    The JSONWP has served Selenium faithfully for a number of years, but the future of automated testing lies beyond the borders of the web browser. Mobile automation is an essential ingredient in any build, and tools like Appium or Selendroid have made it possible to run tests against mobile apps using the JSONWP. The JSONWP's current incarnation isn't enough to automate all the new behaviors that mobile apps support, however. Complex gestures, multiple device orientations, airplane mode, and the ability to use both native and web contexts, for example, are all essential to mobile automation.

    For this reason the leaders of the Selenium project, in concert with other Selenium-based projects like Appium and Selendroid, met to discuss the future of the JSONWP. We've been working on its next version, called the "Mobile JSON Wire Protocol" (MJSONWP). Appium and Selendroid already implement much of the MJSONWP spec. In this talk I'll dive into the specifics of the MJSONWP extensions, how they relate to the original JSONWP, and how the Selenium clients have begun to implement them.
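
    As a small taste of those extensions, the context commands (GET /session/:id/contexts, POST /session/:id/context) that Appium already implements let a single session switch between native and webview automation. Here is a hedged sketch using the Node "wd" client; the capabilities and app path are placeholders.

    // Switching between native and webview contexts via the MJSONWP context commands.
    var wd = require('wd');

    var driver = wd.promiseChainRemote('localhost', 4723);      // a local Appium server

    driver
      .init({ platformName: 'iOS', deviceName: 'iPhone Simulator', app: '/path/to/MyApp.app' })
      .contexts()
      .then(function (contexts) {
        console.log('Available contexts:', contexts);           // e.g. ['NATIVE_APP', 'WEBVIEW_1']
        return driver.context(contexts[contexts.length - 1])    // hop into the webview...
          .then(function () { return driver.title(); })         // ...and use ordinary WebDriver commands
          .then(function (title) { console.log('Webview title:', title); })
          .then(function () { return driver.context('NATIVE_APP'); });  // back to native automation
      })
      .fin(function () { return driver.quit(); })
      .done();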

    Finally, I will talk about the future of the MJSONWP and how it's related to the current and future versions of the WebDriver spec. I'll share how you can help with the creation of the MJSONWP and discuss issues with the authors of the new spec before the API is set in stone. We need the help of everyone who's involved in mobile automation to come up with the best and most future-proof version of the MJSONWP. Ultimately, your understanding of how Selenium works will be improved, and you'll have a much better handle on how projects like Appium and Selenium work together to make sure you have the best automation methods available.

  • 45 mins
    Talk
    Intermediate

    There has been a recent explosion in second-screen technologies such as Chromecast, but designing test automation for second-screen applications is far from straightforward. This new paradigm lacks major automated tool support, and coordinating test execution across multiple devices is tricky and error-prone.

    Our automation solution uses WebdriverJS and WebSockets to perform end-to-end test automation that covers our web player controller and second screen application.

    Learn about our approach to second-screen automation which we’ve used to build a reactive, responsive test suite. We’ll describe our solutions to synchronizing test flow between the controller and target device, validation on the device, targeting different integration components, and device management.
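
    The coordination idea can be sketched roughly as follows: drive the controller page with WebdriverJS while waiting on a WebSocket for events reported by the second-screen app. The URL, event names and payloads below are hypothetical, not the speakers' actual harness.

    // Rough sketch: WebdriverJS drives the controller, a WebSocket reports device state.
    var webdriver = require('selenium-webdriver');
    var WebSocket = require('ws');

    var driver = new webdriver.Builder().forBrowser('chrome').build();
    var deviceSocket = new WebSocket('ws://test-harness.local:9090/device-events');   // hypothetical

    // Resolve once the second-screen app reports the named event.
    function waitForDeviceEvent(name, timeoutMs) {
      return new Promise(function (resolve, reject) {
        var timer = setTimeout(function () { reject(new Error('timed out waiting for ' + name)); }, timeoutMs);
        deviceSocket.on('message', function (raw) {
          var event = JSON.parse(raw);
          if (event.name === name) { clearTimeout(timer); resolve(event); }
        });
      });
    }

    // Drive the web player controller, then validate on the device.
    driver.get('https://player.example.com')                                          // hypothetical
      .then(function () { return driver.findElement(webdriver.By.id('cast-button')).click(); })
      .then(function () { return waitForDeviceEvent('playbackStarted', 15000); })
      .then(function (event) { console.log('Second screen is playing:', event.mediaId); })
      .then(function () { return driver.quit(); });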

  • James Farrier
    Xiaoxing Hu
    1 year ago
    Sold Out!
    45 mins
    Demonstration
    Intermediate

    If a test fails in the woods and no one is there to see it, does anyone care? Does anyone even notice? What happens when failing tests become the norm and you can't see the wood for the trees?

     

    After watching last year's Allure Report presentation, I was inspired. Selenium tests (and automation tests in general) are often poorly understood by the team as a whole. Reports and emails go unread, with failing tests becoming an expected outcome rather than a glaring red flag. We looked at what Allure brought to the table and from that base created a dashboard which was designed to:

    • Display the results of test runs in a way that is useful to managers, testers and the rest of the development team, including tools to filter out specific test runs and view the overall trend of the test run results.
    • Make debugging tests easier by grouping errors, displaying history of test results, filtering tests and offering visual comparison of test runs.
    • Help mitigate the problems flaky tests cause with test run result reporting (say that three times fast).
    • Help with our mobile device certification process, by easily providing a view to compare test runs across devices.

    Since its creation, the dashboard has been used and praised by everyone from managers through to developers, with our full suite of tests (from unit to integration to Selenium and Appium) being stored on the dashboard. We've managed to:

    • Decrease the time taken to debug test cases.
    • Increase the visibility of all our test suites, with managers having a better idea of how our Selenium test suite is progressing and testers better understanding the coverage of unit tests.
    • Focus the organization on quality.

    We are working with legal at present to have this project open sourced and available to all prior to Selenium Conf 2015.

  • Distributed Automation Using Selenium Grid / AWS / Autoscaling

    Ragavan Ambighananthan
    2 years ago
    Sold Out!
    45 mins
    Talk
    Advanced

    Speed of UI automation has always been an issue when it comes to Continuous Integration / Continuous Delivery. If the UI automation suite takes 3 hours to complete, then any commit that happens during this time will not be visible in the test environment, because the next deployment will happen only after 3 hours.

    With 2000+ developers and an average of 250+ check-ins per day, the above issue is replicated 250+ times every day. This is not productive, and the feedback cycle is super slow!

    Another issue is that, with 35+ different project teams each using 10 or more different Jenkins jobs to run their UI automation, there are 350+ jobs in total. Individual teams need to go through the pain of managing their own Jenkins jobs; it is duplicated effort and a waste of time. Automation teams need to spend their time writing reliable automation, not managing Jenkins jobs.

    The solution is to reduce the UI automation run time from hours to minutes and to use only a handful of jobs to run the distributed automation!

    Goal: to run all UI automation scenarios within the time taken by the longest test case

  • Selenium Today vs. Selenium Tomorrow: Digital as the Convergence of Mobile & Web Programs

    Sveta Kostinsky
    2 years ago
    Sold Out!
    45 mins
    Talk
    Beginner

    Today, mobile is increasingly trumping web as the most important brand engagement point; enterprises are moving away from running mobile and web projects independently of each other. The rapid adoption of responsive web encourages teams to discover one approach to measuring software quality regardless of form factor.

     

    Selenium is the current market-leading solution for web testing, but how does it stand with mobile? The truth is that working with Selenium presents a few challenges, including:

    • Building and maintaining an internal structure to support it
    • Bridging an architectural gap
    • Requirements demand support for unattended test execution
    • Lack of real network conditions for mobile testing

     

    There is a solution to address these challenges!

    Let’s work through a demo and show how to test mobile & web in parallel with Selenium.
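
    One simple way to read "mobile & web in parallel" is pointing the same WebDriver scenario at two different remote endpoints at once. A hedged sketch with the Node selenium-webdriver bindings; the endpoints and capabilities are placeholders for whatever grid, Appium server or device cloud you use.

    // Run one scenario against a desktop browser and a mobile browser in parallel.
    var webdriver = require('selenium-webdriver');

    function runScenario(serverUrl, capabilities) {
      var driver = new webdriver.Builder()
        .usingServer(serverUrl)
        .withCapabilities(capabilities)
        .build();
      return driver.get('https://m.example.com')             // placeholder site
        .then(function () { return driver.getTitle(); })
        .then(function (title) {
          console.log((capabilities.platformName || capabilities.browserName) + ' -> ' + title);
        })
        .then(function () { return driver.quit(); });
    }

    Promise.all([
      // Desktop web, e.g. a local Selenium Grid hub.
      runScenario('http://localhost:4444/wd/hub', { browserName: 'chrome' }),
      // Mobile web via an Appium server driving an emulator or real device.
      runScenario('http://localhost:4723/wd/hub', {
        platformName: 'Android', deviceName: 'Android Emulator', browserName: 'Chrome'
      })
    ]).then(function () { console.log('Both form factors passed.'); });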

  • Visualizing Real User Experience Using Integrated Open Source Stack (Selenium + Jmeter + Appium + Visualization tools)

    Rajesh Sarangapani
    Prabhu Epuri
    1 year ago
    Sold Out!
    45 mins
    Demonstration
    Advanced

    The traditional approach to performance testing does not include client-side processing time (i.e. DOM content load, page render, JavaScript execution, etc.) as part of response times; performance tests have always been conducted to stress the server, so tools like JMeter have been very popular for executing tests. With the increasing complexity of client-side architectures (web, browser, mobile), it has become important to understand the real user experience. Commercial tools have started to provide features that give insights into the real user experience after the bytes are transferred to the client. With the ability to call Selenium scripts via JMeter, conducting real-user-experience tests using an open source stack has opened up new avenues. This approach:

    • Provides page load times similar to the on-load time of real browsers
    • Generates a HAR file with the following statistics:
    • Details and a summary of request times and content types
    • A waterfall chart with page download time breakdown statistics such as DNS resolution time, connection time, SSL handshake time, request send time, wait time and receive time

    By integrating these open source tools, we can provide the same insights that commercial off-the-shelf tools would offer. At Gallop we have implemented this for multiple clients, providing them insights into various client-side bottlenecks, which helped us deliver a greater value proposition.
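
    For readers who have not seen the Selenium-via-JMeter combination, the usual glue is the WebDriver Sampler from the JMeter plugins project, whose script body is plain JavaScript. A rough sketch follows; the URL and locator are placeholders, and because the timing brackets a real browser interaction, client-side rendering is included in the measured sample. The HAR statistics mentioned above are typically captured by routing the browser through a recording proxy (for example BrowserMob Proxy) rather than by JMeter itself.

    // WebDriver Sampler script (JMeter plugins) -- rough sketch with placeholder URL/locator.
    // WDS.browser is the WebDriver instance JMeter injects; WDS.sampleResult brackets
    // the measured interval, so page render time is part of the recorded sample.
    var pkg = JavaImporter(org.openqa.selenium);

    WDS.sampleResult.sampleStart();                          // start timing
    WDS.browser.get('https://www.example.com/search');       // full page load in a real browser
    var searchBox = WDS.browser.findElement(pkg.By.name('q'));
    searchBox.sendKeys(['real user experience']);
    WDS.sampleResult.sampleEnd();                            // stop timing

    WDS.log.info('Measured page title: ' + WDS.browser.getTitle());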

  • An Introduction to the World of Node, Javascript & Selenium

    James Eisenhauer
    1 year ago
    Sold Out!
    45 mins
    Talk
    Beginner

    Ever wanted to write Selenium code in Node.js? There seems to be a new JavaScript library written every hour! Entering the world of Node.js can be a daunting task. This session will teach you everything you need to know to make the right decisions when selecting which libraries to use on your new Node.js Selenium project, and what the possible challenges will be.
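
    For anyone completely new to this world, the official selenium-webdriver package is the usual starting point; here is a minimal example of the kind of code the session builds up to (the site and search term are arbitrary).

    // Minimal Node.js Selenium script using the official selenium-webdriver package.
    var webdriver = require('selenium-webdriver');
    var By = webdriver.By;
    var until = webdriver.until;

    var driver = new webdriver.Builder()
      .forBrowser('firefox')
      .build();

    driver.get('https://www.google.com')
      .then(function () {
        return driver.findElement(By.name('q')).sendKeys('selenium conf', webdriver.Key.RETURN);
      })
      .then(function () { return driver.wait(until.titleContains('selenium conf'), 10000); })
      .then(function () { return driver.getTitle(); })
      .then(function (title) { console.log('Landed on: ' + title); })
      .then(function () { return driver.quit(); });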

     

     

  • Blazing Fast UI Validation - 5000 Reliable Tests in 10 Minutes on One Machine

    Russell Rutledge
    1 year ago
    Sold Out!
    45 mins
    Talk
    Advanced

    A big blocker for putting a website on truly continuous production delivery is the amount of time it takes to validate that the site works correctly.  Tests themselves take time to run, and test results are unreliable to the point where it takes a human to investigate and interpret them.  When counting the time that it takes to both run and interpret results, test runs for an enterprise web site can take an entire day from inception to useful result.

    This session describes common points of failure in test execution that add both latency and unreliability, and what can be done to overcome them while still preserving the value of UI validation.  We'll discuss why, after addressing these concerns, UI validation can be unblocked to reliably field thousands of scenarios on a local machine in a matter of minutes.

  • A Large-scale, Data-driven Company's Journey of Going From Manual to Automated Testing In 6 Months

    David Giffin
    1 year ago
    Sold Out!
    45 mins
    Talk
    Beginner

    Manual Testing.  Depending on how you've been influenced by those two simple words, reactions vary from slight disgust to full-on depression.  Of course, the solution is clear: automate, but how do you get there when your company is continually pushing out the next big feature?  As the set of features to cover increases, the lack of scalability of manual testing becomes more apparent.

     

    This is a problem that we struggled with at our company.  Automation tactics were explored and implemented, but problems persisted as proposed solutions did not cater to the demands of the manual testers.

     

    After years of failure and disappointment, our latest stint resulted in success.  Not only do we have hundreds of automated tests across various platforms (mobile and web) and products, but manual testing has been eliminated with zero casualties.  As we move forward towards Continuous Delivery and improved automation performance, we wanted to take this moment to look back and share stories of failure and success.

  • What Are We Testing, Anyway?

    Titus Fortner
    1 year ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    Testing strategies and the role of DOM to Database Testing in a world of micro services and client side MVCs.

    The trends in software development are making UI testing increasingly difficult. Sites are leveraging more dynamic interactions and moving toward Single Page Applications. Gone are the days when the term “and the page finishes loading” makes any sense. This shift is dramatically increasing the number of flaky tests as well as the costs of such testing relative to the benefits, leaving many organizations wondering if they are worth doing at all. 

    The approach to testing that is “good enough” for any given organization is going to vary by context. In this talk, I’ll cover some different testing options and the advantages and disadvantages to each. We’ll discuss the dangers of mocking and stubbing, the problems with relying on testing journeys, and dealing with bloated test suites that are difficult to maintain.

    Another trend in software development is away from monolithic architectures and toward micro services and service oriented architectures. This approach provides opportunities for decreasing the costs and overhead of UI testing while still maintaining all of the benefits of DOM to Database verification.

  • Selenium Wat!

    Oren Rubin
    1 year ago
    Sold Out!
    45 mins
    Case Study
    Intermediate

    Every language and framework which lives as long as Selenium has its fuckups... and we're here to embrace them and joke about them.

    E.g. JS Wat https://www.youtube.com/watch?v=FqhZZNUyVFM

  • Android Mobile Device Grid & CI - Getting Started

    Justin Ison
    1 year ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    In the modern era, we have many different cloud testing services to choose from. These cloud services are useful and help reduce the burden of building and maintaining your own Selenium Grid environment. However, there are many scenarios in which you need your tests running locally and quickly: for example, you work for the government (or a government agency), you have sensitive software or data you cannot expose to the cloud, or the service costs are too expensive for your organization.

    This presentation will feature getting started with setting up your own mobile device grid, running your tests in parallel, running in CI (Jenkins), and the lessons I have learned along the way.

     

  • Surendran Ethiraj
    1 year ago
    Sold Out!
    45 mins
    Demonstration
    Intermediate

    The evolution of test automation started with automation tools that had record-and-playback features. This allowed automation testers to record and structure scripts in such a way that they could be reused. Tools like Selenium, which provided APIs, could interact with different browsers, and automation testers could use these APIs to interact with web applications. Additionally, it became possible to develop frameworks whose components could be reused. Currently, the focus has shifted more and more towards the design of frameworks rather than just the tools, so that the testing framework can be integrated with test management applications and continuous integration tools to aid test-driven development.

    With that background, we have come up with certain Value Added Services (VAS), a step ahead of developing functional automation scripts. Imagine creating an automation framework which will not just check the functionality of the application, but also check security, page performance, page layout and accessibility, and produce output that can trigger other aspects of testing.

    This paper presents three of the Value Added Services that we offer. We are working on creating many more such services on top of the Automation Framework.

  • Andrew Krug
    1 year ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    Responsive website design has enabled mobile phones and tablets to fundamentally change how we interact with the internet. Now we have instant access to any website we choose to visit, and this causes headaches for testers, especially automation testers.

    This changes how automation, specifically Selenium, is implemented: the test suite needs to be maintainable, which is difficult, and it will become unruly if it is not maintained.

    The talk will be specifically about responsive websites; however, the same techniques can be applied to native app testing.

    Utilizing a test case generator allows the test conditions (browser, OS and resolution) to exist outside of the test itself, so a single test can run against all combinations without having to code for each option explicitly. With the different options outside of the test, the driver is easily instantiated and the browser window is resized prior to test execution (a rough sketch of this idea follows below).
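
    A minimal sketch of that idea with the Node selenium-webdriver bindings: the browser/OS/resolution matrix lives in data, and a small factory builds and sizes the driver before the single test body runs. The matrix values, grid URL and window-sizing call (the older setSize API) are illustrative assumptions, not the speaker's actual generator.

    // Config-driven driver factory: the test never mentions browser, OS or resolution.
    var webdriver = require('selenium-webdriver');

    var matrix = [
      { browserName: 'chrome',  platform: 'WINDOWS', width: 1920, height: 1080 },
      { browserName: 'firefox', platform: 'LINUX',   width: 1024, height: 768 },
      { browserName: 'chrome',  platform: 'ANY',     width: 375,  height: 667 }   // phone-sized viewport
    ];

    function buildDriver(condition) {
      var driver = new webdriver.Builder()
        .usingServer('http://localhost:4444/wd/hub')          // e.g. a Selenium Grid hub
        .withCapabilities({ browserName: condition.browserName, platform: condition.platform })
        .build();
      return driver.manage().window().setSize(condition.width, condition.height)
        .then(function () { return driver; });
    }

    // One responsive test, executed against every generated condition in turn.
    function navigationIsUsable(driver) {
      return driver.get('https://www.example.com')
        .then(function () { return driver.findElement(webdriver.By.css('nav')).isDisplayed(); });
    }

    matrix.reduce(function (chain, condition) {
      return chain.then(function () {
        return buildDriver(condition).then(function (driver) {
          return navigationIsUsable(driver)
            .then(function (ok) { console.log(condition.browserName, condition.width + 'px ->', ok); })
            .then(function () { return driver.quit(); });
        });
      });
    }, Promise.resolve());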

    As a solo (or two-person team) automation test engineer for 4 years across more than 5 projects, these are my tools and techniques for making your automated test suite not only maintainable but also adaptable to any device you need to test, with minimal overhead.

  • Challenges of the Mobile Cloud

    Jason Watt
    1 year ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    Creating a mobile app is the new cross-platform problem. The major mobile platforms tend to gear their development tool chains towards individuals and their workstations.  But what if you want to introduce a CI solution to this environment? What if your app is launching on more than one platform and there's a team of 20+ developers working on it? What if your tests are more than just Selenium-based?

    This is normally where you can look to the cloud for scale but mobile has a ton of challenges to do so.  Come and learn from some of the challenges and pitfalls I've encountered while working towards this goal.

  • Selenium meta hub – scalable and redundant infrastructure

    Mike Levin
    2 years ago
    Sold Out!
    45 mins
    Talk
    Intermediate
     
    Selenium Grid is widely used in many companies and projects. Unfortunately, with the current open source implementation one cannot run more than one hub, which can cause various problems due to hardware or network instability. A single-hub architecture is also hard to scale; at the very least it requires hardware upgrades for the hub.
    At the same time, many teams implement their own internal solutions, which are usually not shared because they are team- or organization-specific or rely on custom Selenium hacks.
    At Yandex we have had a multi-hub solution for more than 5 years. At the same time, we do our best to avoid making custom internal patches to Selenium.
    During these years we used a client-side balancing approach: client applications always obtained a browser via a special internal library, which knew the configuration of all the hubs and browsers and searched for an available node on request.
    But when it came to different test frameworks, different languages and different runtimes, this approach became difficult to support. As testing practices move from test engineers to development teams, the diversity of frameworks and runtimes increases. So we came up with a meta hub solution.
     
    Our meta hub solution has the following basis:
    • Stock versions of selenium and selenium grid
    • Stock web driver interface for the client
    • Virtual infrastructure. We use OpenStack for all parts of our infrastructure. It’s not necessary, but it makes sense.
    • Fixed load for each hub and node – scalability via adding new hubs with fixed volume.
    • Redundancy and scalability
    • Stateless solution for the meta hub. No storage is required to keep state between several meta hub nodes.
     
    We made a solution that includes:
    • Proxy software between the client and the multi-hub grid installation (a rough sketch of the idea follows this list)
    • Some configuration adjustments for hubs/nodes.
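
    The proxy idea can be illustrated very roughly in a few dozen lines of Node: choose a hub for every new-session request and relay all other commands to the hub that owns the session. This is only a sketch of the concept; unlike the stateless design described above, it keeps a small in-memory session map and does naive response parsing.

    // Very rough illustration of a meta hub: a reverse proxy in front of several Selenium hubs.
    var http = require('http');
    var httpProxy = require('http-proxy');

    var hubs = ['http://hub-1:4444', 'http://hub-2:4444', 'http://hub-3:4444'];   // illustrative
    var proxy = httpProxy.createProxyServer({});
    var sessionToHub = {};                       // sketch only: the real design avoids shared state
    var nextHub = 0;

    http.createServer(function (req, res) {
      var match = req.url.match(/\/session\/([^\/]+)/);
      var target;

      if (req.method === 'POST' && /\/session\/?$/.test(req.url)) {
        // New session: pick a hub (round-robin here; a real balancer would also
        // consider hub load and the requested capabilities).
        target = hubs[nextHub++ % hubs.length];
        proxy.once('proxyRes', function (proxyRes) {
          // Remember which hub answered so later commands for this session go back to it
          // (naive: assumes the sessionId arrives in a single chunk and requests are serial).
          proxyRes.on('data', function (chunk) {
            var m = chunk.toString().match(/"sessionId"\s*:\s*"([^"]+)"/);
            if (m) sessionToHub[m[1]] = target;
          });
        });
      } else if (match && sessionToHub[match[1]]) {
        target = sessionToHub[match[1]];         // existing session: route to its owning hub
      } else {
        target = hubs[0];                        // e.g. /status and other session-less calls
      }

      proxy.web(req, res, { target: target });
    }).listen(4444);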
     
    I'll talk about our solution. We are going to open source it at SeleniumConf or earlier.

     

  • Looking ahead: testing responsive, mobile, and native apps with Selenium

    Aaron Evans
    2 years ago
    Sold Out!
    45 mins
    Talk
    Intermediate

    Selenium made test automation easier and more affordable for many software development teams, but it had many limitations: it was limited to DOM manipulation in the browser, and it depended on explicit waits.

    WebDriver helped overcome some of these deficiencies and took Selenium to the next level.  Other extensions like Appium have enabled us to use the familiar Selenium API for testing mobile apps.  Proprietary frameworks allow you to integrate Selenium with native extensions and ALM tools.  But a new category of apps is coming, with responsive UIs, rich client-side JavaScript frameworks, touch screens (with pinch/zoom, swipe, rotation, etc.), interaction with native device features (such as GPS, accelerometer and local storage), and apps becoming collections of interactive services.

    Is Selenium becoming outdated?  What can we do to keep up with these new interfaces and architectures?

    In this talk, we'll discuss some of the challenges and limitations facing testers using Selenium with this new generation of apps.  We'll cover some of the solutions people are using today, and propose a new way to address these issues and others going forward.