Cross-platform, Multi-device Instant Communication Testing in Parallel using Appium and Docker

Today, over 100 million users share more than 40 billion messages per month on Hike. It's not just simple 1:1 chat messages: users can make VoIP calls or share rich multimedia content in 8 different languages in group chats with hundreds of members. Users can transfer large files (up to 100 MB) using Wi-Fi Direct, i.e. device-to-device file transfer without using the Internet. And there are many more features. How do you ensure that you can roll out a release every month without breaking any of them?

With such a large user base, which is very sensitive to frequent upgrades due to data-consumption costs, rigorously testing the app becomes extremely critical.

When we started our automation journey in 2014, we looked for a device lab that could simplify our testing effort. However, we gave up and ended up building our own setup. The reason: we need multiple devices that can communicate with each other within a single test, and we have 6000+ such tests that we want to run in parallel. While many device labs let you run tests in parallel, they don't allow the devices to communicate with each other, and it's not possible to run the same test across multiple devices. Imagine testing a group-chat flow with photo sharing, or device-to-device file transfer over a hotspot. How would you test these features?
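In Appium terms, a multi-device test means one test process holding several driver sessions at once. The sketch below shows the capability side of that; the device names, ports and commented-out server URLs are illustrative assumptions, not Hike's actual configuration.

```python
# Sketch: one test drives two devices so they can message each other.
# Device names, ports and the commented server URLs are assumptions.

def build_caps(device_name: str, system_port: int) -> dict:
    """Appium capabilities for one Android device in a multi-device test."""
    return {
        "platformName": "Android",
        "appium:automationName": "UiAutomator2",
        "appium:deviceName": device_name,
        "appium:udid": device_name,
        # Each parallel UiAutomator2 session needs its own systemPort.
        "appium:systemPort": system_port,
    }

caps_sender = build_caps("emulator-5554", 8200)
caps_receiver = build_caps("emulator-5556", 8201)

# With one Appium server per device (hypothetical URLs):
# sender = webdriver.Remote("http://localhost:4723", caps_sender)
# receiver = webdriver.Remote("http://localhost:4725", caps_receiver)
# The test sends a message from `sender` and asserts it shows up on `receiver`.
```

The key point is that both sessions live in the same test, so an assertion on the receiver can follow an action on the sender.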

 

If this interests you, join us and we'll share our learnings from trying to achieve this at Hike.

 
 

Outline/structure of the Session

1. Challenges of testing an instant messaging app across platforms.
2. Driving multi-device communication with Docker (and why not KVM?).
3. Scheduling devices and containers according to test cases.
4. Challenges in executing UI, perf, DB and server testing in parallel as part of a SINGLE automation framework.
5. Why Hike's automation model is a notch above traditional automation frameworks.

Learning Outcome

  1. End-to-end testing across platforms and multiple devices.
  2. Driving different testing aspects in parallel.
  3. How Docker helps you avoid the overhead of spawning one VM (heavyweight and time-consuming) per device.
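The Docker-per-device idea in point 3 can be sketched as a Compose file that runs one Appium server container per connected device; the image name, host ports and USB mapping below are illustrative assumptions, not Hike's actual setup.

```yaml
# Sketch only: one lightweight Appium container per device instead of one
# VM per device. Image tag, host ports and device mapping are assumptions.
services:
  appium-device-1:
    image: appium/appium            # community Appium server image
    ports:
      - "4723:4723"                 # Appium server for device 1
    devices:
      - /dev/bus/usb:/dev/bus/usb   # expose the USB-attached phone
  appium-device-2:
    image: appium/appium
    ports:
      - "4725:4723"                 # second server on a different host port
    devices:
      - /dev/bus/usb:/dev/bus/usb
```

Because containers share the host kernel, each one starts in seconds and costs megabytes, where a full guest OS per device would need gigabytes and minutes.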

Target Audience

Automation QA, QA Leads, QA Managers

Submitted 1 year ago



  • Naresh Jain

    Q & A with the Selenium Committee

    Naresh Jain
    Founder
    ConfEngine.com
    Submitted 1 year ago
    Sold Out!
    45 mins
    Keynote
    Intermediate


  • Simon Stewart

    Selenium: State of the Union

    Simon Stewart
    WebDriver Creator
    Facebook
    Submitted 1 year ago
    Sold Out!
    45 mins
    Keynote
    Intermediate


  • Priti Biyani
    Consultant
    ThoughtWorks
    Submitted 1 year ago
    Sold Out!
    90 mins
    Case Study
    Intermediate

    These days most apps are developed across multiple platforms (iOS, Android, Windows) as well as the web and mobile web, to serve the whole user base.

    When apps are developed across platforms, most of the functionality they provide is very similar; the thing that varies is the PLATFORM.

    In a rapid development cycle, tools that let you write once and reuse across multiple platforms make development much faster. But at the same time, if we maintain a different automation suite for each platform, it becomes very difficult to keep pace with ongoing functionality changes. This is the exact problem we faced, and the solution we came up with is "One Page to test them all! - A cross-platform mobile automation framework!"

     

    Page Object Model

    Well, Page Object Model was again a natural fit for this framework. Most implementations of POM recommend different POMs for each platform. But we wanted to have a single Page Object Model for all the 3 platforms to ensure maximum code reuse and reduce overall time spent in adding new automation.
     

    Single Page Object Model across platforms

    This was complicated because we had native screens as well as webview screens and so it was not possible to use the same Page Object. To solve this, we introduced abstractions for the elements on the screen and encapsulated the respective native driver implementations.

    This also allowed us to implement common automation tasks in one place, e.g. waiting for new pages to load, so that the code is not repeated across multiple step definitions and platforms. It helped us move towards thinking in higher-level domain concepts rather than low-level UI interactions.

    So, in summary, we write our tests for one platform and run them for all with an abstraction layer in place.
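The abstraction described above can be sketched roughly as follows; the classes, locator values and platform keys are hypothetical illustrations, not the framework's actual code.

```python
# Minimal sketch of one Page Object shared by all platforms: the page logic
# is written once, and only the locators vary per platform. All names and
# locators here are hypothetical.

class Element:
    """A logical screen element that resolves to a per-platform locator."""
    def __init__(self, locators: dict):
        self._locators = locators

    def locator_for(self, platform: str):
        return self._locators[platform]

class LoginPage:
    """Single Page Object; platform differences live only in the locators."""
    username = Element({
        "android": ("id", "com.example:id/username"),
        "ios": ("accessibility id", "username"),
        "web": ("css selector", "#username"),
    })

    def __init__(self, driver, platform: str):
        self.driver = driver
        self.platform = platform

    def enter_username(self, value: str):
        by, selector = self.username.locator_for(self.platform)
        self.driver.find_element(by, selector).send_keys(value)
```

The same `LoginPage` is instantiated with an Android, iOS or web driver, so a test written once runs on all three platforms.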

     

     

  • Pooja Shah

    Can we Have it All! {Selenium for web, mobile and Everything what a Product needs}

    Pooja Shah
    Lead Automation Engineer
    MoEngage
    Submitted 1 year ago
    Sold Out!
    45 mins
    Experience Report
    Advanced

    Problem Statement

    Expected Result: Mobile is taking over the world and wow! my product works awesomely everywhere.

    Actual Result: OMG! It breaks on iOS 6 :-(

    Holy Jesus! Did we also test on Firefox version 30.0 on a Windows machine?

    Having an application on all major platforms (desktop web, mobile web, native mobile apps, etc.) brings the daunting requirement of verifying every single feature before giving a +1 for release. It becomes essential for QA folks to test and provide feedback as quickly as possible, which immediately takes complete reliance on manual testing out of the question and pushes the need for automated testing with a scalable automation framework that can embrace any future product need.

    We surely have 5 questions to answer before we think about such a solution:

    1. Do we have a single test codebase that can test the product everywhere, with a simple mechanism to trigger and manage the tests?
    2. What is the plan to reduce time to market when so many tests run before each code push?
    3. Do we have a 1-click solution to monitor all the test results in one go and assert a thumbs-up for release?
    4. Is continuous integration in place?
    5. How can we integrate all four of the above using the same beautiful tool, Selenium, along with other aligned open-source projects like Appium, shell scripting and Jenkins?
  • Marcus Merrell

    Automated Analytics Testing with Open Source Tools

    45 mins
    Talk
    Intermediate

    Analytics are an increasingly important capability of any large web site or application. When a user selects an option or clicks a button, dozens—if not hundreds—of behavior-defining “beacons” fire off into a black box of “big data” to be correlated with the usage patterns of thousands of other users. In the end, all these little data points form a constellation of information your organization will use to determine its course. But what if it doesn’t work? A misconfigured site option or an errant variable might seem insignificant, but if 10,000 users are firing 10,000 incorrect values concerning their click patterns, it suddenly becomes a problem for the QA department―a department which is often left out of conversations involving analytics.

    Join Marcus Merrell to learn how analytics work, how to get involved early, and how to integrate analytics testing into the normal QA process, using Selenium and other open source tools, to prevent those misfires from slipping through.
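One way to automate the checks Marcus describes is to capture outgoing beacon requests (for example through a proxy) and assert on their query parameters. The URL, parameter names and helpers below are hypothetical, not tools from the talk.

```python
# Hypothetical sketch: validate a captured analytics beacon URL.
# The parameter names ("event", "value") are illustrative, not a real schema.
from urllib.parse import urlparse, parse_qs

def beacon_params(url: str) -> dict:
    """Flatten the query string of a captured beacon request."""
    query = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in query.items()}

def check_beacon(url: str, expected: dict) -> bool:
    """True if every expected key/value pair is present in the beacon."""
    params = beacon_params(url)
    return all(params.get(key) == value for key, value in expected.items())
```

A test would drive the UI with Selenium, collect the request URLs seen by the proxy, and run `check_beacon` against each expected event.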

  • Dakshinamurthy Karra

    Java Swing, Java FX application testing using Selenium WebDriver

    45 mins
    Demonstration
    Intermediate

    Marathon is an open-source test automation suite for Java Swing and JavaFX applications. It provides Selenium/WebDriver bindings for executing test scripts against Java applications.

    In this workshop we walk through the steps to set up an environment for testing a Java Swing application.

  • Bret Pettichord

    Checking as a Service

    Bret Pettichord
    Software Architect
    HomeAway
    Submitted 1 year ago
    Sold Out!
    45 mins
    Keynote
    Beginner

    This talk suggests a reframe in how we understand the business value of automated testing. One shift is to see automation as "checking" rather than "testing". Another is the shift from software delivery to service delivery, including fully embracing DevOps. The resulting approach could be called Checking as a Service, or CheckOps, and it forces us to rethink traditional automation priorities. In this talk, Bret will explain how this change in approach has affected teams he's worked with and how you can use it to improve your ability to deliver valued services.

  • Adam Carmi
    Co-Founder and VP R&D
    Applitools
    Submitted 1 year ago
    Sold Out!
    45 mins
    Talk
    Beginner

    Automated visual testing is a major emerging trend in the dev / test community. In this talk you will learn what visual testing is and why it should be automated. We will take a deep dive into some of the technological challenges involved with visual test automation and show how modern tools address them. We will review available Selenium-based open-source and commercial visual testing tools, demo cutting edge technologies that enable running cross browser and cross device visual tests at large scale, and show how visual test automation fits in the development / deployment lifecycle.

    If you don’t know what visual testing is, if you think that Sikuli is a visual test automation tool, if you are already automating your visual tests and want to learn more on what else is out there, if you are on your way to implement Continuous Deployment or just interested in seeing how cool image processing algorithms can be, this talk is for you!

  • Kumar Pratyush

    Performance Testing a Mobile App Used by 100M Users

    45 mins
    Case Study
    Intermediate

    Hike is used by 100 million users, and many of them have cheap smartphones (~$120 USD) that can install no more than 3 mobile apps.

    So the question is: should testing an app be limited to its functionality? At Hike, we believe "Performance is Queen!" For our users, if we misuse critical resources such as battery, CPU, network and memory, it's a deal-breaker. Hence perf testing is very important.

    During the initial days of Hike, we were very reactive and only did (manual) perf testing when our users reported issues.

    Now, every sprint (2 weeks) and every public release (monthly), we run our automated perf tests. We measure our app's performance using several app-specific use cases in 4 key areas:

    • CPU,
    • Memory,
    • Battery and
    • Network (data consumption).
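Resource numbers like these can be sampled on Android with `adb shell dumpsys`; the sketch below parses a package's CPU share out of `dumpsys cpuinfo` output. The package name and parsing details are illustrative assumptions, not Hike's actual tooling.

```python
# Sketch: extract a package's total CPU % from `adb shell dumpsys cpuinfo`
# output. The sample line mimics dumpsys format; the package name is assumed.
import re
from typing import Optional

def cpu_percent(dumpsys_output: str, package: str) -> Optional[float]:
    """Return the total CPU % reported for `package`, or None if absent."""
    pattern = re.compile(r"([\d.]+)%\s+\d+/" + re.escape(package) + r":")
    for line in dumpsys_output.splitlines():
        match = pattern.search(line)
        if match:
            return float(match.group(1))
    return None

sample = "  4.1% 1234/com.bsb.hike: 2.9% user + 1.2% kernel"
```

Run every sprint, a parser like this can feed each build's CPU numbers into a trend report.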

    Hike's CPU Utilization

    We also benchmark the following scenarios for app latency:

    • App launch time upon Force Stop
    • App launch time upon Force Kill
    • App's busiest screen opening time
    • Scrolling latency in different parts of the app
    • Contact loading time in Compose screen
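Launch times like these can be measured with Android's activity manager, which reports TotalTime when an activity is started with the -W flag (`adb shell am start -W <component>`). The component name in the sample is an assumption; the parser is a sketch, not Hike's actual harness.

```python
# Sketch: parse the TotalTime (ms) reported by `adb shell am start -W`.
# The sample output mimics `am start -W`; the component name is hypothetical.
import re
from typing import Optional

def launch_time_ms(am_output: str) -> Optional[int]:
    """Extract TotalTime in milliseconds from `am start -W` output."""
    match = re.search(r"^TotalTime:\s*(\d+)", am_output, re.MULTILINE)
    return int(match.group(1)) if match else None

sample = """Starting: Intent { cmp=com.bsb.hike/.HomeActivity }
Status: ok
ThisTime: 710
TotalTime: 710
WaitTime: 752
Complete"""
```

Force Stop and Force Kill scenarios then become two runs of the same measurement, compared across builds.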

    Hike App Benchmark

    We still have a long way to go in our perf-testing journey at Hike. But we feel we have some key learnings that are worthwhile to share with the community. Join us for a fast-paced perf-testing session.