Hike is used by 100 million users, and many of them own budget smartphones (~$120 USD) that can hold no more than 3 mobile apps.

So the question is: should testing an app be limited to its functionality? At Hike, we believe "Performance is Queen!" For our users, misusing critical resources such as Battery, CPU, Network and Memory is a deal-breaker. Hence perf-testing is very important.

During the initial days of Hike, we were very reactive and only did (manual) perf testing when our users reported issues.

Now, every Sprint (2 weeks) and every public release (monthly), we run our automated perf tests. We measure the app's performance with several app-specific use cases across 4 key areas (a minimal measurement sketch follows the list):

  • CPU
  • Memory
  • Battery
  • Network (data consumption)

[Figure: Hike's CPU utilization]
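To make the measurement concrete, below is a minimal sketch, assuming a connected Android device with adb on the PATH. The package name is a hypothetical placeholder and the dumpsys parsing is simplified, so treat this as an illustration rather than our actual harness.

# A minimal sketch, assuming a connected Android device and adb on the PATH.
# The package name is a placeholder, not Hike's real application id.
import re
import subprocess

PACKAGE = "com.example.app"  # hypothetical package name


def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout


def cpu_percent(package):
    """Parse the app's CPU share from `dumpsys cpuinfo` (lines look like '12% 1234/<package>: ...')."""
    for line in adb("shell", "dumpsys", "cpuinfo").splitlines():
        if package in line:
            match = re.search(r"([\d.]+)%", line)
            if match:
                return float(match.group(1))
    return 0.0


def memory_pss_kb(package):
    """Parse total PSS in KB from `dumpsys meminfo <package>`; the exact layout varies by Android version."""
    out = adb("shell", "dumpsys", "meminfo", package)
    match = re.search(r"TOTAL(?:\s+PSS:)?\s+(\d+)", out)
    return int(match.group(1)) if match else 0


if __name__ == "__main__":
    print("CPU: %s%%  PSS: %s KB" % (cpu_percent(PACKAGE), memory_pss_kb(PACKAGE)))

Battery and data consumption can be sampled in a similar way from dumpsys batterystats and dumpsys netstats, although their output needs more involved parsing.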

We also benchmark the following scenarios for app latency (a launch-time sketch follows the list):

  • App launch time after a Force Stop
  • App launch time after a Force Kill
  • Opening time of the app's busiest screen
  • Scrolling latency in different parts of the app
  • Contact loading time on the Compose screen

[Figure: Hike app benchmark]
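As an illustration of the launch-time benchmarks above, here is a small sketch under the same assumptions (Android device over adb, placeholder component name): am start -W blocks until the activity is drawn and reports TotalTime in milliseconds, which approximates the cold launch after a Force Stop.

# A minimal cold-launch sketch, assuming a connected Android device and adb on the PATH.
# The component name is a placeholder, not Hike's real launcher activity.
import re
import statistics
import subprocess

COMPONENT = "com.example.app/.MainActivity"  # hypothetical launcher activity


def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout


def cold_launch_ms(component):
    """Force-stop the app, relaunch it and parse TotalTime (ms) from `am start -W`."""
    package = component.split("/")[0]
    adb("shell", "am", "force-stop", package)
    out = adb("shell", "am", "start", "-W", "-n", component)
    match = re.search(r"TotalTime:\s+(\d+)", out)
    return int(match.group(1)) if match else -1


if __name__ == "__main__":
    samples = [cold_launch_ms(COMPONENT) for _ in range(5)]
    print("cold launch (ms): median=%s samples=%s" % (statistics.median(samples), samples))

Scrolling latency can be approximated in a similar spirit from the frame timings reported by dumpsys gfxinfo <package>.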

We still have a long way to go on our perf-testing journey at Hike, but we feel we have some key learnings that are worthwhile to share with the community. Join us for a fast-paced perf-testing session.

 
 

Outline/Structure of the Session

  1. Challenges with utilization of mobile resources
  2. Tool selection
  3. How/what to benchmark in performance
  4. App-specific parameters to benchmark
  5. How Hike ensures performance
  6. Driving automated tests to measure performance on mobile
  7. Next steps in our journey (making perf-tests part of CI builds, etc.)

Learning Outcome

  1. Benchmarking a mobile app on parameters such as CPU and memory utilization, data consumption and battery usage
  2. Benchmarking app-specific tests
  3. Automating perf-tests

Target Audience

Performance Testers, QA Managers, QA Leads, Tool Developers


    • Liked: Naresh Jain - Q & A with the Selenium Committee
      Founder, ConfEngine.com
      45 mins / Keynote / Intermediate

    • Liked: Simon Stewart - Selenium: State of the Union
      WebDriver Creator, Facebook
      45 mins / Keynote / Intermediate

    • Liked: Pooja Shah - Can we Have it All! {Selenium for web, mobile and Everything what a Product needs}
      Lead Automation Engineer, MoEngage
      45 mins / Experience Report / Advanced

      Problem Statement

      Expected Result: Mobile is taking over the world and wow! my product works awesomely everywhere.

      Actual Result:  OMG! it breaks on iOS 6 :-( 

      Holy Jesus! Did we also test on Firefox 30.0 on a Windows machine?

      Having an application on all major platforms (desktop web, mobile web, native mobile apps, etc.) brings a daunting requirement: every single feature must be verified before giving a +1 for release. It therefore becomes essential for the QA folks to test and provide proper feedback as quickly as possible, which immediately takes complete reliance on manual testing out of the question and pushes the need for automated testing with a scalable automation framework that can embrace any future product need.

      We surely have 5 points to answer before we think about such a solution:

      1. Do we have a single test codebase that can test the product everywhere, with a simple mechanism to trigger and manage the tests?
      2. Where is the plan to reduce time to market when so many tests run before each code push?
      3. Do we have a one-click solution to monitor all the test results in one go and assert a thumbs-up state for release?
      4. Is continuous integration in place?
      5. How can I integrate all of the above four points using the same beautiful tool, Selenium, along with other aligned open-source projects like Appium, Shell and Jenkins?
    • Liked: Marcus Merrell - Automated Analytics Testing with Open Source Tools
      45 mins / Talk / Intermediate

      Analytics are an increasingly important capability of any large web site or application. When a user selects an option or clicks a button, dozens—if not hundreds—of behavior-defining “beacons” fire off into a black box of “big data” to be correlated with the usage patterns of thousands of other users. In the end, all these little data points form a constellation of information your organization will use to determine its course. But what if it doesn’t work? A misconfigured site option or an errant variable might seem insignificant, but if 10,000 users are firing 10,000 incorrect values concerning their click patterns, it suddenly becomes a problem for the QA department―a department which is often left out of conversations involving analytics.

      Join Marcus Merrell to learn how analytics work, how to get involved early, and how to integrate analytics testing into the normal QA process, using Selenium and other open source tools, to prevent those misfires from slipping through.

    • Liked: Dakshinamurthy Karra - Java Swing, Java FX application testing using Selenium WebDriver
      45 mins / Demonstration / Intermediate

      Marathon is an open source test automation suite for Java Swing and Java/FX applications. Marathon provides Selenium/WebDriver bindings for executing test scripts against Java applications.

      In this workshop we explore the steps by which you can set up an environment for testing a Java Swing application.

    • Liked: Vivek Upreti - Cross-platform, Multi-device Instant Communication Testing in Parallel using Appium and Docker
      45 mins / Demonstration / Intermediate

      Today over 100 million users share over 40 billion messages per month on Hike. These are not just simple 1:1 chat messages: users can make VoIP calls or share rich multimedia content in 8 different languages in group chats with hundreds of members. Users can transfer large (up to 100 MB) files using Wi-Fi Direct, i.e. device-to-device file transfer without using the Internet. And there are many more features. How do you ensure that you can roll out a release every month without breaking any of these features?

      With such a large user base, which is very sensitive to frequent upgrades due to data consumption costs, rigorously testing the app becomes extremely critical.

      When we started our automation journey in 2014, we were looking for a device lab that could simplify our testing effort. However, we gave up and ended up building our own setup. The reason: we require multiple devices that can communicate with each other within a single test, and we have 6000+ such tests which we want to run in parallel. While many device labs allow you to run tests in parallel, they don't allow the devices to communicate with each other. It is also not possible to run the same test across multiple devices. Imagine testing a group-chat flow with photo sharing, or the device-to-device file transfer over a hotspot. How would you test these features?

       

      If this interests you, join us and we'll share our learnings from trying to achieve this at Hike.

    • Liked: Bret Pettichord - Checking as a Service
      Software Architect, HomeAway
      45 mins / Keynote / Beginner

      This talk suggests a reframe in how we understand the business value of automated testing. One shift is to see automation as "checking" rather than "testing". Another is the shift from software delivery to service delivery, including fully embracing DevOps. The resulting approach could be called Checking as a Service or CheckOps, and it forces us to rethink traditional automation priorities. In this talk, Bret will explain how this change in approach has affected teams he's worked with and how you can use it to improve your ability to deliver valued services.

    • Liked: Adam Carmi - Advanced Automated Visual Testing With Selenium
      Co-Founder and VP R&D, Applitools
      45 mins / Talk / Beginner

      Automated visual testing is a major emerging trend in the dev / test community. In this talk you will learn what visual testing is and why it should be automated. We will take a deep dive into some of the technological challenges involved with visual test automation and show how modern tools address them. We will review available Selenium-based open-source and commercial visual testing tools, demo cutting edge technologies that enable running cross browser and cross device visual tests at large scale, and show how visual test automation fits in the development / deployment lifecycle.

      If you don’t know what visual testing is, if you think that Sikuli is a visual test automation tool, if you are already automating your visual tests and want to learn more on what else is out there, if you are on your way to implement Continuous Deployment or just interested in seeing how cool image processing algorithms can be, this talk is for you!