Java Swing, Java FX application testing using Selenium WebDriver

Bengaluru | Jun 24th 01:30 - 02:15 PM | ESquire Hall 1

Marathon is an open source test automation suite for Java Swing and JavaFX applications. Marathon provides Selenium/WebDriver bindings for executing test scripts against Java applications.

In this workshop we walk through the steps for setting up an environment for testing a Java Swing application.


Outline/Structure of the Demonstration

  1. Quick introduction to Marathon and Java Drivers
  2. Setting up an Eclipse environment
  3. Writing your first test
  4. Finding components
  5. Working with tables, trees, etc.
  6. Q&A
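
As a taste of steps 3 and 4, a first test might look like the following. This is a minimal sketch assuming the Marathon java-driver artifact is on the classpath; the main class name and the component selectors are placeholders, not taken from the session itself.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

import net.sourceforge.marathon.javadriver.JavaDriver;
import net.sourceforge.marathon.javadriver.JavaProfile;
import net.sourceforge.marathon.javadriver.JavaProfile.LaunchMode;

public class FirstSwingTest {
    public static void main(String[] args) {
        // Launch the application under test as a plain Java process.
        // com.example.MyApp stands in for your Swing app's main class.
        JavaProfile profile = new JavaProfile(LaunchMode.JAVA_COMMAND_LINE);
        profile.setMainClass("com.example.MyApp");

        // JavaDriver implements the standard WebDriver interface,
        // so the rest of the test reads like any Selenium script.
        WebDriver driver = new JavaDriver(profile);
        try {
            // Marathon maps CSS-style selectors onto Swing component types.
            WebElement nameField = driver.findElement(By.cssSelector("text-field"));
            nameField.sendKeys("Marathon");

            WebElement okButton = driver.findElement(By.cssSelector("button[text='OK']"));
            okButton.click();
        } finally {
            driver.quit();
        }
    }
}
```

Because `JavaDriver` is just another `WebDriver` implementation, everything you already know about Selenium waits, selectors and assertions carries over to the Swing world.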

Learning Outcome

  1. Understanding Java driver architecture
  2. Setting up a test project
  3. Writing unit and functional tests for Java applications

Target Audience

Developers, test automation engineers



Submitted 5 years ago

Public Feedback

    • Naresh Jain - Q & A with the Selenium Committee

      45 Mins

    • Simon Stewart - Selenium: State of the Union

      Project Lead, The Selenium Project
      45 Mins

    • Kumar Pratyush / Naresh Jain - Performance Testing a Mobile App Used by 100M Users

      45 Mins
      Case Study

      Hike is used by 100 million users, and many of them have cheap smartphones (~US$120) that can install no more than 3 mobile apps.

      So the question is: should testing an app be limited to its functionality? At Hike, we believe "Performance is Queen!" For our users, if we misuse critical resources such as battery, CPU, network and memory, it's a deal-breaker. Hence perf-testing is very important.

      During the initial days of Hike, we were very reactive and only did (manual) perf testing when our users reported issues.

      Now, every sprint (2 weeks) and every public release (monthly), we run our automated perf tests. We measure our app's performance using several app-specific use cases in 4 key areas:

      • CPU,
      • Memory,
      • Battery and
      • Network (data consumption)

      [Figure: Hike's CPU utilization]

      We also benchmark the following scenarios for app latency:

      • App launch time upon Force Stop
      • App launch time upon Force Kill
      • App's busiest screen opening time
      • Scrolling latency in different parts of the app
      • Contact loading time in Compose screen
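
      Launch-time benchmarks like the first two bullets can be automated with stock Android tooling. The sketch below is an assumption about how such a measurement could be scripted, not Hike's actual harness; the package and activity names are placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LaunchTimeBenchmark {
    public static void main(String[] args) throws Exception {
        // Force-stop the app so the next start is a true cold launch.
        new ProcessBuilder("adb", "shell", "am", "force-stop", "com.example.app")
                .inheritIO().start().waitFor();

        // "am start -W" waits for the launch to complete and reports timings.
        Process p = new ProcessBuilder("adb", "shell", "am", "start", "-W",
                "com.example.app/.MainActivity").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // The activity manager prints a TotalTime line in milliseconds.
                if (line.trim().startsWith("TotalTime")) {
                    System.out.println("cold launch (ms): " + line.split(":")[1].trim());
                }
            }
        }
        p.waitFor();
    }
}
```

      Running the same script after a plain back-press instead of a force-stop would give the warm-launch number for comparison.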

      [Figure: Hike app benchmark]

      We still have a long way to go in our perf-testing journey at Hike. But we feel we have some key learnings that would be worthwhile to share with the community. Join us for a fast-paced perf-testing session.

    • Pooja Shah - Can we Have it All! {Selenium for web, mobile and Everything what a Product needs}

      Lead Automation Engineer
      45 Mins
      Experience Report

      Problem Statement

      Expected Result: Mobile is taking over the world and wow! My product works awesomely everywhere.

      Actual Result: OMG! It breaks on iOS 6 :-(

      Holy Jesus! Did we also test on Firefox 30.0 on a Windows machine?

      Having an application on all major platforms (desktop web, mobile web, native mobile apps, etc.) brings the daunting requirement of verifying every single feature before giving a +1 for release. It therefore becomes essential for QA folks to test and provide proper feedback as quickly as possible, which immediately takes complete reliance on manual testing out of the question and pushes the need for automated testing with a scalable automation framework that can embrace any future product need.

      We surely have 5 questions to answer before we think about such a solution:

      1. Do we have a single test codebase that can test the product everywhere, with a simple mechanism to trigger and manage the tests?
      2. What is the plan to reduce time to market, given so many tests running before each code push?
      3. Do we have a 1-click solution to monitor all the test results in one go and assert a thumbs-up for release?
      4. Is continuous integration in place?
      5. How can we integrate all of the above 4 points using the same beautiful tool, Selenium, along with other aligned open-source projects like Appium, shell scripting and Jenkins?
    • Marcus Merrell - Automated Analytics Testing with Open Source Tools

      45 Mins

      Analytics are an increasingly important capability of any large web site or application. When a user selects an option or clicks a button, dozens—if not hundreds—of behavior-defining “beacons” fire off into a black box of “big data” to be correlated with the usage patterns of thousands of other users. In the end, all these little data points form a constellation of information your organization will use to determine its course. But what if it doesn’t work? A misconfigured site option or an errant variable might seem insignificant, but if 10,000 users are firing 10,000 incorrect values concerning their click patterns, it suddenly becomes a problem for the QA department―a department which is often left out of conversations involving analytics.

      Join Marcus Merrell to learn how analytics work, how to get involved early, and how to integrate analytics testing into the normal QA process, using Selenium and other open source tools, to prevent those misfires from slipping through.
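
      One common open-source approach to this (an assumption here, not necessarily the exact stack the talk covers) is to capture outgoing beacon requests through the browser's DevTools protocol while Selenium drives the UI. The URLs and element id below are hypothetical.

```java
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CopyOnWriteArrayList;

import org.openqa.selenium.By;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.network.Network;

public class AnalyticsBeaconTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Record every outgoing request URL as the user journey runs.
        List<String> requests = new CopyOnWriteArrayList<>();
        devTools.addListener(Network.requestWillBeSent(),
                e -> requests.add(e.getRequest().getUrl()));

        try {
            driver.get("https://example.com");
            driver.findElement(By.id("buy-button")).click(); // hypothetical element

            // Did the click fire the beacon we expect, with the right endpoint?
            boolean fired = requests.stream()
                    .anyMatch(u -> u.contains("analytics.example.com/collect"));
            System.out.println("beacon fired: " + fired);
        } finally {
            driver.quit();
        }
    }
}
```

      Parsing the captured query strings further lets a test assert on the individual beacon variables, which is exactly where misconfigured values would otherwise slip through.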

    • Vivek Upreti / Naresh Jain - Cross-platform, Multi-device Instant Communication Testing in Parallel using Appium and Docker

      45 Mins

      Today, over 100 million users share over 40 billion messages per month on Hike. It's not just simple 1:1 chat messages: users can do VoIP calls or share rich multimedia content in 8 different languages in group chats with hundreds of members. Users can transfer large (up to 100 MB) files using Wi-Fi Direct, i.e. device-to-device file transfer without using the Internet. And many more features. How do you ensure that you can roll out a release every month without breaking any of these features?

      With such a large user base, which is very sensitive to frequent upgrades due to data-consumption costs, rigorously testing the app becomes extremely critical.

      When we started our automation journey in 2014, we were looking for a device lab which could simplify our testing effort. However, we gave up and ended up building our own setup, because we require multiple devices that can communicate with each other within a single test. And we have 6000+ such tests, which we want to run in parallel. While many device labs allow you to run tests in parallel, they don't allow the devices to communicate with each other. Nor is it possible to run the same test across multiple devices. Imagine testing a group-chat flow with photo sharing, or device-to-device file transfer over a hotspot. How would you test these features?
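
      The core idea of one test driving two communicating devices can be sketched with two Appium sessions, one per device. This is a hypothetical illustration, not Hike's actual framework; the server ports, device ids and element ids are all placeholders.

```java
import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;

public class TwoDeviceChatTest {
    public static void main(String[] args) throws Exception {
        // One Appium server per device; in a Docker setup each container
        // would host a server paired with its emulator/device.
        AndroidDriver sender = newSession("http://127.0.0.1:4723/wd/hub", "emulator-5554");
        AndroidDriver receiver = newSession("http://127.0.0.1:4725/wd/hub", "emulator-5556");
        try {
            // A single test drives both ends: send on one device,
            // then verify arrival on the other.
            sender.findElement(By.id("com.example:id/message_box")).sendKeys("hello");
            sender.findElement(By.id("com.example:id/send")).click();

            String received = receiver
                    .findElement(By.id("com.example:id/last_message")).getText();
            System.out.println("receiver saw: " + received);
        } finally {
            sender.quit();
            receiver.quit();
        }
    }

    private static AndroidDriver newSession(String serverUrl, String udid) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:udid", udid);
        caps.setCapability("appium:appPackage", "com.example");     // placeholder
        caps.setCapability("appium:appActivity", ".MainActivity");  // placeholder
        return new AndroidDriver(new URL(serverUrl), caps);
    }
}
```

      Scaling this to thousands of tests then becomes a scheduling problem: each parallel test slot needs a reserved *group* of devices rather than a single one.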


If this interests you, join us and we'll share our learnings from trying to achieve this at Hike.