Online · Jul 30th, 11:00 - 11:45 AM IST · Zoom · 86 Interested

Application testing is one of the biggest barriers to achieving truly continuous deployment because it is use-case specific. Developers often avoid writing test cases because doing so is time-consuming, the tests need to be maintained for every change, and metrics like coverage don't necessarily guarantee quality. In this session, we'll talk about how test cases can be captured from traffic data, how infrastructure can be mocked automatically, and how application writes can be safely replayed.

 
Keploy is an open-source no-code API Testing Platform that generates test cases and data mocks from API calls.

We'll walk through examples of how Keploy works alongside existing testing frameworks, capturing test cases and mocking infrastructure quickly without anyone having to hand-write unit or API test cases. We'll also cover how these test cases evolve as the application grows. The core contributors to Keploy will provide an overview of its features and capabilities, and of how it is used at scale across microservices written in various programming languages.
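To make the record-and-replay idea concrete, here is a minimal, self-contained Java sketch of the general technique described above: an API call captured from traffic is stored as a test case and later replayed against the running application, with the live response compared to the recorded one. This is an illustration of the concept only, not Keploy's actual API; the CapturedCase type, the endpoint, and the field values are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Conceptual sketch of traffic-based test generation (not Keploy's actual API):
// a captured API call becomes a test case that is replayed later, and the live
// response is checked against the recorded "golden" response.
public class RecordReplaySketch {

    // A captured test case: the request that was observed plus the response it produced.
    record CapturedCase(String method, String url, Map<String, String> headers,
                        String requestBody, int expectedStatus, String expectedBody) {}

    // Replay the captured request against a locally running instance of the
    // application and compare the live response with the recorded one.
    static boolean replay(CapturedCase tc) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(tc.url()));
        tc.headers().forEach(builder::header);
        HttpRequest request = switch (tc.method()) {
            case "POST" -> builder.POST(HttpRequest.BodyPublishers.ofString(tc.requestBody())).build();
            default -> builder.GET().build();
        };
        HttpResponse<String> live = client.send(request, HttpResponse.BodyHandlers.ofString());
        return live.statusCode() == tc.expectedStatus()
                && live.body().equals(tc.expectedBody());
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; in practice the case would have been captured from real traffic.
        CapturedCase tc = new CapturedCase(
                "GET", "http://localhost:8080/api/orders/42", Map.of("Accept", "application/json"),
                null, 200, "{\"id\":42,\"status\":\"SHIPPED\"}");
        System.out.println(replay(tc) ? "PASS" : "FAIL");
    }
}
```

In practice, a traffic-capture tool would also record the outbound calls the application makes (databases, downstream services) so that they can be served back as mocks during replay, which is what removes the need for a live test environment.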
 
 

Outline/Structure of the Demonstration

  • Shift-Left Impact of Testing - 3 mins
  • Cost and Effort spent on different testing strategies - 3 mins 
    • Manual 
    • Automated
    • Record-Replay
    • Keploy
  • Problems with Data Mocks - 2 mins
  • Problems with Testing Infra - 2 mins
  • How to Record API calls as Test Cases - 15 mins (with Demo)
  • How to auto-record data mocks and other infra calls - 5 mins 
  • How to Replay Test Suites with the application locally - 3 mins
  • Noise filtering of test cases - 3 mins (see the sketch after this outline)
  • Integration with native unit-test libraries - 2 mins
  • Tracking Unit Test Coverage - 1 min
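To give a flavour of the noise-filtering step in the outline above, the following Java sketch shows the general idea: fields that change on every run (timestamps, request IDs) are excluded before a replayed response is compared with the recorded one. The field names and the hard-coded noise list are illustrative assumptions, not Keploy's implementation.

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch of noise filtering: when a replayed response is compared
// with the recorded one, fields that change on every run are ignored so that
// they do not cause false test failures.
public class NoiseFilterSketch {

    // Fields treated as "noise". A real tool could detect these automatically,
    // for example by replaying the same request twice and diffing the results.
    static final Set<String> NOISY_FIELDS = Set.of("timestamp", "requestId", "date");

    static Map<String, String> denoise(Map<String, String> response) {
        return response.entrySet().stream()
                .filter(e -> !NOISY_FIELDS.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    static boolean matches(Map<String, String> recorded, Map<String, String> replayed) {
        return denoise(recorded).equals(denoise(replayed));
    }

    public static void main(String[] args) {
        Map<String, String> recorded = Map.of("status", "ok", "timestamp", "2022-07-01T10:00:00Z");
        Map<String, String> replayed = Map.of("status", "ok", "timestamp", "2022-07-30T11:00:00Z");
        System.out.println(matches(recorded, replayed)); // true: the timestamp is ignored
    }
}
```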

Learning Outcome

  • Record-Replay end-to-end API Test Cases
  • Generating data mocks from API calls
  • Unifying test coverage of unit tests and API tests and increasing code coverage with the test-suite
  • Black-box testing of microservices
  • API Chaining Tests 

Target Audience

QAs/SDETs who write automation test scripts for end-to-end API testing, and developers who write unit test cases. The session is also relevant if you write data mocks, set up test environments by taking data dumps or snapshots, or use tools like Selenium, REST Assured, Postman, etc. for API testing.

Prerequisites for Attendees

Basic understanding of writing API or unit test cases and data mocks.

Submitted 10 months ago

  • Puja Chiman Jagani

    Puja Chiman Jagani - Selenium has a new trick up its sleeve to track failures

    Puja Chiman Jagani
    Team Lead
    Browserstack
    10 months ago
    Sold Out!
    45 Mins
    Talk
    Beginner

    As our systems and tests grow more and more complex, we need to make sure that we have the tools to capture the root causes without spending hours or days chasing them down. This is where observability becomes our best friend. Observability allows us to see what is going on inside a system, based on what we think is crucial, without trawling through logs! Just like any piece of software should be robust, scalable, maintainable, and reliable, it should also be observable. Observability makes the journey from identifying unexpected problems to identifying the root cause easier.

    To do so, the code should record as much useful granular information as possible. Metrics, logs, and traces are three known ways of encapsulating granular information. They are the primary sources of information to help determine the state of the system at any given point in time. 

    Selenium 4 introduced a fully distributed Grid with multiple components that communicate over the network. Troubleshooting and diagnosing problems in this setup is a challenge. To tackle this, Selenium integrated OpenTelemetry's tracing and event logs. This feature is now available out of the box when using Selenium.
    The users now have more power in their hands!
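    For a concrete sense of what tracing and event logs capture, here is a minimal sketch using the public OpenTelemetry Java API. It is generic and illustrative, not Selenium's internal instrumentation; the tracer name, span name, and attribute keys are invented for the example, and an OpenTelemetry SDK with an exporter must be configured separately for the spans to be exported anywhere.

    ```java
    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.common.AttributeKey;
    import io.opentelemetry.api.common.Attributes;
    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.StatusCode;
    import io.opentelemetry.api.trace.Tracer;
    import io.opentelemetry.context.Scope;

    // Generic OpenTelemetry tracing sketch (illustrative; not Selenium's internal code).
    // Without an SDK configured, GlobalOpenTelemetry returns a no-op implementation,
    // so spans only become visible once an exporter is wired up.
    public class TracingSketch {

        private static final Tracer TRACER =
                GlobalOpenTelemetry.getTracer("example.session.tracing"); // name is arbitrary

        static void createSession(String browser) {
            Span span = TRACER.spanBuilder("create_session").startSpan();
            try (Scope ignored = span.makeCurrent()) {
                span.setAttribute("browser.name", browser); // attributes: searchable metadata
                // Events mark notable moments inside the span, with their own attributes.
                span.addEvent("capabilities_matched",
                        Attributes.of(AttributeKey.stringKey("node.id"), "node-1"));
                // ... the actual work would happen here ...
            } catch (RuntimeException e) {
                span.setStatus(StatusCode.ERROR, e.getMessage()); // failures show up in the trace
                span.recordException(e);
                throw e;
            } finally {
                span.end();
            }
        }

        public static void main(String[] args) {
            createSession("chrome");
        }
    }
    ```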

    I will dive into Selenium's observability journey by discussing: 

    1. What is observability?

    2. Need for observability

    3. Understanding the three pillars of observability: Metrics, Logging, and Tracing

    4. Why generating telemetry data alone does not suffice: it is a process from design to deployment.

    5. Full-stack tracing in Selenium (Grid and Java client library)

    6. How we at BrowserStack benefit from this information and expose it to our users.

  • Amuthan Sakthivel

    Amuthan Sakthivel - CLEAN TEST DESIGN PRACTICES FOR EFFECTIVE SELENIUM AUTOMATION FRAMEWORK

    Amuthan Sakthivel
    SDET
    Clipboard Health
    10 months ago
    Sold Out!
    45 Mins
    Demonstration
    Intermediate

    Selenium is an amazing library for UI automation. However, using it in a project requires a proper test design, a good approach, and the right framework. I have listed some of the challenges that most people face during automation and will briefly explain how we can solve those problems with effective design and approach.

    Key Challenges :

    1. Field-level validations on a form containing several fields. (Many people ignore these tests in automation as they may increase the number of lines of code and the number of tests, and maintaining them is a difficult task.)

    2. Verifying the state of a web element before operating on it. (It is imperative to check whether a web element is present, visible, clickable, or needs a scroll before operating on it. Also, different elements need different explicit wait times. We most probably end up with a number of methods like waitForElementToBeClickable and waitForElementToBeVisible, which again results in increased lines of code.)

    3. Assertion of multiple components on a page. (Sometimes we want to validate several items on a page, and writing methods like getTitle, isCompanyLogoPresent, and isFooterMenuPresent results either in multiple tests or in poor test code that spoils readability.)

    4. CI/CD integration. (Most companies use Jenkins as their CI/CD tool to schedule tests, and it is usually maintained by the DevOps team. Setting up a Jenkins job needs a lot of permissions, and at worst we need a dedicated machine/infra to run and schedule our tests.)


    How can we solve these commonly occurring problems?

    1. With the advent of functional programming, we can pass different behaviours into test methods. In the demo, I will use BiPredicate interface implementations to solve this problem with a clean design (see the sketch after the tech stack below).

    2. Annotations in Java are very powerful but hardly used in test automation frameworks. I will use reflection and annotations to solve this problem with a much cleaner design.

    3. We can leverage custom validator classes and AssertJ to write effective, readable tests.

    4. We can leverage GitHub Actions and GitHub-hosted runners to set up Selenium Grid infrastructure and run our tests without any additional infra.

    Tech Stack: Java, Functional Interfaces, Selenium, AssertJ, GitHub Actions
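    As a small illustration of point 1 above, here is a self-contained Java sketch of passing validation behaviour into a single generic check with BiPredicate, instead of writing one test method per field. The field names and rules are assumptions made for the example, not the speaker's actual framework code.

    ```java
    import java.util.Map;
    import java.util.function.BiPredicate;

    // Sketch: field-level validation rules passed as BiPredicate values, so new
    // fields become data entries rather than new test methods.
    public class FieldValidationSketch {

        // Each rule is a BiPredicate over (fieldName, enteredValue).
        static final Map<String, BiPredicate<String, String>> RULES = Map.of(
                "email", (field, value) -> value.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$"),
                "phone", (field, value) -> value.matches("\\d{10}"),
                "name",  (field, value) -> !value.isBlank() && value.length() <= 50
        );

        static boolean validate(String field, String value) {
            return RULES.getOrDefault(field, (f, v) -> true).test(field, value);
        }

        public static void main(String[] args) {
            System.out.println(validate("email", "user@example.com")); // true
            System.out.println(validate("phone", "12345"));            // false
        }
    }
    ```

    Because each rule is just a value, a single parameterized test can iterate over the rule map, which keeps both the test count and the lines of code down.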

  • Gaurav Mahajan

    Gaurav Mahajan - A Dossier - Shift left, Move Fast

    Gaurav Mahajan
    Technical Director
    Globant
    10 months ago
    Sold Out!
    45 Mins
    Case Study
    Intermediate

    For decades, one of the key goals for organizations has been to reduce time to market, decrease the time taken by value streams, and ensure quick handovers in assembly lines.

    When we speak of the overall assembly line, or the road to production in a software development context, having minimal yet strong checks and balances is a desired milestone. This has been articulated as implementing the test pyramid, reducing dependency on mundane, manual, or long lists of integration tests (even automated ones), and shifting left.

    This conversation will focus on one such implementation example, where I will elaborate on:

    • A proposed approach to shift left and an implementation of the test pyramid model for automated tests. This will involve an "identify, prioritize and automate" approach, with a core emphasis on API tests and on how we can decongest regression/integration tests.
    • Identifying a good starting point for teams (depending on their maturity) and how they can navigate and ensure progress.
    • How the test environment evolves to reach this stage.