Conference Time
Local Time

Workshop Day

Thu, Sep 10
Timezone: Asia/Kolkata (IST)
10:00

Day 1

Fri, Sep 11
09:00
  • Anne-Marie Charrett

    Anne-Marie Charrett - 2020 Vision: Leadership

    09:00 - 09:45 AM IST | Online | 72 Interested

    It’s 2020 and a new decade awaits us. A hundred years ago, the world was marked by the Russian Revolution and World War I.

    In our century, we are also facing tremendous upheaval: the bushfires in Australia, the floods in Indonesia. We are seeing the impact of climate change before our eyes.

    Technology-wise, agile has become mainstream. Hardware is cheap, CI and CD are commonplace. Robotics and Machine Learning are a reality. This is the age we live in. This is the now.

    How do we begin to plan for such a reality? How do we test software in such an ecosystem? Should we even try? How do we maintain skills, when frameworks change as rapidly as nail varnish on a teenager?

    As leaders in software testing, these are the questions we need to be able to answer. We need leadership now more than ever. We can’t predict the future, but we can prepare ourselves to be able to deal with change, to develop strategies that facilitate rapid learning and rapid change.

    We can upskill our people to handle inconsistency and complexity, to know when technology is beneficial and when to rely on our ability to think critically.

    We must create environments where talent can grow and thrive, where learning (and failure) are embraced. Today is the day we become the test leaders of the future. Are you with me?

09:45

    Welcome Address & Conference Overview - 20 mins

10:05

    Coffee Break - 25 mins

10:30
11:30
  • Rajdeep Varma

    Rajdeep Varma - The Joy Of Green Builds - Running Tests Smartly

    11:30 AM - 12:15 PM IST | Online Meeting 1 | 31 Interested

    So you have got a few UI tests and they are running in parallel, great! However, life will not be so sweet once 'a few' turns into 'a lot'. We grew from a few to 1,500 UI tests (we are not particularly proud of this number, but there are situations and reasons).

    We started with a simple parallel distribution of tests 3 years ago. As the test count increased, the failure count and run time increased, along with the number of flaky tests. Mobile tests had their own challenges (e.g. devices dropping off, random Wi-Fi issues, etc.). To keep up with this, we created a queue-and-workers based solution which could distribute the tests more efficiently (https://github.com/badoo/parallel_cucumber). Over time, we made more improvements, in particular:

    • Segregating failures caused by infrastructure issues and re-queueing those tests
    • If a device/emulator malfunctions, rescuing its tests to another device
    • Repeating a single test on hundreds of workers in parallel to detect flakiness
    • Repeating a test if a known network issue occurs
    • Terminating the build early if more than a certain number of tests have failed
    • Health-checking each device before each test to ensure reliability
    • Muting a test if its failure is known, and highlighting outdated mutes once the related task is fixed

    In this talk, I will cover the initial challenges of running UI tests in parallel (Selenium and Appium), how we approached the queue-based solution and its continuous improvement, and finally how attendees can use it at their workplace or create their own solution based on our learnings.
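
    As a rough illustration of the queue-and-workers idea, here is a toy Python sketch. It is not the parallel_cucumber implementation referenced above (that is a Ruby tool); the test names, health check and runner command are placeholders.

        # Toy sketch of queue-based test distribution (illustrative only).
        import queue
        import subprocess
        import threading

        work_queue = queue.Queue()
        for test in ["login.feature", "search.feature", "checkout.feature"]:  # placeholder tests
            work_queue.put(test)

        def device_is_healthy(device_id):
            return True  # placeholder; a real check would ping the device/emulator before each test

        def worker(device_id):
            while True:
                try:
                    test = work_queue.get_nowait()
                except queue.Empty:
                    return
                if not device_is_healthy(device_id):
                    work_queue.put(test)  # rescue the test: hand it back for another worker
                    return
                result = subprocess.run(["echo", "run", test, "on", device_id])  # stand-in for the real runner
                if result.returncode != 0:
                    work_queue.put(test)  # naive re-queue; real logic would classify the failure and cap retries

        workers = [threading.Thread(target=worker, args=(f"device-{i}",)) for i in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()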

  • Harinee Muralinath

    Harinee Muralinath - Building Security into your Continuous Delivery pipelines

    11:30 AM - 12:15 PM IST | Online Meeting 2 | 25 Interested

    Functional testing has gone through a paradigm shift: from an afterthought in the waterfall model to being moved earlier and run as often as possible through the shift-left approach in Agile. We strive to get feedback early, often and continuously. Application security, however, has mostly remained an afterthought, waiting for 'vulnerability assessment' and 'penetration testing' towards the end of the application life cycle, just before release.

    Including security as a part of your DevOps process is a revolutionary shift, and it has three main aspects - culture, process, and tools.

    In this talk, we will focus on the ‘tools’ aspect, especially in a CI/CD pipeline, with a demonstration of setting up a build pipeline and the layers at which security testing should ideally be added. There are vast open communities that strive to bring security into everyone's hands and make the digital space safe. Thanks to them, and to the ever-growing need for defensive security, there are a lot of tools available which a developer/QA can put into their pipelines to get appropriate feedback. I will share my experience in analyzing the diverse sets of such tools, what they do, why they are important, and by what parameters you should measure the right tool for your project.

    The categories we will cover include SAST, DAST, dependency checking, container scanning, and secret scanning.
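
    Purely as an illustration of those categories, the Python sketch below wires one widely used open-source scanner per category into sequential pipeline stages. The specific tools, flags and target names are examples only (and assume the tools are installed); the talk may use a different toolset.

        # Hedged sketch: one scanner per category, run as pipeline stages via subprocess.
        import subprocess

        stages = {
            "SAST": ["bandit", "-r", "src/"],                                  # static analysis of Python sources
            "Dependency check": ["pip-audit"],                                 # known-vulnerable dependencies
            "Secret scanning": ["gitleaks", "detect", "--source", "."],
            "Container scanning": ["trivy", "image", "--exit-code", "1", "myapp:latest"],
            "DAST": ["zap-baseline.py", "-t", "https://staging.example.com"],  # hypothetical staging URL
        }

        for stage, command in stages.items():
            print(f"== {stage}: {' '.join(command)}")
            result = subprocess.run(command)
            if result.returncode != 0:
                raise SystemExit(f"{stage} reported findings; failing the build")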

  • Gaurav Singh

    Gaurav Singh - How to build an automation framework with Selenium: Patterns and practices

    11:30 AM - 12:15 PM IST | Online Meeting 3 | 75 Interested

    With an ever-increasing number of businesses being conducted on the web, the need to write automated tests for an app's UI is something that can never be ignored. As you all know, Selenium provides an API that enables us to do this quite effectively.

    However, when tasked with setting up an automation framework, a lot of questions arise in the minds of aspiring test developers, regardless of where they are in their career.


    Some of these questions are:

    1. How does one actually go about the business of building a robust and effective automation framework on top of Selenium?
    2. What are the elementary building blocks to include in the framework that an aspiring automation developer should know of?
    3. How should we model our tests? xUnit style vs BDD?
    4. Are there good practices, sensible design patterns and abstractions that we can follow in our code?
    5. What are some of the anti-patterns and common mistakes we should avoid?

    A lot of literature, documentation and blog posts already exist on these topics on the web.

    However, in this talk I will combine this existing knowledge with my years of experience building automation frameworks, break down these elements, and walk you through exactly the sort of decisions, considerations and practices you can apply when starting to implement or improve the UI automation for your team.

    Hope to see you there!
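
    As a taste of the kind of building block the talk refers to, here is a minimal Page Object sketch in Python with Selenium. The page URL and locators are hypothetical; the talk itself is about patterns rather than any specific page.

        # Minimal Page Object pattern sketch (hypothetical page and locators).
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        class LoginPage:
            USERNAME = (By.ID, "username")
            PASSWORD = (By.ID, "password")
            SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

            def __init__(self, driver):
                self.driver = driver
                self.wait = WebDriverWait(driver, 10)

            def open(self):
                self.driver.get("https://example.com/login")
                return self

            def login(self, username, password):
                # Tests call this intent-level method instead of touching locators directly.
                self.wait.until(EC.visibility_of_element_located(self.USERNAME)).send_keys(username)
                self.driver.find_element(*self.PASSWORD).send_keys(password)
                self.driver.find_element(*self.SUBMIT).click()
                return self

        driver = webdriver.Chrome()
        LoginPage(driver).open().login("demo", "secret")
        driver.quit()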

12:30
  • 12:30 - 01:15 PM IST | Online Meeting 1 | 43 Interested

    Browser tests are known to be the flakiest ones. This is partly because browser infrastructure is complicated to maintain. But the second reason is that mainstream browser automation tools such as Selenium server are far from being efficient.

    In my previous talks I spoke about Selenoid - a truly efficient replacement for the standard Selenium server. This year I would like to give a live demonstration of how to organize a fault-tolerant and easily scalable Selenium cluster using virtual machines in the cloud. I will start by setting up Selenoid and show its powerful features like video recording, live test debugging, manual testing and many more. Then I will configure Selenoid to send logs and recorded videos to S3-compatible storage. Finally, we will run a Ggr load balancer instance that uses all running Selenoid nodes and provides a single entry point to the cluster.
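
    For reference, pointing an existing Selenium test at a Selenoid (or Ggr) endpoint is mostly a matter of the remote URL plus a few vendor capabilities. A minimal Python sketch, with hostnames as placeholders and the "selenoid:options" values taken from the Selenoid documentation:

        # Minimal sketch: run an ordinary test against a Selenoid/Ggr cluster endpoint.
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        options = Options()
        options.set_capability("selenoid:options", {
            "enableVNC": True,    # live debugging of the running session
            "enableVideo": True,  # record the session video
        })

        driver = webdriver.Remote(
            command_executor="http://selenoid.example.com:4444/wd/hub",  # or the Ggr entry point
            options=options,
        )
        driver.get("https://example.com")
        driver.quit()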

  • Gopinath Jayakumar

    Gopinath Jayakumar / Babu Narayanan Manickam - Expanding boundaries of WebDriver with DevTools Integration

    12:30 - 01:15 PM IST | Online Meeting 2 | 56 Interested

    Problem Statement

    Though Selenium comfortably takes most of the stake in the UI test automation tool market, there have always been challenges that leave Selenium test automation engineers handicapped, especially when dealing with modern JS technologies. For example:

    • dealing with DOM elements to handle stale, still-loading or non-interactable elements,
    • capturing full-page screenshots to see how elements render at the left, bottom, etc.,
    • measuring the performance of request and response resources at different speeds,
    • monitoring the memory of pages, controls, etc.,
    • attaching to an existing browser for debugging failed scripts, and many more.

    These problems are largely resolved by integrating Selenium with the DevTools protocol, and that makes the Selenium engineer's life merrier than before.

    Why is this proposal different from others?

    1. Our solution can be executed independently with Chrome DevTools or together with Selenium. That gives the automation engineer the power to choose what to run and how to debug their tests.
    2. We used this solution for one of our largest enterprise customers and moved it to a public repository this week (for this conference and beyond). It has been reasonably tested with more than 10,000 test scripts and more than 1M test runs.
    3. The present solution packs the complete Chrome DevTools API in Java, so any Java Selenium automation engineer can bind it to their existing code base in minutes with no additional dependencies.
    4. Finally, we would love to present at our local home conference to start our Selenium conference campaign. Where else?

    Solution:

    The present proposal is largely connected with Chrome and Selenium in Java. However, there is no limitation on expanding the boundaries to other language bindings and browsers.

    Google Chrome is the most-picked browser for browsing, which makes it the primary focus for developers and testers. DevTools is one such boon for developers and testers, especially new-age test automation engineers. With that said, we built the following design pattern to let the Chrome DevTools API marry Selenium using the debugger address / remote targets.

    Selenium Devtools
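
    The proposal's own solution is in Java; purely as an illustration of the debugger-address / DevTools idea, the Python sketch below sends raw CDP commands through Selenium's Chrome bindings. The commented-out debuggerAddress line shows how to attach to an already running browser.

        # Illustrative only: raw Chrome DevTools commands alongside normal WebDriver calls.
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        options = Options()
        # Attach to a Chrome started with --remote-debugging-port=9222 instead of launching a new one:
        # options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")

        driver = webdriver.Chrome(options=options)
        driver.get("https://example.com")

        # Full-page layout metrics via the Page domain.
        print(driver.execute_cdp_cmd("Page.getLayoutMetrics", {}))

        # Throttle the network via the Network domain to measure behaviour at different speeds.
        driver.execute_cdp_cmd("Network.enable", {})
        driver.execute_cdp_cmd("Network.emulateNetworkConditions", {
            "offline": False,
            "latency": 200,                   # ms of added round-trip latency
            "downloadThroughput": 50 * 1024,  # bytes/sec
            "uploadThroughput": 20 * 1024,
        })

        driver.quit()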

  • Tarun Narula

    Tarun Narula / Sandeep Yadav - What to do when tests fail

    12:30 - 01:15 PM IST | Online Meeting 3 | 40 Interested

    What to do when your tests fail? Read on...

    Functional automation is known to be flaky. A test passes sometimes and fails at other times. The failure can be attributed to multiple factors. We need to find the root cause and then work towards fixing it to increase the reliability of automated tests.

    In this talk, we will not only be discussing causes that lead to a test failure, but we will also talk about prevention, early detection & fixing these failures for good.

    We will discuss some common test failure causes such as locator changes, browser compatibility issues, coding bloopers, etc.

    You will get to know how you can get alerted early about any test failures. We will be discussing topics such as running tests on under development builds for getting early feedback, triggering slack/SMS/email notifications with failure details for immediate redressal and many others.

    You will get to know how to prevent failures by building robust locators, exception handling, making use of APIs for test data setup, building atomic tests, making use of waits, retrying your failed tests, rebuilding your Jenkins jobs automatically based upon a failure percentage threshold & so on.

    At the end of this talk, you will be confident about how to deal with your failing tests!
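
    Two of the tactics above, explicit waits and retrying a flaky test, fit in a short Python sketch (the URL and locators are hypothetical):

        # Minimal sketch of two failure-prevention tactics: explicit waits and retries.
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        def run_checkout_test():
            driver = webdriver.Chrome()
            try:
                driver.get("https://example.com/checkout")
                # Explicit wait instead of sleep: proceed as soon as the element is clickable.
                button = WebDriverWait(driver, 10).until(
                    EC.element_to_be_clickable((By.CSS_SELECTOR, "button#place-order"))
                )
                button.click()
                WebDriverWait(driver, 10).until(
                    EC.visibility_of_element_located((By.CSS_SELECTOR, ".order-confirmation"))
                )
            finally:
                driver.quit()

        def run_with_retries(test, attempts=2):
            # Simple retry wrapper; real frameworks usually report retried runs separately.
            for attempt in range(1, attempts + 1):
                try:
                    test()
                    return
                except Exception:
                    if attempt == attempts:
                        raise

        run_with_retries(run_checkout_test)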

13:15

    Lunch Break - 60 mins

14:15
  • Tomasz Konieczny

    Tomasz Konieczny - Serverless - how to speed up tests over 300 times and achieve continuous feedback?

    02:15 - 03:00 PM IST | Online Meeting 1 | 64 Interested

    Automated tests can provide results faster, and it’s possible to execute them more frequently than manual ones. That allows us to test earlier in the development process, decrease the overall time needed for tests and, probably most importantly, release and deliver business value faster and more frequently.

    But what if we have more and more tests and even automated execution of them takes too much time - 10 minutes... 30 minutes... maybe even hours? Should we consider the ability to execute the full test set just a few times a day as something normal? Is adding more compute resources the only option to reduce the execution time? Or maybe there are too many high-level tests and some of them should be replaced by low-level ones according to the test pyramid? Is the test pyramid still valid in the cloud world?

    During the presentation you will see how the serverless cloud services like AWS Lambda may be used to run tests in the highly parallelized environment that can speed up execution even hundreds of times.
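
    Conceptually, the fan-out looks like the boto3 sketch below; the Lambda function name and payload format are hypothetical, and the talk's actual setup may differ.

        # Minimal sketch of fanning test cases out to AWS Lambda.
        import json
        import boto3

        lambda_client = boto3.client("lambda", region_name="eu-west-1")
        test_cases = ["test_login", "test_search", "test_checkout"]  # placeholder test identifiers

        # Asynchronous invocation: each test case becomes one Lambda execution running in parallel.
        for test_case in test_cases:
            lambda_client.invoke(
                FunctionName="run-single-test",   # hypothetical function wrapping the test runner
                InvocationType="Event",           # fire-and-forget; results would land in S3 or a queue
                Payload=json.dumps({"test": test_case}).encode("utf-8"),
            )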

  • Jesus Sanchez Martinez

    Jesus Sanchez Martinez - Test and monitor one website is not that hard, but what if you need to do it to over 40 websites?

    02:15 - 03:00 PM IST | Online Meeting 2 | 25 Interested

    The Onestic QA department made and maintained the test suites, which was a huge bottleneck in our development process.

    In order to solve it, management bought us an idea: our developers must be able to build their tests using a DSL framework without friction. QA maintains a big library that provides resources to developers, who are free to extend this library in each project. With all of this we have a CI process, a one-hour execution to monitor our results and, of course, our bot, SpongeBot.

    SpongeBot can check 40+ e-commerce sites, across 4+ environments, on desktop and mobile platforms, and is always available to developers.

    With this solution we managed to decentralize the work, add value by testing production, and build confidence that SpongeBot will notify us if something goes wrong.

  • David Burns

    David Burns - Selenium: Giblets and all

    02:15 - 03:00 PM IST | Online Meeting 3 | 30 Interested

    Selenium has done a pretty good job in keeping the API surface friendly and usable, but what actually happens when you call some of the commands? In this talk, David will walk you through what happens when you make a call in your test, how it gets to the browser, what happens in the browser, and how it returns all the way back to your test.

     

    Selenium is designed so that each of the commands works synchronously, so you know that a command has finished before it moves onto the next. This creates some interesting problems in browsers since they are mostly designed around asynchronicity.

     

    We will start with how each of the bindings communicates with the browser and then move on to how navigation works. David will show all the different aspects that we need to figure out to tell if a page is “loaded”. He will also show where it goes horribly wrong and how you can write code in your tests to stabilise around these “anomalies”.

     

    From there we will have a look at how clicks work, from making sure they are trusted to what happens if they cause a navigation.

15:15
  • Deepak Koul

    Deepak Koul - Is quality really everyone's responsibility - The Quality Accountability conundrum

    03:15 - 03:35 PM IST | Online Meeting 1 | 23 Interested

    "Quality is everyone's responsibility" has to be one of those phrases which looks great on t-shirts and posters but when put into action, often fails.
    There is a funny leadership story which goes like this " There was an important job to be done and Everybody was sure that Somebody would do it. Anybody could have done it, but Nobody did it"
    Therefore assuming that everyone in your team (developers, designers, QEs or analysts) would be instinctively inclined to incorporating quality in their day to day work is a terrible assumption to make.
    Quality is everyone's responsibility might be true but it cannot work in isolation, it has to be supplemented with an accountability framework.
    In this talk, I am going to present exactly that framework and help people especially leaders - technical and people, implement proactive quality processes in their teams.
    The information presented in this talk does not require audience to have any technical knowledge and applies to all the roles.

  • Shi Ling Tai

    Shi Ling Tai - Start with the scariest feature - how to prioritise what to test

    03:15 - 03:35 PM IST | Online Meeting 2 | 41 Interested

    It can be intimidating for inexperienced teams embarking on their test automation journey for an existing code base. There is so much to test, and so many ways to test. I often see teams stuck debating where to start, what tools to use, and best practices:

    "We should start from unit tests"

    "No, integration tests are better!"

    "Should we use tool A or tool B?"

    I see this play out all the time, and I've been there before. And the worst that could happen is decision paralysis and inaction.

    The bigger question really is "What to test?".

    My rule of thumb is "Start with the scariest code". I'll share with you my framework for evaluating the ROI of writing a test for a feature and prioritising what to test.

  • Sujasree Kurapati

    Sujasree Kurapati - Accessibility Testing Isn't Hard.

    03:15 - 03:35 PM IST | Online Meeting 3 | 37 Interested

    Accessibility testing is often ignored or given the lowest priority during the web or mobile development cycle. Sometimes a11y testing is not done until a lawsuit is triggered or a potential customer declines to procure your product because it’s not accessible. Ignoring a11y can cost an organization dearly, as it takes more time and effort to fix accessibility issues in existing websites post-release than to create a fully accessible site right from the start (revenue loss). Your customers might be quickly moving towards solutions that are accessible while you go back trying to fix accessibility issues (customer loss). Additionally, an inaccessible platform fails to reach almost one billion people (potential prospects) – 15% of the world’s population.

    In this 20-minute session we will discuss a11y APIs that assist Quality Assurance engineers with various methods to automate a11y testing against accessibility standards, the efficiency that automation tools can bring to making web and mobile properties accessible, and how easy it is to automate accessibility testing using open-source APIs (the axe-core engine, API and axe for Android). The attendees will also learn about the ROI that an organization could see by integrating accessibility testing into their QA Selenium framework.
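
    One way to get started is driving the axe-core engine from a Selenium test; the Python sketch below uses the axe-selenium-python package (API as documented in that package's README; verify against the current release).

        # Hedged sketch: run an axe-core audit from a Selenium test via axe-selenium-python.
        from selenium import webdriver
        from axe_selenium_python import Axe

        driver = webdriver.Chrome()
        driver.get("https://example.com")

        axe = Axe(driver)
        axe.inject()                 # injects the axe-core script into the page
        results = axe.run()          # runs the audit; returns violations, passes, incomplete
        axe.write_results(results, "a11y_results.json")
        driver.quit()

        assert len(results["violations"]) == 0, f"{len(results['violations'])} accessibility violations found"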

15:35

    Coffee Break - 25 mins

16:00
17:00
17:45

    Lightning Talks - 30 mins

Day 2

Sat, Sep 12
09:00
  • Narayan Raman

    Narayan Raman - A Tale of Two Automation Tools

    09:00 - 09:45 AM IST | Online | 63 Interested

    Selenium and Sahi started as open source tools around the same time and from the same company. Both tried to solve the problem of UI automation and both have been successful in their own ways. But the resemblance ends there. Selenium became WebDriver and one of the most popular free tools in testing. Sahi became the commercial Sahi Pro and is successfully used in various organizations to automate complex UIs spanning web, desktop and mobile applications. Right from who the user of the tool is, to whether the tool should be open source or commercial, there are various philosophical, technical and commercial junctures where Selenium and Sahi have diverged. Based on our journey of over 10 years, this talk reflects on those divergent decision points, the philosophy behind them and what their outcomes have been.

09:45

    Special Announcements - 20 mins

10:05

    Coffee Break - 25 mins

10:30
  • 10:30 - 11:15 AM IST | Online Meeting 1 | 46 Interested

    "All tests in today's automated regression run have been marked as Untested. What happened?"

    "No notifications are being sent for test runs on the channel"

    "I pulled latest code, and the framework dependency shows compilation error"

    "What does this new method in the framework do?"

    How often do you hear such things within your team?

    As Quality champions, we need to walk the talk. When we expect our developers to write quality code, write unit tests, build features without introducing bugs, the onus lies on us (as test engineers) to do the same. With almost every test engineering team writing automated tests to check functionality of their products and services, it becomes very important to ensure that the test automation framework and the test scripts are bug-free and follow good standards of software engineering.

    It cannot be stressed enough that test automation code should be as good as production code. In order to build a production-quality test automation framework and scripts, a number of steps can be taken at:

    1. Code & System Level

    2. Process & People Level

    Our test engineering team went through a transition from having random and unexpected failing test runs to having greater confidence in the quality of the tests. Learn from this case study of our journey to ensure that end-to-end UI automated tests are built with quality in mind. We will also see demonstrations of some of the use cases.

  • Shama Ugale

    Shama Ugale - Webdriver connector for Botium - Tool for testing Conversational UI

    10:30 - 11:15 AM IST | Online Meeting 2 | 36 Interested

    Last year was dominated by smart devices and voice-based home assistants. Unlike other applications, these interact through conversational interfaces, built using advanced algorithms ranging from natural language processing to AI/ML techniques. They are constantly learning by themselves, improving their interactions with the user and bringing up the challenge of non-deterministic output. For such interfaces natural language is the input, and we humans love having alternatives, love our synonyms, and express ourselves using emojis, GIFs and pictures. Testing in this context moves into clouds of probabilities.

    Unfortunately, Selenium cannot be used to automate such systems, and hence Botium was designed.
    In this session I will cover the Selenium driver for Botium to automate E2E tests on web UI and mobile, along with the testing strategy, testing NLP models and automating these tests into CI/CD build pipelines, using a Dialogflow-based 'Coffee-Shop bot' as the example in my demo.

  • Rabimba Karanjai

    Rabimba Karanjai - Testing Web Mixed Reality Applications: What you need to know for VR and AR

    10:30 - 11:15 AM IST | Online Meeting 3 | 20 Interested

    There were already over 200 million users consuming VR applications by 2018. And with Google and Mozilla pushing WebXR capabilities in the browser, and vendors like the BBC, Amnesty International, Universal, Disney and Lenskart adopting them on their websites, we will soon see a huge rise in demand for WebVR and Mixed Reality applications.

    But how do you test them at scale? How do you define "smooth" as opposed to just responsive?

    In this talk I will go over some key details of the WebXR specification, the work that Mozilla, Google and the W3C Immersive Web Group are doing, the differences between testing a regular web page and a Mixed Reality enabled one, and what to watch for and how you can automate it.

11:30
  • 11:30 AM - 12:15 PM IST | Online Meeting 1 | 34 Interested

    In this era of digital transformation, clients have been demanding shorter and quicker releases. Shorter and quicker releases mean your team must not only be able to develop them at the required pace but also test and release them at a sustainable pace. The user interface plays an important role in the client's business, and there are organizations that regularly release new features and fancy CSS that must support multiple browsers, multiple operating systems and mobile devices. Verifying the frontend across this browser/device/OS matrix by humans is not only extremely time consuming but also prone to human error. In fact, testing by humans should primarily focus on discovery, leaving the repetitive and error-prone tasks to tools. Hence, automating visual tests is becoming less of an optional activity and more of a must-have activity within the team. Ensuring visually perfect user experiences is just as important as having the functionality work.
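
    As a baseline illustration of what a visual check does, the Python sketch below compares a fresh screenshot against an approved baseline using Pillow. Dedicated visual testing tools add smarter diffing, baseline management and review workflows on top of this idea; file names and the URL are placeholders.

        # Minimal pixel-diff visual check (illustrative only).
        from selenium import webdriver
        from PIL import Image, ImageChops

        driver = webdriver.Chrome()
        driver.set_window_size(1280, 800)          # fixed viewport so screenshots are comparable
        driver.get("https://example.com")
        driver.save_screenshot("current.png")
        driver.quit()

        baseline = Image.open("baseline.png").convert("RGB")   # approved screenshot from an earlier run
        current = Image.open("current.png").convert("RGB")

        diff = ImageChops.difference(baseline, current)
        bbox = diff.getbbox()                       # None means the images are pixel-identical
        if bbox is not None:
            diff.save("diff.png")
            raise AssertionError(f"Visual difference detected in region {bbox}")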

  • Anuradha Konduri

    Anuradha Konduri / Keertika Gangwar - How we are reducing Test Failure Analysis time using Machine Learning at Expedia

    11:30 AM - 12:15 PM IST | Online Meeting 2 | 43 Interested

    How many of us have spent hours checking automation run reports just to determine whether a failure was an actual bug, an environment-specific issue, or an automation issue? No matter how robust we make our UI automation frameworks, we always encounter automation/environment-specific failures, which increase the time spent analysing those failures and spotting actual defects. At some point, all of us have had a bad day with so many flaky tests that we lose confidence in the reliability of UI automation results and tend to ignore them over time. We propose a solution to overcome this problem using the most trending technology of the last decade - Machine Learning. The fact that we run around 130k test runs a day, with around 2.3 million test records saved in MongoDB every month, motivated us to look into Machine Learning as an approach to this problem.

    What if we could use ML algorithms to find patterns in the day-to-day UI automation error messages we see, to tell us whether a failure is an actual bug or not! All of us use various Selenium-based test automation frameworks like Cucumber, TestNG, ScalaTest, Nightwatch.js etc., which have their own libraries to report test validation/automation failures and hence vary quite a lot in format. It is also difficult to find a common pattern in user-defined error messages. A typical error message from UI automation contains the message, stack traces and other error data dumped from Selenium. We could argue that we can take only the message part as input to predict the outcome for us. However, we have seen many instances where these messages are not very self-explanatory and we have to look into the trace/error details to determine the root cause. Considering the whole error message is not as easy as it sounds.

    Hence, the problem we have at hand is unstructured text data. Our approach includes these steps: collect training data (i.e. pre-classified errors), clean the data to be fed to the model, identify a simple yet powerful classifier to work with (SVM, Random Forest, Naive Bayes, etc.), and tune the model, identifying the right metrics to help us calculate the reliability of the resulting predictions.

    This can also be extended to other error messages like Javascript Error Messages, Splunk or Trace logs as well.
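
    A toy version of such a classifier fits in a few lines of scikit-learn; the messages and labels below are invented, and the pipeline described in the talk may use different algorithms and features.

        # Minimal sketch of an error-message classifier (illustrative data and labels).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Pre-classified error messages: actual bug vs automation vs environment issue.
        messages = [
            "ElementNotInteractableException: element not interactable",
            "AssertionError: expected total 100 but found 90",
            "WebDriverException: chrome not reachable",
            "AssertionError: page title mismatch after checkout",
        ]
        labels = ["automation", "bug", "environment", "bug"]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(messages, labels)

        print(model.predict([
            "TimeoutException: waiting for element #cart",
            "AssertionError: discount not applied",
        ]))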

  • Rajni Singh

    Rajni Singh - End to end testing strategies for intelligently connected hybrid world of IoT

    11:30 AM - 12:15 PM IST | Online Meeting 3 | 22 Interested

    The intelligent mesh of devices is nothing but millions of users, use cases, apps, and devices connected together to support applications built on the Internet of Things. Around the world there are thousands of use cases, millions of apps, billions of users and trillions of things; if you think about QA and testing for these interconnected intelligent devices, the scope is very wide, as verification and validation apply at every interface and keep growing as the system grows.
    I will talk about the challenges faced during IoT application testing and how it can go wrong, how to thoroughly test all the areas given these challenges and how an IoT test lab is set up, and what the solutions are to overcome the challenges. Some important aspects are continuous integration of the hybrid environment, testing with multiple devices and millions of use cases, improving the existing conventional methods with intelligent automation, and, most importantly, scalability and security.

    Although testing emerging technology and applications is always exciting, seeing their own strategies and tools fail in IoT testing can be frustrating even for well-seasoned testers. I aim to give testers a better understanding of connected systems like smart cities and the connected enterprise, and help them apply their critical thinking to deal with uncertainty in their test objects.

12:30
  • Smita Mishra

    Smita Mishra - Careers in Testing – Identify your SuperPower

    12:30 - 01:15 PM IST | Online Meeting 1 | 29 Interested

    Most testers follow the career path their organisation offers them. Sometimes this path seems dated, especially when one compares it to other organizations which seem to take testing more seriously or are at least more advanced in their approach to people and technology.

    What could the possible options be beyond a certain point as a tester? I will put certain questions to the testers: What do you enjoy doing most? What comes naturally to you? What does the competition look like? What could you potentially learn? What are the alternate or extended skills you enjoy - data science, engineering, or something else? What are your superpowers? What could be the path of least resistance to make a shift? I will share leading examples of some "popularly successful" careers.

    In the second half of the talk, I will engage the audience to understand the designations and potential career paths that interest them. We will discuss specific ambitions and goals, and I will share what has worked for others, what could work for them, and how to go about it.

  • 12:30 - 01:15 PM IST | Online Meeting 2 | 43 Interested

    The adoption of Artificial Intelligence is gaining more traction, and QA capabilities need to be enhanced to keep up with these skills. Machine Learning is used extensively in retail applications to solve complex problems, one of them being search relevancy. Showing appropriate results to the user is important for the conversion rate to go up. As Machine Learning poses different challenges for QA, such as the test oracle problem, fairness, correctness and robustness, we may need to follow different approaches and testing techniques to do QA for Machine Learning models.

     

    Different machine learning types, such as supervised and unsupervised models, have different characteristics and are used for different types of problems. Though they solve different complex problems, a machine learning model is also a unit of software code that needs to be verified like a normal software system. Seen as a whole system, it may look complex and unsolvable, but we can group it into small modules and verify each for quality. Black-box and white-box testing techniques can be applied to verify the functionality. Data, feature engineering and algorithms are the major parts of a machine learning model, and we will see how we applied different techniques to validate these.

     

    This talk is focused on viewing the machine learning software as a whole and performing quality analysis on it. We look at how testing a machine learning model differs from typical software testing. We will discuss the challenges that came up and the process involved in building an ML model, taking search relevance as an example for explanation. We will dive into the areas where quality is assessed; the significant factors considered here are measuring accuracy and efficiency. We will look into different black-box testing techniques for different algorithms, see how traditional testing differs from testing machine learning applications, and go through the black-box techniques with examples, followed by a live demo.
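
    One concrete flavour of black-box testing for a relevance model is a metamorphic check, sketched below in Python. Here rank_products is a hypothetical stand-in for the model under test, and the chosen relation (word-order invariance of the top results) is just an example of the kind of property such tests assert.

        # Metamorphic black-box check: re-ordering query words should not change the top results.
        def rank_products(query):
            # Placeholder ranking function; a real test would call the deployed relevance model.
            catalogue = {"red shoes": ["shoe-1", "shoe-2", "boot-9"],
                         "shoes red": ["shoe-1", "shoe-2", "boot-9"]}
            return catalogue.get(query, [])

        def test_query_order_invariance():
            original = rank_products("red shoes")[:3]
            shuffled = rank_products("shoes red")[:3]
            # Metamorphic relation: word order should not change the top-3 set of products.
            assert set(original) == set(shuffled), f"Top results diverged: {original} vs {shuffled}"

        test_query_order_invariance()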

  • 12:30 - 01:15 PM IST | Online Meeting 3 | 20 Interested

    The world of a software house is a constant search for compromise between quality and costs. In many cases, the cost-cutting starts with test automation. Then you start to talk about ROI, but recognize that the numbers are not on your side. We were there, and what we found out is that only a complete change in our approach allowed us to find common ground with our clients. I will reveal one detail from the presentation - we do not talk about test automation with clients anymore, and as a result we do it more and more.

    Are you surprised that success automatically generates new challenges, which we in turn translate into opportunities? We had to reconsider our approach to the test automation environment, internal frameworks and the way we share them between projects, including code ownership, … And again, one simple but unobvious solution allowed us both to deliver what we promise and to earn more on our projects.

    As we have been reshaping our approach to test automation, we had to change the way of delivery too. One of the main decisions was to skip the role of test automation engineer (or software developer in test). We decided to go with a whole-team approach, which is consistent with the way we sell it.

    Find it interesting? Join me and listen to our story about how we have transformed test automation.

13:15

    Lunch Break - 45 mins

14:00
  • Varuna Srivastava

    Varuna Srivastava / Wim Selles - Build a responsive typescript wdio framework

    02:00 - 03:30 PM IST | Online Meeting 1 | 29 Interested

    Participate in this workshop to learn how to put together WebdriverIO (wdio) and TypeScript in a Mocha framework that is scalable, robust and easy to read. We will be sharing our real-world experience of how we migrated our testing approach, design and framework when our application was migrating from JavaScript to a TypeScript architecture.

    You will leave with your very own example automation framework that demonstrates advanced principles of wdio using TypeScript automation design. We will integrate with Allure reporting.

    Reference:

    https://github.com/varunatester/sel-workshop

    Key takeaways:

    1. A robust and scalable framework using advanced principles for UI testing.
    2. A selection of design patterns for designing the framework.
    3. Concepts in designing your UI automation, such as modeling data within your application and componentizing page objects.
    4. A framework which is responsive for web applications.
  • 02:00 - 03:30 PM IST | Online Meeting 2 | 48 Interested

    With the seismic shift in the industry and new technologies emerging, QA testing approaches are also changing, and we must know the right strategies and algorithms to test them. One of the latest emerging technologies is Artificial Intelligence and Machine Learning, and its applications, like self-driving cars and virtual assistants, are everywhere. They have a great impact on our lives, and many of our decisions, behaviours and destinations depend on them.

    So in this presentation/workshop I would like to present the ways, strategies and challenges faced while testing AI/ML applications. Join me in creating a Machine Learning application from scratch and then taking it to the testing stage, creating edge-case scenarios and validations.

    Time management: To make sure that all participants are up to date with the setup for the hands-on part, I will share this document with them 12 days in advance in a temporary Slack channel, where they can share their progress and ask questions so we can resolve them quickly.
    *No internet is required for participants if they follow the setup doc.

  • 02:00 - 03:30 PM IST | Online Meeting 3 | 30 Interested

    Copying files into a time-stamped directory, or renaming the files themselves, is often used to keep older versions. This approach is very common because it looks so simple at first, but it is actually incredibly error prone.
    In today’s world of CI/CD pipelines, where new versions of our production and test code have to be checked out at speed, it is unthinkable to have such a version control system in place. So what are you waiting for? Let’s go and get a proper version control system in place for your CI/CD pipelines. But it is not only the selection and usage of a version control system that makes you knowledgeable; learning by making frugal experiments, failing, pairing and explaining it to others is what makes you a brave explorer and brings you to success.

    Starting to use the version control system Git can be daunting and frustrating. But knowing how Git works and how to use it to your advantage in your unique context is becoming more and more essential, if not crucial. Especially if you want to work with CI/CD and collaboratively take care of the production and test automation codebase in your team, you have to know Git.
    It is a long way to the top if you want to Rock ‘n’ Roll with Git and this workshop is the perfect start for it. In 90 minutes you will learn how to use the basic commands on the command line.

    Whether you have just started working with Git or want to refresh your existing basic knowledge or you are a novice and eager to learn it, this power workshop is for you. Come in and explore what Git feels like.

15:30

    Coffee Break - 30 mins

16:00
17:45

    Closing Talk - 15 mins
