Are you measuring your QA team's efficiency correctly?
'Quality with speed' is something everyone aspires to. But for QA teams, how do you define quality, and how do you measure speed? Every QA team aims to ship high-quality releases as quickly as it can, but what does success look like for them?
In this session, we will look at easily actionable ways to measure the effectiveness of testing teams across quality, efficiency, and impact. We will cover key testing metrics and performance indicators around defects, test cases, and test economics, and see how these numbers line up with company objectives and the softer aspects of organizational dynamics.
Outline/Structure of the Talk
- Challenges around delivering quality with speed
- Measuring success quantitatively
- Key metrics you should consider (a small illustration follows this list)
- Aligning metrics with other teams and larger goals
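To make the quantitative angle concrete, here is a minimal sketch, not taken from the talk itself, of how a team might compute two commonly cited indicators: defect escape rate and cost per test case. The function names, inputs, and sample figures are illustrative assumptions.

```python
# Illustrative sketch: two common QA efficiency metrics.
# Inputs and sample figures are assumptions, not from the talk.

def defect_escape_rate(found_in_production: int, found_before_release: int) -> float:
    """Share of defects that slipped past testing into production."""
    total = found_in_production + found_before_release
    return found_in_production / total if total else 0.0

def cost_per_test_case(total_testing_cost: float, test_cases_executed: int) -> float:
    """A simple test-economics figure: total spend divided by executed cases."""
    return total_testing_cost / test_cases_executed if test_cases_executed else 0.0

# Example with made-up numbers:
print(f"Escape rate: {defect_escape_rate(4, 96):.1%}")             # 4.0%
print(f"Cost per case: ${cost_per_test_case(12_000, 3_000):.2f}")  # $4.00
```

Ratios like these are useful precisely because they can be tracked release over release and compared against team-level goals.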
Learning Outcome
The goal of this session is to give QA teams actionable insights into measuring their efficiency quantitatively.
Target Audience
QA Managers/Team Leads
Submitted 2 years ago
People who liked this proposal, also liked:
Chitvan Singh - From hours to minutes: How to write test cases for a faster regression suite?
45 Mins
Talk
Intermediate
Teams that release multiple times a day cannot afford a regression suite that runs into hours. Most such teams run their entire suites in under 15 minutes. While infrastructure and configuration help bring this number down, the largest gains come from writing test cases suited to a setup that runs this fast.
In this session, I’ll talk about:
- How to identify test cases that need refactoring
- How to write atomic test cases
- How to use Selenium locators smartly (see the first sketch after this list)
- Using product APIs to switch between user states (see the second sketch after this list)
- Using the right Selenium waits
- Parallel execution and queuing
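The locator and wait points lend themselves to a short illustration. Below is a minimal Selenium (Python) sketch contrasting a brittle pattern with a more robust one: an explicit wait on a stable, test-specific attribute instead of a fixed sleep on a layout-dependent XPath. The URL and the data-testid attribute are assumptions for the example.

```python
# Sketch of "right waits" and "smart locators" in Selenium (Python).
# The URL and data-testid attribute are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Brittle: a fixed sleep plus a layout-dependent XPath.
# time.sleep(10)
# driver.find_element(By.XPATH, "/html/body/div[2]/div/form/button")

# More robust: wait only as long as needed, on a stable attribute
# the product team controls, rather than on page structure.
submit = WebDriverWait(driver, timeout=10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='login-submit']"))
)
submit.click()
driver.quit()
```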
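The idea of using product APIs to switch between user states can be sketched the same way: rather than clicking through login in the UI before every test, the test seeds its state through a backend call and injects the resulting session cookie. The endpoint, response field, and cookie name below are hypothetical.

```python
# Sketch: seeding user state via a (hypothetical) product API instead of the UI,
# so each test stays atomic and fast. Endpoint and cookie name are assumptions.
import requests
from selenium import webdriver

def login_via_api(base_url: str, email: str, password: str) -> str:
    """Authenticate through the backend and return the session token."""
    resp = requests.post(f"{base_url}/api/login",
                         json={"email": email, "password": password},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["session_token"]  # hypothetical response field

driver = webdriver.Chrome()
base_url = "https://staging.example.com"
token = login_via_api(base_url, "qa@example.com", "secret")

# Selenium can only set cookies for the domain it is currently on,
# so visit the site first, then inject the session cookie.
driver.get(base_url)
driver.add_cookie({"name": "session_token", "value": token})
driver.get(f"{base_url}/dashboard")  # test starts already logged in
```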
Praveen Umanath - State-of-the-art test setups: How do the best of the best test?
20 Mins
Talk
Intermediate
The best engineering teams release code hundreds of times a day. This is supported by a test setup that is not just fast but also robust and accurate.
We look at anonymized data from millions of tests running on BrowserStack to identify the very best test setups. We also analyze the testing behavior of these companies: how they test, how frequently they test, and how many device-browser-OS combinations they test on. Do they gain speed by running more parallels or by keeping their test setups lean?
Finally, we see how these practices help teams test faster and release continuously, and how they tie in to the larger engineering strategy (a rough sketch of the parallels trade-off follows).
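The parallels-versus-lean-setup question reduces, to a first approximation, to simple arithmetic; the rough sketch below makes it concrete. All figures are made-up assumptions, not BrowserStack data.

```python
# Back-of-the-envelope wall-clock estimate for a suite run.
# All figures are illustrative assumptions, not BrowserStack data.
import math

def suite_wall_clock(num_tests: int, avg_test_seconds: float, parallels: int) -> float:
    """Ideal wall-clock minutes, ignoring queuing and setup overhead."""
    batches = math.ceil(num_tests / parallels)
    return batches * avg_test_seconds / 60

for parallels in (1, 5, 25):
    print(f"{parallels:>2} parallels -> {suite_wall_clock(500, 30, parallels):.0f} min")
# 1 parallel -> 250 min; 5 -> 50 min; 25 -> 10 min
```

In practice, queuing, session startup, and flaky retries eat into these ideal numbers, which is why the real-world data matters.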