Testing Algorithms, LLC.

How should the quality of test execution be measured?

4/23/2016

As a test manager, I always wanted to assess at the end of a testing project whether the quality of testing was good enough to ensure the quality of the product. The software testing literature suggests hundreds of metrics that managers can use. However, not all of them are intended to measure the performance of the testing team.

Various test metrics can be divided into three primary categories: Product metrics (intended for Application Managers), Project metrics (intended for Project Managers) and Process metrics (intended for Test Managers). Here, our primary interest is Process metrics, which specifically measure the quality of testing, or in other words, the performance of the testers. Let's see which metrics serve this purpose.

Is it the total number of defects found?

No. This is neither a Product, a Project nor a Process metric, because it doesn't tell you a story. Let's take an example where the testers found 200 defects. We can't infer anything from this number: we don't know whether the testers did a good job or a bad job, because we don't know how many defects are still unidentified.

Is it Test Execution Productivity?

This Process metric does measure the testers' performance, but in terms of speed, not quality. So, if a tester executes test cases at lightning speed but with errors, it doesn't serve the purpose of testing in the first place.

Then what are the metrics that assess the quality of test execution? Well, in order to assess this, we should answer the following two questions:

(A) Did the testers identify all the valid defects?
(B) Did the testers spend too much time and effort on invalid defects?

And, these two questions can be answered very easily with the following two metrics:

(A) Defect Leakage (to the next upper environment) = Total number of defects identified in next upper environment / Total number of defects identified (in lower + next upper environments)
(B) Defect Rejection Ratio = Total number of defects rejected / Total number of defects (valid + rejected)
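
For illustration, here is a minimal Python sketch of how these two metrics could be computed. The function names and sample numbers are my own, not part of any standard:

def defect_leakage(defects_next_upper: int, defects_lower: int) -> float:
    """Fraction of all identified defects that escaped to the next upper environment."""
    total = defects_lower + defects_next_upper
    return defects_next_upper / total if total else 0.0

def defect_rejection_ratio(rejected: int, valid: int) -> float:
    """Fraction of all raised defects that turned out to be invalid."""
    total = valid + rejected
    return rejected / total if total else 0.0

# Example: 180 defects found in System Testing, 20 more leaked to UAT;
# of all defects raised, 200 were valid and 30 were rejected as invalid.
print(f"Defect Leakage: {defect_leakage(20, 180):.1%}")                  # 10.0%
print(f"Defect Rejection Ratio: {defect_rejection_ratio(30, 200):.1%}")  # 13.0%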

Note that both the above metrics are "the lesser, the better". This is how I measured the quality of test execution for a long time. I request readers to share their thoughts on this as well.


Why are most Test Execution Status Reports useless?

4/3/2016


Because they focus on how much has been accomplished to date, as opposed to how much needs to be accomplished in the available time and what is stopping, or might stop, the progress towards that target.

Here, I am primarily talking about waterfall projects that have dedicated test execution phases (separately for System Testing, User Acceptance Testing, etc.) with planned start and end dates. As a Test Manager, I always used to create and publish daily and weekly status reports (for ongoing projects) that were expected to be used by senior management to identify issues and risks, if any. However, in spite of publishing status reports on a regular basis, in almost every project we surprised senior management (and not pleasantly!) in the last week of test execution with the news that it couldn't be completed on time because of this, this and this reason. (Mostly, "it's the developers' fault!")

A typical status report that I used to publish contained information on the total number of test cases planned, executed, passed and failed to date, grouped by modules or functionalities. It also had information on the total number of defects identified, how many of those were valid, how many were in open, retest or closed status, and so on, grouped by defect severity and priority.

But what story does this status report tell us? Can the situation be quantitatively assessed in terms of how far ahead of or behind schedule the test execution is, and why?

In my experience as a Test Manager, I have figured out a way to handle this situation. My status reports do not brag about what has been done to date. Instead, they focus on the following things:

•    How much needs to be done from today until the planned execution end date?
•    How feasible and practical is that plan in terms of the workload? Do we need more people to meet the target?
•    If we are behind schedule, what are the root causes?
•    How can we measure the impact of the root causes? And how can we improve on them?

So, my test execution status report has the following metrics:

•    Expected Test Execution Productivity (as of date), i.e., the total number of test cases remaining to be executed or retested, divided by the total number of person-days remaining.
•    Percentage increase (or decrease) in Expected Test Execution Productivity with respect to the productivity assumed originally (i.e., at the beginning of test execution). A cut-off value needs to be defined at the beginning of execution; for example, if the increase is +20% or more, then we need either more people or more time. A sketch of both calculations follows this list.
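
As a rough illustration, the two metrics above could be computed as follows. The team size, counts and the 20% cut-off shown here are hypothetical:

def expected_productivity(remaining_test_cases: int,
                          remaining_days: int,
                          team_size: int) -> float:
    """Test cases each tester must execute per day to finish on time."""
    return remaining_test_cases / (remaining_days * team_size)

original = 10.0   # productivity assumed at the beginning of test execution
expected = expected_productivity(remaining_test_cases=600,
                                 remaining_days=5,
                                 team_size=8)
change = (expected - original) / original

print(f"Expected productivity: {expected:.1f} test cases/person/day")
print(f"Change vs. original assumption: {change:+.0%}")   # +50%
if change >= 0.20:   # cut-off defined at the beginning of execution
    print("Escalate: we need either more people or more time.")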

Now, if the progress is behind schedule, there are primarily four possible areas of root cause, each with corresponding metrics, as follows:

•    Testers are either slow or under-staffed. A good measure of this is the traditional Test Execution Productivity, i.e., how many test cases were executed per person per day. If this productivity looks good, then the testing team is under-staffed and there was a problem during resource planning at the beginning of the project. Another measure is the Defect Rejection Rate, i.e., what percentage of identified defects are invalid; this is a good indicator of how much time the testers are wasting identifying and analyzing non-defects.

•    Developers are not able to fix defects quickly, which blocks a number of test cases and prevents the testers from proceeding. Two good measures of this are Defect Aging (i.e., the average number of days in which defects are closed) and Defect Reopen Rate (i.e., what percentage of defects are reopened by testers after retest). There are two similar reasons why this could happen: either the developers are under-skilled or they are under-staffed.

•    Requirements are changing very frequently, which forces the testers to go back and adjust the test design. A standard measure of this is the Requirement Traceability Index, i.e., what percentage of requirements are added, deleted or modified over the life cycle of the project.

•    The Test Environment is not stable. The Environment Stability Index is a good measure of this, in terms of how many days during test execution the environment was down. A sketch of all four metrics follows this list.
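
As a sketch, the four root-cause metrics could be computed along the following lines. The field names and sample records are invented, and treating the Environment Stability Index as an uptime percentage is my assumption:

from datetime import date

# Hypothetical defect records exported from a defect tracker.
defects = [
    {"opened": date(2016, 4, 1), "closed": date(2016, 4, 4), "reopened": False},
    {"opened": date(2016, 4, 2), "closed": date(2016, 4, 9), "reopened": True},
]

# Defect Aging: average number of days in which defects are closed.
aging = sum((d["closed"] - d["opened"]).days for d in defects) / len(defects)

# Defect Reopen Rate: percentage of defects reopened after retest.
reopen_rate = sum(d["reopened"] for d in defects) / len(defects)

# Requirement Traceability Index: percentage of requirements added,
# deleted or modified during the project (counts are invented).
total_reqs, changed_reqs = 120, 18
rti = changed_reqs / total_reqs

# Environment Stability Index: share of execution days the environment was up.
execution_days, down_days = 20, 3
esi = (execution_days - down_days) / execution_days

print(f"Defect Aging: {aging:.1f} days, Defect Reopen Rate: {reopen_rate:.0%}")
print(f"Requirement Traceability Index: {rti:.0%}, Environment Stability Index: {esi:.0%}")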

As you may have noticed, the four root causes relate to four different teams participating in the project: the testing team, the development team, the Business Analysts and the Environment Support team.

To summarize, a test execution status report is far more informative and useful if it contains the various metrics described above.

Contact Testing Algorithms at support@testingalgorithms.com to refine your testing processes and make them more meaningful for all project stakeholders.


Author

Abhimanyu Gupta is the co-founder & President of Testing Algorithms. His areas of interest are innovating new algorithms and processes to make software testing more effective & efficient.

© 2015 - 2018 Testing Algorithms, LLC. All rights reserved.