
"Fail Probability of test case #7 is 69%!"

12/26/2016

During the execution of test cases, wouldn't a statement like the one above, for each pending test case, be helpful?

Also, wouldn't it be even more helpful if these probabilities were revised automatically every time a test case is executed and its outcome is recorded?

Well, I tried to summarize some of the benefits below.

1. Test Prioritization: Fail probabilities assigned to each test case would enable testers to determine the order of execution so that defects (especially the critical and high-severity ones) are identified as early as possible. This would give the developers enough time to fix those defects.

2. Stopping Rule: It would be possible, before test execution starts, to create rules like "stop testing if all remaining test cases have fail probabilities below 10%!" This would help the testing team avoid over-testing by deciding objectively and quantitatively when to stop. It would also help in determining the extent of regression testing for a release.

3. Effort Estimation: When estimating the testing effort for a project and the number of resources required, a fixed percentage (e.g., 15%) of the total testing effort is usually assumed for defect re-testing. However, most of the time we under-estimate it and thus experience a tremendous time-crunch towards the end of testing. With these fail probabilities, defect re-testing effort could be determined more accurately (see the sketch after this list).
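To make these three uses concrete, here is a minimal Python sketch. The test case IDs, the probabilities, and the 10% threshold are all invented for illustration:

```python
# Sketch: using per-test-case fail probabilities for prioritization,
# a stopping rule, and re-testing effort estimation.
# All numbers below are illustrative, not real project data.

fail_prob = {  # test case id -> current fail probability
    "TC-01": 0.69, "TC-02": 0.12, "TC-03": 0.41,
    "TC-04": 0.07, "TC-05": 0.88, "TC-06": 0.03,
}

# 1. Prioritization: execute the most failure-prone test cases first.
execution_order = sorted(fail_prob, key=fail_prob.get, reverse=True)

# 2. Stopping rule: stop once all remaining test cases are below 10%.
THRESHOLD = 0.10
to_run = [tc for tc in execution_order if fail_prob[tc] >= THRESHOLD]

# 3. Effort estimation: the expected number of failures (and hence of
#    defect re-tests) is the sum of the individual fail probabilities.
expected_retests = sum(fail_prob.values())

print("Execution order:", execution_order)
print("Run before stopping:", to_run)
print(f"Expected defect re-tests: {expected_retests:.1f}")
```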

Makes sense?

Now, the question is: is it possible to calculate these fail probabilities?

And if so, how?

Very recently I helped a friend with the analysis of a completely different problem (not even related to software testing). An organization had shared a list of their employees, and our task was to calculate the probability of attrition for each employee, based on demographic information, salary information and survey responses, so that the organization could take the necessary steps to prevent attrition.

This analysis was done using statistical models, and the models' predictive accuracy turned out to be very high.

And, while doing this analysis, I discovered something else that would help in software quality assurance!

I found that determining fail probabilities for test cases at the time of test execution, based on various attributes of the test cases, is exactly the same problem!
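To make the analogy concrete, here is a minimal sketch that treats "will this test case fail?" as a binary classification problem, just like attrition prediction. The attributes, the training data, and the choice of logistic regression are illustrative assumptions only, not our actual solution:

```python
# Hypothetical sketch: predicting test case failure from test case
# attributes, the same way attrition was predicted from employee data.
# Features and data are invented; any classifier could stand in here.
from sklearn.linear_model import LogisticRegression

# One row per past execution: [number of steps, requirements covered,
# prior failures in the same module]; label 1 = failed, 0 = passed.
X = [
    [12, 3, 5], [4, 1, 0], [20, 6, 7], [8, 2, 1],
    [15, 4, 6], [5, 1, 0], [18, 5, 4], [6, 2, 2],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Fail probability for a pending test case with 10 steps, covering
# 3 requirements, with 2 prior failures in the same module.
pending = [[10, 3, 2]]
print(f"Fail probability: {model.predict_proba(pending)[0][1]:.0%}")

# Re-fitting on the growing execution history after each recorded
# outcome is one way the probabilities could be revised automatically.
```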

We, at Testing Algorithms, are working on creating a framework where the fail probability of test cases (generated by our patent-pending automated requirement analysis and test case design solution) can be automatically calculated and revised during test execution.

If you are interested in knowing more, feel free to contact us. We would be happy to talk to you about this.


Analytics for Software Testing Using Defect Data

12/11/2016


I studied Statistics as a student and have spent most of my professional life doing software testing! It has always been my curiosity to find out how analytics can help in identifying patterns while testing software under development. One specific area that always attracted me was test management, where I thought inferences about the performance of the development and testing teams could be drawn from statistical data analysis. And, as part of that investigation, I found that the information contained in logged defects tells us many stories.

In one of my previous projects, we were using a standard defect management tool where each defect followed this workflow:

[Figure: defect workflow diagram showing the statuses New, Open/Reopen, Fixed, and Retest]

While looking at the defect metrics, I always made the following assumptions about existing defects:

1. Defects staying in New status for a long time indicate that there might be a problem with the defect triage process. Either triage should be performed more frequently, or there are many requirement gaps causing confusion about whether an anomaly is a defect or not.

2. Defects staying in Open or Reopen status for a long time indicate that there might be a problem with the performance of the development team. Clearly, they are not able to fix defects effectively and efficiently. Either the development team is understaffed, or they need more technical training.

3. Defects staying in Fixed status for a long time indicate that there might be a problem with the performance of the code deployment team. It is an indication that the testers can't retest the defects yet because the updated code is not available in the test environment.

4. Defects staying in Retest status for a long time indicate that there might be a problem with the performance of the testing team. Clearly, they are not able to retest defects effectively and efficiently. Either the testing team is understaffed, or they need more training on testing processes or the application under test.

So I found a way to measure how long defects stay in these statuses, so that inferences can be made about the root causes and appropriate actions can be taken to improve the speed and/or quality of testing.

I noticed that the defect management tool captures and maintains the history of status changes for all defects, and that a report containing that history can be exported to Excel format. So, using my little coding knowledge, I was able to write a small program that parsed that Excel report and captured how long each defect stayed in each status (a sketch of such a script follows the sample output below). The output of my program looked something like this:

New: 1, 5, 3, 7, 5, 2, 3, 1, 7. (Note: there were 9 defects in total)

Open/Reopen: 5, 8, 4, 2, 6, 8, 7, 8, 9, 4, 2. (Note: one defect was reopened twice)

Fixed: 4, 5, 2, 7, 4, 6, 8, 3, 2, 9, 8. (Note: 9 defects were fixed 11 times)

Retest: 3, 6, 7, 2, 6, 8, 4, 6, 3, 9, 8. (Note: 9 defects were retested 11 times)
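The exact parsing logic depends on the tool's export format, but a sketch of such a script might look like the following. The file name and the column names ("Defect ID", "Status", "Changed On") are assumptions; real tools name these fields differently:

```python
# Sketch: compute how long each defect stayed in each status, from an
# exported status-change history.
from collections import defaultdict
import pandas as pd

history = pd.read_excel("defect_status_history.xlsx")
history = history.sort_values(["Defect ID", "Changed On"])

durations = defaultdict(list)  # status -> list of stay durations (days)

for _, rows in history.groupby("Defect ID"):
    rows = rows.reset_index(drop=True)
    for i in range(len(rows) - 1):
        status = rows.loc[i, "Status"]
        stay = (rows.loc[i + 1, "Changed On"] - rows.loc[i, "Changed On"]).days
        # Pool Open and Reopen into one bucket, as in the lists above.
        key = "Open/Reopen" if status in ("Open", "Reopen") else status
        durations[key].append(stay)

for status, stays in durations.items():
    print(f"{status}: {', '.join(map(str, stays))}")
```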

I assumed a Gaussian or Normal distribution for all the duration data. A Normal distribution looks like this:

[Figure: the bell-shaped curve of a Normal distribution]

So, while testing was in progress, I constructed a normal distribution for each of the 4 sets of data (i.e., for New, Open/Reopen, Fixed & Retest) on a daily basis, to understand how these durations were increasing or decreasing over time. To do this, I used box plot diagrams. Following is an example where box plots are constructed on a daily basis:

[Figure: box plots of status durations, one box per day]

Note that the individual boxes are separate normal distributions viewed from the top. Anyway, I kept plotting the distributions this way for all 4 sets of data (i.e., for New, Open/Reopen, Fixed & Retest) separately. This helped me identify patterns and guess the root cause of a problem, so that I could take the necessary actions to bring the speed and quality of testing back on track, as and when needed.
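As an illustration, here is a minimal matplotlib sketch of this daily tracking for one status, using the Open/Reopen durations above as the sample that grows day by day (the grouping into days is invented):

```python
# Sketch: daily box plots of Open/Reopen stay durations. The grouping
# of the observations into days is invented for illustration.
import matplotlib.pyplot as plt

daily_samples = [  # durations observed up to each day
    [5, 8, 4],
    [5, 8, 4, 2, 6],
    [5, 8, 4, 2, 6, 8, 7],
    [5, 8, 4, 2, 6, 8, 7, 8, 9, 4, 2],
]

plt.boxplot(daily_samples)
plt.xticks([1, 2, 3, 4], ["Day 1", "Day 2", "Day 3", "Day 4"])
plt.ylabel("Days spent in Open/Reopen")
plt.title("Open/Reopen stay durations, tracked daily")
plt.show()
```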

I have used this technique many times since then, in both Waterfall & Agile projects, and it has helped me every time!



    Author

Abhimanyu Gupta is the co-founder & President of Testing Algorithms. His areas of interest include innovating new algorithms and processes to make software testing more effective & efficient.

© 2015 - 2018 Testing Algorithms, LLC.
All rights reserved.
support@testingalgorithms.com