Testing Algorithms, LLC.

The "Leather Seat Dilemma"...

6/28/2016


I was buying a car for my wife a few years back. We did a lot of homework and decided on the brand, model and color of the car we wanted. By the time we entered the showroom, we knew exactly what we were looking for and thought it would be a very quick transaction.

Then the dealer started giving us hundreds of options beyond the brand, model and color of the car. Tinted glass or not, moonroof or not, alloy wheels or not, and the list went on and on. The one that caught our attention was whether to get leather seats or not. While the earlier questions seemed more or less straightforward, this one was tough for both of us.

After a long analysis, we finally decided to go for it!

And then came additional information from the dealer: for the Midwest weather, we would definitely need heated seats if we wanted leather ones.

Eventually, we didn't buy the one with leather seats because, considering the additional value we were getting, the double investment was not worth it to us.

Sometimes, in the software testing world, we do not realize that we make similar uninformed choices. We first buy licenses for expensive test design, test automation or test management tools, and then we pay a lot of money either to hire skilled people or to train existing associates so that the tool can be used in the organization, without ever analyzing whether this double investment is worth the additional value.

Testing Algorithms, LLC. offers a simple automated test case design solution that doesn't require the specialized technical skills that a Model Based Testing tool demands. Even a non-technical person can use this methodology to create better test cases 10x faster.

Visit www.testingalgorithms.com for more details.


Fishing and Software Testing - Another Analogy

6/10/2016

[Figure: repeated capture iterations, n = 20]

Do you know how a fisherman estimates the total number of fish in a lake?

Here are the steps they follow:

1. They add a certain number of uniquely marked fish to the lake.
2. After a few days, they come back and capture a fixed number of fish.
3. In their bucket of captured fish, they determine the proportion of marked fish.
4. They repeat steps 2 and 3 multiple times.

Let's use the following notation to explain this mathematically:

F = Number of (unmarked) fish originally in the lake
M = Number of marked fish added to the lake
f = Number of fish captured in a single iteration
m = Number of marked fish captured in a single iteration
x = Observed proportion of marked fish in one iteration

Clearly, for a single iteration, x = m/f. 

Therefore, F can be estimated by equating the true proportion of marked fish in the lake with the observed proportion in a catch:

M/(M+F) = m/f

Solving for F gives F = M*(f-m)/m.
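
For example (with illustrative numbers, not from the original post): if M = 100 marked fish are added and a single catch of f = 50 contains m = 10 marked fish, then F = 100*(50-10)/10 = 400 fish were in the lake to begin with.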

However, if we consider n trials (i.e., iterations), then the entire problem boils down to a statistical estimation problem for the Binomial distribution: the number of marked fish captured in each iteration follows Binomial(f, p) with p = M/(M+F), so pooling the n observations yields a better estimate of p, and hence of F.

Note that in the figure above, n = 20.
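
As a quick illustration of the pooled estimate, here is a minimal Python sketch; all numbers are made up (and n = 3 rather than 20):

```python
# Minimal sketch of the pooled capture-recapture estimate.
# All figures are illustrative assumptions, not data from the post.

M = 100                                   # marked fish added to the lake
catches = [(50, 9), (50, 11), (50, 10)]   # (f, m) pairs for n = 3 iterations

total_f = sum(f for f, m in catches)      # all fish captured across iterations
total_m = sum(m for f, m in catches)      # marked fish captured across iterations

p_hat = total_m / total_f                 # pooled estimate of p = M/(M+F)
F_hat = M * (1 - p_hat) / p_hat           # solve p = M/(M+F) for F

print(F_hat)                              # 400.0 -> about 400 unmarked fish
```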

But the question is: how does this prediction method apply in the context of software testing?

Well, the Business Analysts and/or developers can intentionally inject some defects without informing the testers. Then, by monitoring the defects incrementally found by the testers and applying the above formula, the total number of defects can be estimated.
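
As a minimal sketch of how this could work in practice (the function name and all figures below are illustrative assumptions, not from the original post):

```python
# Capture-recapture ("defect seeding") estimate for software testing.
# Seeded defects play the role of marked fish; defects found by testers
# play the role of a catch. Names and numbers are illustrative assumptions.

def estimate_real_defects(seeded, found_total, found_seeded):
    """Return the estimated number of real defects, F = M * (f - m) / m.

    seeded       -- M: defects intentionally injected by BAs/developers
    found_total  -- f: all defects the testers have found so far
    found_seeded -- m: how many of those found defects were seeded ones
    """
    if found_seeded == 0:
        raise ValueError("No seeded defects found yet; estimate is undefined.")
    return seeded * (found_total - found_seeded) / found_seeded

# Example: 25 defects seeded; testers have logged 40 defects, 10 of them seeded.
# Estimate: 25 * (40 - 10) / 10 = 75 real defects, of which 30 are found so far.
print(estimate_real_defects(seeded=25, found_total=40, found_seeded=10))
```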

Thoughts?


Why are most Test Execution Status Reports useless?

4/3/2016

Because they focus on how much has been accomplished to date, as opposed to how much needs to be accomplished in the available time, and what is stopping, or might stop, progress towards that target.

Here, I am primarily talking about waterfall projects that have dedicated test execution phases (separately for System Testing, User Acceptance Testing, etc.) with planned start and end dates. As a Test Manager, I always used to create and publish daily and weekly status reports (for ongoing projects) that were expected to be used by senior management to identify issues and risks, if any. However, in spite of publishing status reports on a regular basis, in almost every project we surprised senior management (and not in a good way!) in the last week of test execution with the news that it couldn't be completed on time because of this, this and this reason. (Mostly, "it's the developers' fault!")

A typical status report that I used to publish contained information on the total number of test cases planned, executed, passed and failed to date, grouped by modules or functionalities. It also had information on the total number of defects identified; how many of those were valid; and how many were in open, retest or closed status, grouped by defect severity and priority.

But what story does this status report tell us? Can the situation be quantitatively assessed, in terms of how far ahead of or behind schedule test execution is, and why?

In my experience as a Test Manager, I have figured out a way to handle this situation. My status reports do not brag about what has been done to date. Instead, they focus on the following things:

•    How much needs to be done from today until the planned execution end date?
•    How feasible and practical is that plan in terms of workload? Do we need more people to meet the target?
•    If we are behind schedule, what are the root causes?
•    How can we measure the impact of the root causes? And how can we improve on them?

So, my test execution status report has the following metrics (a short sketch of how to compute them follows the list):

•    Expected Test Execution Productivity (as of today), i.e., the total number of test cases remaining to be executed or retested, divided by the number of person-days remaining; in other words, the pace required per person per day to finish on time.
•    Percentage increase (or decrease) in Expected Test Execution Productivity with respect to the productivity originally assumed (i.e., at the beginning of test execution). A cut-off value needs to be defined at the start of execution; for example, if the increase is +20% or more, then we need either more people or more time.
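
As a minimal sketch (function names and sample figures are illustrative assumptions, not from any standard tool), these two metrics could be computed like this:

```python
# Illustrative computation of the two forward-looking metrics above.

def expected_productivity(tests_remaining, days_remaining, testers):
    """Test cases each tester must execute per day to finish on time."""
    return tests_remaining / (days_remaining * testers)

def productivity_change_pct(required, originally_assumed):
    """Percentage increase (or decrease) vs. the pace assumed at the start."""
    return (required - originally_assumed) / originally_assumed * 100

# Example: 600 test cases left, 10 working days, 5 testers; the original
# plan assumed 10 test cases per person per day.
required = expected_productivity(600, 10, 5)      # 12.0 cases/person/day
change = productivity_change_pct(required, 10.0)  # +20.0%
if change >= 20:                                  # the cut-off defined up front
    print("Escalate: we need either more people or more time.")
```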

Now, if progress is behind schedule, there are primarily four possible root-cause areas, each with corresponding metrics, as follows (a sketch computing all four appears after the list):

•    Testers are either slow or under-staffed. A good measure of this is the traditional Test Execution Productivity, i.e., how many test cases were executed per person per day. If this productivity looks good, then the testing team is most likely under-staffed and there was a problem with resource planning at the beginning of the project. Another measure is the Defect Rejection Rate, i.e., what percentage of identified defects are invalid; this is a good indicator of how much time the testers are wasting identifying and analyzing non-defects.

•    Developers are not able to fix defects quickly, which blocks a number of test cases and prevents the testers from proceeding. Two good measures of this are Defect Aging (i.e., the average number of days in which defects are closed) and Defect Reopen Rate (i.e., what percentage of defects are reopened by testers after retest). There are two likely reasons why this happens: either the developers are under-skilled or they are under-staffed.

•    Requirements are changing very frequently, which forces the testers to go back and adjust the test design. A standard measure of this is the Requirement Traceability Index, i.e., what percentage of requirements are added, deleted or modified over the life cycle of the project.

•    The Test Environment is not stable. The Environment Stability Index is a good measure of this: how many days the environment was down during test execution.
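
As a rough sketch (the defect record layout, field names and sample figures below are illustrative assumptions), the four families of metrics could be computed like this:

```python
# Illustrative computation of the four root-cause metrics described above.

def defect_rejection_rate(defects):
    """% of identified defects that turned out to be invalid (non-defects)."""
    return 100.0 * sum(1 for d in defects if not d["valid"]) / len(defects)

def defect_aging(defects):
    """Average number of days taken to close a defect."""
    closed = [d["days_to_close"] for d in defects if d["days_to_close"] is not None]
    return sum(closed) / len(closed)

def defect_reopen_rate(defects):
    """% of defects reopened by testers after retest."""
    return 100.0 * sum(1 for d in defects if d["reopened"]) / len(defects)

def requirement_traceability_index(changed_reqs, total_reqs):
    """% of requirements added, deleted or modified during the project."""
    return 100.0 * changed_reqs / total_reqs

def environment_stability_index(days_down, execution_days):
    """% of test execution days on which the environment was down."""
    return 100.0 * days_down / execution_days

defects = [
    {"valid": True,  "days_to_close": 4,    "reopened": False},
    {"valid": False, "days_to_close": None, "reopened": False},
    {"valid": True,  "days_to_close": 9,    "reopened": True},
]
print(defect_rejection_rate(defects))          # 33.3% of defects were invalid
print(defect_aging(defects))                   # 6.5 days to close on average
print(defect_reopen_rate(defects))             # 33.3% reopened after retest
print(requirement_traceability_index(12, 80))  # 15.0% of requirements churned
print(environment_stability_index(3, 30))      # 10.0% of execution days lost
```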

As you may have noticed, the four root causes map to four different teams participating in the project: the testing team, the development team, the Business Analysts and the Environment Support team.

To summarize, a test execution status report is far more informative and useful if it contains the metrics described above.

Contact Testing Algorithms at support@testingalgorithms.com to refine your testing processes and make them more meaningful for all project stakeholders.



    Author

Abhimanyu Gupta is the co-founder & President of Testing Algorithms. His areas of interest include devising new algorithms and processes to make software testing more effective & efficient.

© 2015 - 2018 Testing Algorithms, LLC.
All rights reserved.

support@testingalgorithms.com