Testing Algorithms, LLC.

Self-maintaining Automated Test Scripts

4/3/2017

I grew up in a big city in India. A hot one!

Air-conditioners were not that uncommon in my childhood. However, there was one problem. It used to get very cold inside whenever the outside temperature dropped in the middle of the night. One had to wake up, get out of bed, and adjust the temperature before going back to sleep.

Then came the age of remote controls, and air-conditioners started shipping with remote controls so that one wouldn't have to get up to adjust the temperature. However, one still had to wake up!

The evolution of test automation is very similar to that of the air-conditioner.

In our earlier days, we had record-and-playback scripts, which had to be adjusted every time there was a change in requirements (just like the outside temperature). Then came automation frameworks, which made the maintenance of automation scripts easier. Just like the remote control.

But a tester still needs to wake up and adjust the temperature manually!

Then a fundamental concept of Physics changed the whole air-conditioning industry.

Thermostats!

Now we don't have to even wake up for a temperature adjustment!

We, at Testing Algorithms, are working on a thermostat for test automation. We have been able to link test automation scripts directly to the business requirements, so that when one changes, the other changes automatically.

If you are interested in seeing our solution, watch our video.

To read our previous article on this topic, please see this post.

If you are looking for more information, please contact us.


Analytics for software testing using Defect data

12/11/2016

I studied Statistics as a student and have spent most of my professional life in software testing! I have always been curious about how analytics can help identify patterns while testing software under development. One specific area that always attracted me was test management, where I thought inferences about the performance of the development and testing teams could be drawn from statistical data analysis. As part of that investigation, I found that the information contained in logged defects tells many stories.

In one of my previous projects, we were using a standard defect management tool in which individual defects followed this workflow:

[Figure: defect status workflow, showing the New, Open/Reopen, Fixed, and Retest statuses]

While looking at the defect metrics, I always made the following assumptions about existing defects:

1. Defects staying in the New status for a long time indicate that there might be a problem with the defect triage process. Either triage should be performed more frequently, or there are many requirement gaps that are causing confusion about whether an anomaly is a defect or not.

2. Defects staying in the Open or Reopen status for a long time indicate that there might be a problem with the performance of the development team. Clearly, they are not able to fix defects effectively and efficiently. Either the development team is understaffed, or they need more technical training.

3. Defects staying in the Fixed status for a long time indicate that there might be a problem with the performance of the code deployment team. It is an indication that the testers can't retest the defects yet because the updated code is not available in the test environment.

4. Defects staying in the Retest status for a long time indicate that there might be a problem with the performance of the testing team. Clearly, they are not able to retest defects effectively and efficiently. Either the testing team is understaffed, or they need more training on the testing processes or the application under test.

So I found a way to measure how long defects stay in these statuses, so that inferences can be made about the root cause and appropriate actions can be taken to improve the speed and/or quality of testing.

I noticed that the defect management tool captures and maintains the history of status changes for all defects, and that a report containing that history can be exported to Excel format. So, using my limited coding knowledge, I was able to write a small program that parsed that Excel report and captured how long each defect stayed in each status (a sketch of such a parser follows the sample output below). The output of my program looked something like this:

New: 1, 5, 3, 7, 5, 2, 3, 1, 7. (Note: there were 9 defects in total)

Open/Reopen: 5, 8, 4, 2, 6, 8, 7, 8, 9, 4, 2. (Note: one defect was reopened twice, hence 11 values)

Fixed: 4, 5, 2, 7, 4, 6, 8, 3, 2, 9, 8. (Note: the 9 defects were fixed 11 times)

Retest: 3, 6, 7, 2, 6, 8, 4, 6, 3, 9, 8. (Note: the 9 defects were retested 11 times)
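
For readers who want to try something similar, here is a minimal sketch of such a parser in Python (using pandas). The file name and the column names ("Defect ID", "Status", "Changed On") are hypothetical and would need to match the actual export from your defect management tool; it also assumes the durations are measured in whole days.

import pandas as pd

# One row per status change per defect (hypothetical export format).
history = pd.read_excel("defect_history.xlsx")
history["Changed On"] = pd.to_datetime(history["Changed On"])
history = history.sort_values(["Defect ID", "Changed On"])

durations = {}  # status -> list of days spent in that status
for defect_id, rows in history.groupby("Defect ID"):
    rows = rows.reset_index(drop=True)
    # Each stay in a status lasts until the defect's next status change;
    # the current (still open-ended) stay of each defect is not counted.
    for i in range(len(rows) - 1):
        status = rows.loc[i, "Status"]
        days = (rows.loc[i + 1, "Changed On"] - rows.loc[i, "Changed On"]).days
        durations.setdefault(status, []).append(days)

for status, days in durations.items():
    print(status + ":", days)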

I assumed a Gaussian or Normal distribution for all the duration data. A Normal distribution looks like this:

[Figure: the bell curve of a Normal distribution]

So, while testing was in progress, I constructed a normal distribution for each of the 4 sets of data (i.e., for New, Open/Reopen, Fixed & Retest) on a daily basis, to understand how these durations were increasing or decreasing over time. To do this, I used box plot diagrams. Following is an example where box plots are constructed on a daily basis:

[Figure: box plots of the duration data, one box per day]

Note that the individual boxes are separate normal distributions viewed from the top. I kept plotting the distributions in this way for each of the 4 sets of data (i.e., New, Open/Reopen, Fixed & Retest) separately. This helped me identify patterns and guess the root cause of problems, so that I could take the necessary actions to bring the speed and quality of testing back on track, as and when needed.
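
For the plotting step, a few lines of Python with matplotlib are enough to draw one box per day for a given status. This is only a sketch; the numbers below are invented for illustration, and the durations are assumed to be in days.

import matplotlib.pyplot as plt

# Hypothetical daily samples of time spent in Open/Reopen (in days).
daily_durations = {
    "Day 1": [5, 8, 4, 2, 6],
    "Day 2": [8, 7, 8, 9, 4, 2],
    "Day 3": [6, 7, 9, 10, 5],
}

plt.boxplot(list(daily_durations.values()), labels=list(daily_durations.keys()))
plt.ylabel("Days in Open/Reopen status")
plt.title("Daily distribution of time spent in Open/Reopen")
plt.show()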

I have tried this technique many times since then, in both Waterfall & Agile projects, and it has helped me every time!


Still writing Given-When-Then statements manually?

12/2/2016

Over the last few years, many organizations have realized that software development is not a production process; rather, it is more of a research and development process, as the Agile Manifesto indicated in 2001. Since then, many approaches and techniques have been crafted that have been, and are still being, used successfully in these organizations.

Behavior Driven Development (BDD) is one of them, and probably the most popular today. It uses Given-When-Then (i.e., Gherkin) statements that serve multiple purposes:

1. Elaborates the product feature with details and examples,
2. Aligns the Definition of Done (DOD) with business users,
3. Specifies the acceptance criteria of a user story, and
4. Accelerates Test Driven Development (TDD) using test automation.
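
For illustration only, a Given-When-Then statement for a hypothetical login feature might look like the following (the feature and its wording are invented here purely as an example):

Feature: User login
  Scenario: Registered user logs in with valid credentials
    Given a registered user with a valid email address and password
    When the user submits the login form with those credentials
    Then the user is taken to their dashboard
    And a welcome message is displayed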

However, the current practice in the industry is to write the Given-When-Then statements manually, as a team.

But what if we go up one more level of abstraction? In other words, can we capture the feature, or the behavior, of the software under development using a much simpler model and generate not only the Given-When-Then statements, but also optimized manual test cases in Quality Center format, various types of requirement traceability matrices, a test case priority matrix, traditional use cases, and process flow diagrams?

We, at Testing Algorithms, came up with a solution for that based on our years of research. It is a methodology that helps in structured thinking about, and representation of, the application behavior, so that optimization and visualization can be handled by a mathematical algorithm. Based on our case studies, this model is able to create all of these artifacts, including better-quality test cases, ten times faster.

Agile is about innovation. And we believe in helping each other to grow as a community. So we have decided to let everyone use our research-based approach for free. Our website is https://www.testingalgorithms.com. 

If you want to try our requirement analysis and test case design solution, please register using https://www.testingalgorithms.com/registration.html.

For any questions, contact us at https://www.testingalgorithms.com/contact-us.html.


Inferences from Testing Challenges Survey

7/11/2016

Participants:

Participants of the survey were mostly:
1. Part of dedicated testing teams (92% of the organizations)
2. Following Agile methodology (89% of the organizations)
3. Not in managerial positions (77% are Test Analysts/Leads)
4. Located in Asia & Europe (76% of the participants)
5. From bigger organizations (67% have >200 employees, 59% have >500 employees)


Observations:
 
6. Requirement Analysis (64%) is a bigger area of improvement than Automated Test Execution (61%).

7. Test Data Creation (45%) is a bigger area of improvement than Status Reporting (38%).

8. Test Case Creation (42%) is a bigger area of improvement than Manual Test Execution (32%).

9. Defect Management (26%) is not a big problem in most organizations.

10. Combining points 6-9, it looks like the test execution, defect management, and status reporting processes have been streamlined quite a bit. However, requirement analysis & test planning are emerging as the primary pain area in many organizations.

11. Only 41% of organizations use Model Based Testing tools for test case creation, and only 36% use a Test Data Management tool. This confirms our hypothesis in point 10.

12. Automated Regression Testing has been adopted by most organizations (80%), but 87% of participants still feel that testing takes too much time. This also confirms our hypothesis in point 10.

13. Even though 80% of organizations use Automated Regression Testing, 74% say that the testing phase exceeds its budget. This indicates either or both of the following:

a. Automated Regression Testing hasn't yet reached its break-even point;
b. Automated Regression Testing is not cost-effective.

14. Most importantly, 92% of organizations fail to prevent defect leakage to the next testing environment, in spite of spending more time, spending more money, and utilizing strong test management and automated testing processes.


Conclusion:
 
The primary problems are in the requirement analysis and test design processes. These have been manual processes from the beginning, and their quality is believed to depend on human skill rather than automated techniques. There are various tools and techniques currently available that can make requirement analysis and test design much faster and better. It's high time testing organizations started moving in the direction of automated test case design. In fact, I predict that automated test case design will be one of the most in-demand topics in the software testing area in the next decade.


How should the quality of test execution be measured?

4/23/2016

As a test manager, I always wanted to assess, at the end of a testing project, whether the quality of testing was good enough to ensure the quality of the product. The software testing literature suggests hundreds of metrics that managers can use. However, not all of them are intended to measure the performance of the testing team.

Various test metrics can be divided into three primary categories: Product metrics (intended for Application Managers), Project metrics (intended for Project Managers), and Process metrics (intended for Test Managers). Here, our primary interest is Process metrics, which specifically measure the quality of testing, or in other words, the performance of the testers. Let's see which metrics serve this purpose.

Is it the total number of defects found?

No. This is neither a Product, Project, nor Process metric, because it doesn't tell you a story. Take an example where the testers found 200 defects. We can't infer anything from this number: we don't know whether the testers did a good or a bad job, because we don't know how many defects are still unidentified.

Is it Test Execution Productivity?

This Process metric does measure the tester's performance, but in terms of speed, not quality. If a tester executes test cases at lightning speed but with errors, it doesn't serve the purpose of testing in the first place.

Then what are the metrics that assess the quality of test execution? Well, in order to assess this, we should answer the following two questions:

(A) Did the testers identify all the valid defects?
(B) Did the testers spend too much time and effort on invalid defects?

And, these two questions can be answered very easily with the following two metrics:

(A) Defect Leakage (to the next upper environment) = Total number of defects identified in next upper environment / Total number of defects identified (in lower + next upper environments)
(B) Defect Rejection Ratio = Total number of defects rejected / Total number of defects (valid + rejected)
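
As a quick illustration with made-up numbers: if the testers logged 45 defects in the current environment and 5 more defects were found in the next upper environment, Defect Leakage = 5 / (45 + 5) = 10%. Similarly, if 40 defects were valid and 8 were rejected as invalid, Defect Rejection Ratio = 8 / (40 + 8) ≈ 17%.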

Note that for both of the above metrics, lower is better. This is how I have been measuring the quality of test execution for a long time. I request readers to share their thoughts on this as well.


    Author

Abhimanyu Gupta is the co-founder & President of Testing Algorithms. His areas of interest include innovating new algorithms and processes to make software testing more effective & efficient.

© 2015 - 2018 Testing Algorithms, LLC.
All rights reserved.
support@testingalgorithms.com