Test Gap Analysis

Test Gap analysis combines information about code changes and test coverage
to uncover critical changes that no test has executed.
This analysis enables you to prioritize testing efforts and
detect defects early in the release cycle instead of in production.

Background
Where do defects occur in our code?

Defects often occur in code areas that have changed recently. Test Gap analysis shows which of these changes have not yet been tested, so such defects can be caught before release.

Untested changes
Defects remain undetected due to Test Gaps

It is no secret that new or changed code is a frequent source of defects. Testers therefore attach great importance to thoroughly testing new and changed functionality. Unfortunately, this does not work reliably in practice.

In a study with Munich Re, we tracked two releases of a business information system. In both releases, over 50% of the changes went untested, and these untested changes had a five times higher defect probability.

Without tool support, this gap in testing, which left over half of the changes untested, would have gone undetected until release, even though Munich Re uses a very structured, multi-stage testing process. The reason for these Test Gaps is that information about changes is spread across different systems and teams, and it is usually incomplete.

Test Gap analysis shows you all untested changes, so you can close these gaps before release.

Automation
Detect Test Gaps automatically

Test Gap analysis combines multiple data sources to detect all untested changes fully automatically.

Code changes are identified by comparing different versions in the version control system (e.g. Git or Subversion).
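As an illustration, the core of change detection can be sketched in a few lines of Python, using the standard library's difflib in place of a real version control diff. The function name and the sample code snippets are hypothetical:

```python
import difflib

def changed_lines(old: str, new: str) -> set[int]:
    """Return the 1-based line numbers in the new version that were added or modified."""
    matcher = difflib.SequenceMatcher(None, old.splitlines(), new.splitlines())
    changed = set()
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # "replace" marks modified lines, "insert" marks new ones;
        # "equal" and "delete" leave no line in the new version to test.
        if tag in ("replace", "insert"):
            changed.update(range(j1 + 1, j2 + 1))
    return changed

# Illustrative example: a validation check was added to an existing function.
old = "def area(w, h):\n    return w * h\n"
new = "def area(w, h):\n    if w < 0 or h < 0:\n        raise ValueError\n    return w * h\n"
print(sorted(changed_lines(old, new)))  # → [2, 3]
```

A real tool would run such a comparison over the version history of every file in the repository, which is exactly what the version control system's diff provides.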

Tests are analyzed with coverage profilers, which record in the background which code is executed. They work for all types of tests, including manual tests, and are available for all commonly used technologies, platforms, and programming languages.

Test Gap analysis combines the information about code changes and tested code to show you all untested code changes.
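In essence, this combination is a set difference per file: changed lines minus executed lines. A minimal sketch, with made-up file names and line numbers:

```python
# Changed lines per file (e.g., from a VCS diff) and executed lines per file
# (e.g., from a coverage profiler). All names and numbers are illustrative.
changed = {"billing.py": {10, 11, 42}, "report.py": {7}}
covered = {"billing.py": {11, 42, 99}, "report.py": set()}

# A Test Gap is a changed line that no test has executed.
test_gaps = {
    path: lines - covered.get(path, set())
    for path, lines in changed.items()
}
test_gaps = {path: gap for path, gap in test_gaps.items() if gap}
print(test_gaps)  # → {'billing.py': {10}, 'report.py': {7}}
```

Note that code which is covered but unchanged (line 99 here) is not a Test Gap: it was already tested in the previous release.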

From a manager's point of view
Test Gaps for the entire system

As a test manager, I want to see all Test Gaps anywhere in my code base before a new release.

This allows me to make a risk-based decision about which Test Gaps need to be closed before the release.

To do this, Test Gap analysis identifies all changes between the last release (currently running in production) and the new one.

From the tester's point of view
Test Gaps for individual features

As a tester, I want to focus my daily work on the Test Gaps in the functionality I am currently testing.

Test Gaps at the feature level show which code has changed in a feature's implementation but has not yet been tested.

In our own development process, part of a feature's "Definition of Done" is that it has no Test Gaps.

Experience exchange
Would you like to exchange experiences on Test Gap analysis?

Any complex analysis raises questions: Is it applicable to your situation at all? What experiences have other companies in your industry had with Test Gap analysis? Are the technologies you use supported?

I have worked on Test Gap analysis for over 10 years: in research papers, as a speaker at industry conferences, in conversations with testers and test managers, and with customers who have been using it for years.

I'm happy to chat with you about Test Gap analysis. Please get in touch; I look forward to our exchange :-)