Using Gap Analysis In The Testing Process To Align Devs and Testers

Gap analysis in testing helps you identify what code is being released untested and thus can help you focus your testing resources more effectively.


October 4, 2018
Tamas Cser


It is a widely held belief that developers and testers are like chalk and cheese. Developers often look down their noses at testers, viewing them as an inflexible, unhelpful drag on new development who never provide clear details in their bug tickets. Equally, testers are often irritated by developers who fail to keep them informed of code changes and who push new changes in the middle of test cycles. This is why gap analysis in the QA testing process can be especially beneficial for aligning devs and testers.

While this view may be exaggerated, it is often the case that developers and testers don't communicate well. Frequently, communication only happens via management (project managers and line managers) or via tickets in Jira. The result is that testing often turns out to be inadequate. In particular, all too often new code is poorly tested or not tested at all. Research has shown that this untested new code is the biggest source of future bugs and failures. This is where the concept of Test Gap Analysis comes into play.

In this blog, we will explore what Test Gap Analysis is, show how it can help identify the code that needs to be tested, and consider whether this approach is the panacea that can bridge the gap between devs and testers.

What is Test Gap Analysis?

When you first develop a software system, it’s easy to be sure you are testing the whole system. Testers are able to take the code specifications and use a combination of planned and exploratory testing to ensure they test every moving part of the codebase. However, most software is not static. Companies either make monolithic periodic releases of long-lived code or they employ some level of continuous integration and continuous deployment (CI/CD). In either case, the codebase is constantly changing. However, all too often the test cases aren’t updated at the same rate. Sometimes the testers simply don’t know a code change has been made. Other times, the code change is pushed just after a particular set of tests have been run. Those tests appear to have passed, but the code change may have broken them.

Test Gap Analysis is the process of identifying these gaps where new code has been deployed but hasn't been tested yet. It requires a combination of static analysis of all code changes and dynamic analysis of all current testing. By comparing the two, you can easily see where the gaps are: areas of new code that have not been adequately tested. Typically, this is done by plotting the code as a tree map in which the code is divided into functional blocks; below each block are its constituent classes, and below those are the actual methods and functions. At each level of the hierarchy, the relative size of a block indicates the amount of code it contains. By overlaying the tree showing the code changes with the tree showing the current state of testing, it is easy to spot the areas where test coverage is missing.

A tree map showing unchanged code (grey), revised/new code that is tested (green), revised code that is untested (orange) and new code that is untested (red).
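To make the core comparison concrete, here is a minimal sketch of the set difference at the heart of the approach. It assumes the static analysis yields the set of changed methods and the dynamic analysis yields the set of methods executed by the test suite; the method names are purely illustrative.

```python
# Minimal sketch of the Test Gap Analysis comparison.
# The method names below are illustrative, not from a real codebase.

# Static analysis output: methods that are new or changed since the last release
changed_methods = {
    "billing.Invoice.total",
    "billing.Invoice.apply_discount",  # new in this release
    "auth.Session.refresh",
}

# Dynamic analysis output: methods actually executed by the test suite
tested_methods = {
    "billing.Invoice.total",
    "auth.Session.refresh",
    "auth.Session.login",  # old code, still covered
}

# The "test gap": changed or new code that no test has executed
test_gap = changed_methods - tested_methods
print(sorted(test_gap))  # ['billing.Invoice.apply_discount']
```

A real tool performs this same comparison at every level of the tree map (functional blocks, classes, methods), which is what lets it colour each node by its change and coverage status.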

 

Why might Test Gap Analysis matter in the testing process?

In a 2013 paper, researchers from the Technical University of Munich studied the correlation between new, untested code and future software bugs. They worked with Munich Re (a large insurance company), monitoring one of its long-lived IT systems across two releases. At each release, they established which code was new, which code was tested, and which code was released untested. They then monitored the system over an extended period and traced every reported bug back to its original source. They discovered two key things. Firstly, roughly a third of all code was being released untested. Secondly, between 70 and 80% of all bugs were found in untested code. The graph below shows this more clearly.

A chart showing how the bugs in each release were distributed between old and new, tested and untested code.

 

As you can see, the old, tested code contained essentially no bugs. The new, tested code accounted for between 22 and 30% of the bugs, and the remaining bugs were split fairly evenly between new untested code and old untested code. However, since new code made up only about 15% of the codebase, the new untested code accounted for a disproportionately high share of the bugs.
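To see what "disproportionately high" means in practice, here is a rough back-of-the-envelope calculation. Only the bug shares loosely follow the figures reported above; the split of the codebase between the four categories is an assumption made purely for illustration.

```python
# Illustrative only: the code-share split below is assumed, not taken from the study.
share_of_codebase = {
    "new untested": 0.07,  # part of the ~15% new code
    "new tested":   0.08,
    "old untested": 0.26,  # with new untested: roughly a third of code untested
    "old tested":   0.59,
}
share_of_bugs = {
    "new untested": 0.37,
    "new tested":   0.26,
    "old untested": 0.37,
    "old tested":   0.00,
}

for category, code_share in share_of_codebase.items():
    density = share_of_bugs[category] / code_share
    print(f"{category:>13}: {density:.1f}x the bugs you'd expect from its size")
```

Under these assumptions, new untested code produces roughly five times as many bugs as its share of the codebase would suggest, which is exactly the imbalance Test Gap Analysis is meant to expose.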

Test Gap Analysis is designed to identify this untested new code, but it can also help you in other ways. Because it monitors what you are actually testing, it may reveal areas of existing code that are obsolete (still being tested, but no longer called by anything). It can also highlight where you need to concentrate your testing resources. Run regularly, it helps managers improve test planning, focusing effort on new code and aiming for even test coverage.

Devs and Testers aligned through Test Gap Analysis

Test Gap Analysis is clearly a powerful concept, but it's also clear that not all teams will benefit equally from it. The teams that will benefit most are those maintaining long-lived codebases with periodic monolithic releases. In long-lived codebases, developers are often working on code that was written by other people, and testers may be relying on test plans produced several versions earlier. Taken together, these factors can mean that no one is quite sure which code is being tested or how that code interacts with the rest of the system. Here, TGA lets the testers see exactly which code has changed, so they can focus on it. They can also identify code in the existing system that is untested.

Teams using CI/CD may also benefit from this approach, as it allows the testers to quickly identify exactly which code has changed and concentrate on that. It also helps avoid the issue mentioned above, where a piece of code is altered straight after it has been tested and then gets released with the changes untested.
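For teams that want to experiment with this idea in their pipeline, here is a hedged sketch of a per-change gap check. It assumes your test job has already written a plain list of covered files (one path per line); the file name coverage_files.txt and the helper load_covered_files are stand-ins for whatever your coverage tool actually produces.

```python
# Hedged sketch: flag files changed since the base branch that no test executed.
import subprocess

def changed_files(base_branch: str = "origin/main") -> set[str]:
    """Python files touched since the base branch, according to git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_branch, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def load_covered_files(report_path: str = "coverage_files.txt") -> set[str]:
    """Stand-in: reads one covered file path per line from your coverage tool's output."""
    with open(report_path) as report:
        return {line.strip() for line in report if line.strip()}

if __name__ == "__main__":
    gap = changed_files() - load_covered_files()
    if gap:
        print("Changed but untested files:")
        for path in sorted(gap):
            print("  " + path)
        raise SystemExit(1)  # fail the build so the gap is visible immediately
```

File-level granularity is crude compared with the method-level tree maps described above, but even this coarse check surfaces changes that slipped past the test suite before they reach a release.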

On the other hand, teams working on new or short-lived code will benefit less, since, by definition, most of the code will initially be untested. Here it is important to rely on standard test methodologies to ensure thorough coverage. It may, however, be useful for such teams to start monitoring their test coverage with TGA, since that will help them avoid gaps later on.

What are the potential issues?

There are a few issues with TGA. One of the biggest is that it can't tell you which code is actively being called in the codebase. Developers often add new code in preparation for future releases, but since this code is inactive, the test suite cannot call it. As a result, it will always show up as new, untested code. Equally, many large codebases contain blocks of old or orphaned code. These should be cleaned up periodically, but in the meantime they too will distort the picture painted by Test Gap Analysis.

Another important observation is that just because a piece of code is tested doesn't mean it can't trigger bugs elsewhere. Some of the worst bugs are ones where a small change in one piece of code causes a function or method to fail in a totally unrelated block. So it is vital to keep doing both exploratory testing and regression testing. What TGA can do is identify areas of your existing codebase that aren't being properly exercised by your regression tests.

What other alternatives help to bridge the gap?

Test Gap Analysis is definitely a useful tool for some teams. However, there are other ways to avoid a mismatch between what your testers are testing and what your coders are coding. One way is to use more intelligent test automation that can identify where and when things have changed, both on the frontend and, importantly, in the backend. Here at Functionize, our tests use intelligent fingerprinting coupled with machine learning and self-healing to reduce test maintenance. This allows the tests to proactively adapt to frontend changes as well as to monitor things like server calls, response times, and whether buttons and functions are actually visible. This means your testers won't get caught out by changes made in the backend code or by design/CSS changes that alter the frontend layout.

Our intelligent system can also create tests by monitoring how users are interacting with the system. This helps ensure there are fewer gaps in the test coverage. One of our newest tools allows you to write new tests in plain English. These are parsed by our Natural Language Processing tool and converted into usable tests. This means that during development, devs can simply specify what will need to be tested using normal English, thus closing the gap between the two disciplines further.

Conclusions

Test Gap Analysis helps you identify what code is being released untested and thus helps you focus your testing resources more effectively. Unsurprisingly, untested code turns out to be far more likely to contain bugs, so any tool that helps ensure this code is properly tested is valuable. However, as we have seen, TGA can only supplement existing best practices. It is vital to keep up your regression testing and exploratory testing. Many of the benefits of TGA can also be achieved by using intelligent testing tools. But overall, the main thing to avoid is isolating your test team from the work of your devs. To misquote an old adage, you should make sure the left hand knows what the right hand is doing!