Test debt explained. Why your team has no spare test capacity.

Test debt is a real problem for most QA teams. It eats your capacity and means less and less productive testing gets done, as I explain here.

July 7, 2021
Gary Messiana

Many years back, I realized there was a huge problem in test automation. Not only were the tools woefully outdated, they were also driving teams into test debt. Here, I explain what this means and show how it impacts your company. I’ve also worked with my team to create a useful infographic that illustrates the problem. - Tamas Cser

The problem with test automation

Test automation is essential to keep up with the pace of modern application development. However, over the years, test automation tools have failed to keep up. They are stuck in the dark ages compared with the tools your development team uses. The problem is, most legacy test automation relies on test scripting. But test scripts are notoriously prone to breaking as your application changes. The result is, your team starts to spend more and more time on test maintenance and less and less on creating tests. In turn, they lose focus on their central aim, which is ensuring your software is bug-free and reliable. I call this problem test debt.

Test capacity

Every team has a finite capacity they can spend on test automation. They have to split this time between creating tests and analysing test results. When a team starts automating tests, they have plenty of spare test capacity. But that rapidly reduces as they become familiar with the tools and start to automate more tests. More automated tests means more test results to analyse. Eventually, they will reach their overall capacity. This idealized case is shown in the graph below.

Managers think QA teams split their time between test creation (green) and test analysis (blue); the upper limit is your team’s overall capacity.

Test maintenance

So, all seems well with your QA team. They are automating more tests and increasing your test coverage. But there’s a fly in the ointment. Test scripts rely on selectors (such as an element ID or XPath) to find elements in your UI. The script can then interact with that element and check what happens. For instance, you can tell it to click a specific button or enter text in a certain field. The problem is, these selectors change unpredictably as your application changes. Even simple styling or layout changes can result in the script choosing the wrong button, or entering text in the wrong field. The upshot is, when you update your app, your tests will suddenly fail. In a few cases there may be a genuine bug, but more often than not the test just needs to be fixed. As the team automates more tests, it spends more and more time on this test maintenance, as shown in the graph below.
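To make the brittleness concrete, here is a minimal, hypothetical sketch in plain Python (no real browser or test framework involved). A page is modelled as a simple list of elements, and the “selector” picks by position, the way a generated XPath like `//div/button[2]` often does. A harmless layout change then makes the same selector pick the wrong element.

```python
# Hypothetical illustration of selector brittleness (toy model, not a real
# browser). The "selector" finds an element by tag and position, like the
# positional XPath expressions many recorders generate.

def find_by_position(page, tag, index):
    """Return the index-th element with the given tag (1-based), like button[2]."""
    matches = [label for t, label in page if t == tag]
    return matches[index - 1] if index <= len(matches) else None

# Version 1 of the UI: the checkout button is the 2nd button.
page_v1 = [("button", "Search"), ("button", "Buy now")]
assert find_by_position(page_v1, "button", 2) == "Buy now"  # test passes

# Version 2: a designer adds a "Wishlist" button above it. Nothing is broken
# for users, but the positional selector now targets the wrong element.
page_v2 = [("button", "Search"), ("button", "Wishlist"), ("button", "Buy now")]
print(find_by_position(page_v2, "button", 2))  # "Wishlist", not "Buy now"
```

A stable, dedicated attribute (such as a test ID) is more robust, but in practice many scripts end up with positional or style-dependent selectors exactly like this.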

Your team has to spend an increasing share of time on test maintenance

Factoring in maintenance

Eventually, the team can end up spending half their time on test maintenance. If you add this on to the idealised graph, you can see you rapidly hit a problem: the overall work required significantly exceeds the team’s capacity. This is known as test debt. Teams in test debt will struggle badly.
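The arithmetic behind these graphs can be sketched in a few lines. All the numbers below are made-up, illustrative assumptions, not Functionize data: each automated test costs some analysis time per cycle, a fraction of tests break with each release and must be fixed, and the total workload grows with the test count until it crosses the team’s fixed capacity.

```python
# Illustrative model of test debt (all constants are made-up assumptions).
# Each release cycle the team has CAPACITY hours. Every automated test
# costs analysis time, and a fraction of tests break and need fixing.

CAPACITY = 100.0            # hours of QA time per cycle (assumed)
CREATE_COST = 2.0           # hours to automate one new test (assumed)
ANALYSIS_COST = 0.1         # hours to review one test's results (assumed)
BREAK_RATE = 0.3            # fraction of tests broken by each release (assumed)
FIX_COST = 2.0              # hours to repair one broken test (assumed)
NEW_TESTS_PER_CYCLE = 10    # tests the team tries to add each cycle (assumed)

tests = 0
for cycle in range(1, 21):
    maintenance = tests * BREAK_RATE * FIX_COST
    analysis = tests * ANALYSIS_COST
    creation = NEW_TESTS_PER_CYCLE * CREATE_COST
    workload = creation + analysis + maintenance
    if workload > CAPACITY:
        print(f"Cycle {cycle}: workload {workload:.0f}h exceeds "
              f"capacity {CAPACITY:.0f}h, the team is in test debt")
        break
    tests += NEW_TESTS_PER_CYCLE
```

With these assumed numbers the team hits the wall around cycle 13: maintenance and analysis on the existing suite leave no room to keep adding tests.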

When you take test maintenance into account, you actually see a backlog of testing

The impact of test debt

Test debt is one of the most damaging problems your QA team faces. When you are in test debt, you have to make some really tough decisions. You are stuck between a rock and a hard place and need to sacrifice existing maintenance or new coverage:

  • Focus your attention on fixing the broken tests and just rely on manual testing for all your new features. That works OK if you have spare manual test capacity. The problem is, it’s probably not sustainable, since every new feature will break more and more tests.
  • Continue to increase test coverage and ignore some of the broken tests. That is a massive gamble though, since there was a good reason to automate those tests!

Worse still, you eventually start to see your test coverage decline, as the graph on the right shows. This happens because as you develop new features you have to keep developing new tests. But your team can’t automate these new tests while they’re fixing existing tests. So the team grows dependent on manual testing, which is another huge time sink.

Eventually, QA teams find they have become the roadblock to releasing new features. Seeing this problem made me realise that there has to be a better way to do things. To me, the obvious answer was to use machine learning to try and reduce the need for test maintenance. And so I founded Functionize.

How Functionize solves test debt

Functionize has been an AI company from day one. Our approach has been to apply machine learning and other AI approaches to solve the problems with test debt. Specifically, we tackle it in three ways:

  • Reducing test maintenance. Every time you run a test on the Functionize platform, it records a vast amount of data. This includes details of objects on the page, hidden objects, calculated CSS, API calls, timings, and more. We use this data to create a detailed ML model of your entire application. In turn, this allows us to use Smart Element Recognition to avoid the need for routine maintenance. In effect, our model works out what changed in the UI, just like your manual testers would. If a “Buy now” button moves, changes style, and gets relabelled “Add to cart”, a human still knows it’s the same button. Well, so does our system. That means far fewer tests fail just because your UI was updated.
  • Speeding up test creation. The other way to increase test coverage is to reduce the time needed to create each test. This will mean your team can use the same test capacity to create more tests. Our Architect smart recorder allows you to create AI-powered tests in just minutes. The tests automatically work on any browser and platform. By contrast, creating a test script can take hours, and then it has to be refined for each platform or browser you want to test. All this is only possible because we combine different AI approaches including deep learning, natural language processing, and computer vision.
  • Simplifying test analysis. One of the best things with AI is the ability to apply techniques like computer vision. This allows us to simplify test analysis no end through our visual testing approach. We record screenshots for every test step and every single test run. We are able to use these to highlight parts of your UI that have changed more than expected since the last run. You can also choose to validate specific parts of the screen (for instance, your logo). Alternatively, you can do an in-depth visual verification of the entire UI, including CSS as well as visual elements. Overall, this streamlines test analysis and makes it accessible to everyone.
Functionize cuts test maintenance by 80% and speeds up test analysis. You can significantly increase the number of tests you automate before capacity becomes an issue.
Functionize solves test debt, and brings many other benefits that boost QA productivity
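Functionize’s model is far more sophisticated than anything that fits in a blog post, but the core idea behind Smart Element Recognition, matching an element by many weak signals instead of one brittle selector, can be sketched in plain Python. Everything below (the attributes, the weights, the scoring) is a made-up illustration, not the actual Functionize algorithm.

```python
# Hypothetical sketch of multi-signal element matching (not Functionize's
# actual ML model). Instead of one selector, each element is described by
# several attributes, and the best-scoring fuzzy match wins.
from difflib import SequenceMatcher

def similarity(a, b):
    """Text similarity in [0, 1] using difflib's ratio (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_element(target, candidates):
    """Pick the candidate most similar to the element the test expects.
    The weights (0.5 / 0.3 / 0.2) are arbitrary illustrative choices."""
    def score(cand):
        return (0.5 * similarity(target["text"], cand["text"])
                + 0.3 * similarity(target["css_class"], cand["css_class"])
                + 0.2 * (1.0 if target["role"] == cand["role"] else 0.0))
    return max(candidates, key=score)

# The recorded test expects this button...
expected = {"text": "Buy now", "css_class": "btn btn-primary", "role": "button"}

# ...but after a redesign the label and styling have both changed.
new_ui = [
    {"text": "Search", "css_class": "btn btn-light", "role": "button"},
    {"text": "Add to cart", "css_class": "btn btn-cta", "role": "button"},
    {"text": "Terms of service", "css_class": "footer-link", "role": "link"},
]

best = match_element(expected, new_ui)
print(best["text"])  # the relabelled checkout button, "Add to cart"
```

Even though no single attribute matches exactly, the combination of class similarity and role still identifies the renamed button, which is the intuition behind avoiding routine maintenance.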


Overall, Functionize allows you to cut test debt, increase test coverage, and keep your team focused on delivering better products, faster. All this is explained in our new infographic - download it now.
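As a final illustration, the visual testing idea described above can also be sketched in a few lines. The toy example below (plain Python, with made-up “screenshots” as pixel grids rather than real images) computes the fraction of pixels that changed between two runs and flags the step when it exceeds a threshold; the threshold and grid are illustrative assumptions, not how Functionize actually works.

```python
# Toy illustration of visual diffing (not Functionize's implementation).
# A "screenshot" is a grid of pixel values; we measure how much of the
# screen changed between the baseline run and the latest run.

def changed_fraction(baseline, latest):
    """Fraction of pixels that differ between two equal-sized grids."""
    total = diffs = 0
    for row_a, row_b in zip(baseline, latest):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if px_a != px_b:
                diffs += 1
    return diffs / total

THRESHOLD = 0.05  # flag the step if over 5% of the screen changed (assumed)

baseline = [[0] * 8 for _ in range(8)]   # 8x8 all-"white" screen
latest = [row[:] for row in baseline]
latest[0][0] = latest[0][1] = 1          # a 2-pixel change, e.g. a tweaked icon

frac = changed_fraction(baseline, latest)
print(f"{frac:.1%} of pixels changed")   # 2 of 64 pixels: 3.1%
print("flag for review" if frac > THRESHOLD else "within tolerance")
```

A real system compares regions rather than raw pixels and learns what “changed more than expected” means, but the principle, quantify the visual delta and only surface significant ones, is the same.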