Test automation is all about creating (and running) as many test scripts as possible. At least, that's the simplistic view. In reality, the more tests your team automates, the more tests they will need to maintain. Eventually, most teams find themselves drowning in test debt because of this maintenance burden. Here, we explain how this comes about and how using AI can help you avoid the problem altogether.
Test automation is an essential part of the modern software development process. Without it, you simply cannot release fast enough to keep up with your competitors. Moreover, modern applications are so complex that manual testing becomes prohibitively expensive in terms of time and resources. Fortunately, test automation allows you to run many of your tests autonomously, 24/7. This is particularly useful for regression testing and can help speed up delivery significantly.
Test automation is far from new. Even before Selenium appeared in 2004, companies were building their own custom test automation suites. APIs have long been tested automatically, and unit testing has always been a best practice for developers. However, Selenium was the first general-purpose framework for UI testing. With its advent, companies could start to automate their regression suites for the first time.
Selenium tests require you to create a test script. This tells the Selenium WebDriver exactly what to do in the UI: find a specific button and click it, for instance, or enter a given text string in a form. The script then checks whether the result is as expected, such as whether a new page has loaded or an item has been added to a shopping cart. The problem is, creating these test scripts is often slow and painstaking, at least to start with. You need to test and debug the script and then adapt it to work on every browser and device that the application may run on.
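The basic find/act/assert shape of such a script can be sketched as follows. This is a hypothetical, simplified example: it parses a static HTML snippet with Python's standard library so it runs without a browser, whereas a real Selenium test would perform the same steps through the WebDriver API (e.g. `driver.find_element`). The element IDs and page content are invented for illustration.

```python
# Hypothetical sketch of what a UI test script does. A real Selenium test
# would drive a live browser; here we parse static HTML with the standard
# library so the example is self-contained.
from xml.etree import ElementTree as ET

PAGE = """
<html><body>
  <form id="login">
    <input id="username" type="text"/>
    <button id="submit-btn">Log in</button>
  </form>
</body></html>
"""

def find_by_id(root, element_id):
    """Locate an element by its id attribute, much like a WebDriver
    find_element(By.ID, ...) call."""
    for el in root.iter():
        if el.get("id") == element_id:
            return el
    return None

root = ET.fromstring(PAGE)

# Step 1: locate the element the script wants to interact with.
button = find_by_id(root, "submit-btn")
assert button is not None, "selector broke: 'submit-btn' not found"

# Step 2: after the (simulated) interaction, verify the expected outcome,
# e.g. that the button's label matches what the test expects.
assert button.text == "Log in"
```

The fragile part is step 1: the script only works as long as the selector it was written with still matches the page.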
There are two main ways to speed up test creation: test recorders, which capture your interactions with the application and turn them into scripts, and AI-based tools that generate the scripts for you.
Test automation only really helps if enough of your tests are actually automated. Not all tests are suitable for automation, but typically all your regression tests are. The usual way to measure progress is test automation coverage: simply the proportion of all UI tests that have been automated. Many QA managers believe the aim should be to get this number as high as possible. But as we will see, that may be an over-simplistic view.
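The metric itself is just a ratio. A minimal sketch, with a hypothetical helper name and invented numbers:

```python
def automation_coverage(automated_tests: int, total_ui_tests: int) -> float:
    """Test automation coverage: the proportion of all UI tests
    that have been automated (hypothetical helper)."""
    if total_ui_tests == 0:
        raise ValueError("no UI tests defined")
    return automated_tests / total_ui_tests

# e.g. 450 automated tests out of 600 UI regression tests:
print(f"{automation_coverage(450, 600):.0%}")  # prints "75%"
```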
One of the biggest problems with any Selenium-based test is how fragile the underlying script is. This is true whether it was written manually, created with a test recorder, or generated under the hood by an AI wrapper. The issue is that the script needs to know exactly which elements to select in the UI each time. That may sound simple, but these coded selectors (like element ID or XPath) can change every time your UI or application is updated. Each such change often triggers a series of test failures. These are false positives: they don't indicate an actual bug, but rather that the test no longer works. The upshot is that one of your test engineers has to try to fix the test so it passes again.
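The fragility is easy to demonstrate. The sketch below (hypothetical markup, stdlib-only so it runs without a browser) shows a positional XPath-style selector passing against today's page, then silently breaking after a redesign wraps the button in an extra `<div>`, while a selector keyed on a stable `id` attribute survives:

```python
# Why coded selectors are fragile: a hypothetical before/after redesign.
from xml.etree import ElementTree as ET

BEFORE = "<div><form><button id='buy'>Buy</button></form></div>"
# After a redesign, the button is wrapped in an extra <div>:
AFTER = "<div><form><div class='wrap'><button id='buy'>Buy</button></div></form></div>"

# A positional XPath-style selector encodes the old layout...
XPATH = "./form/button"
assert ET.fromstring(BEFORE).find(XPATH) is not None  # passes today
assert ET.fromstring(AFTER).find(XPATH) is None       # breaks after the redesign

# ...while a lookup keyed on a stable attribute still finds the button.
def find_by_id(root, element_id):
    return next((e for e in root.iter() if e.get("id") == element_id), None)

assert find_by_id(ET.fromstring(AFTER), "buy") is not None
```

The failure above is a false positive in exactly the sense described: the buy button still exists and still works; only the selector is wrong.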
This process has come to be called test maintenance. It is often accepted as an unavoidable cost of automation, but it is far more damaging than it looks. If you talk to teams with a large number of automated tests, they will tell you that test maintenance is the single biggest time sink for their test engineers. Indeed, it is not unusual for test engineers to spend over 60% of their time on this one task. This maintenance trap ends up slowing down the pace of automation and may even cause it to stall. The end result is known as test debt.
Test debt happens when your team is unable to complete all the required maintenance before the next release or update happens. As a result, the team is constantly chasing its tail trying to create new tests, fix old tests, and analyze actual bugs. At this point, QA managers often turn to solutions that help speed up test creation. After all, if you remove one task from the test engineers, that must help matters, right?
Sadly, the reality is that you end up in a catch-22. Creating more tests can actually make matters worse, because maintenance, not creation, dominates the time budget. You may save 90% of the time needed for test creation, but you have just created a whole batch of new tests that will break the next time the UI changes!
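A back-of-the-envelope calculation makes the catch-22 concrete. All the numbers below are invented for illustration; the point is only that when maintenance recurs every release, speeding up one-off creation barely moves the total:

```python
# Hypothetical effort model: creation is paid once per test,
# maintenance is paid per test, per release.
HOURS_TO_CREATE = 2.0    # hours to create one test
HOURS_TO_MAINTAIN = 1.0  # hours to maintain one test, per release

def total_effort(n_tests, releases, creation_speedup=1.0):
    creation = n_tests * HOURS_TO_CREATE / creation_speedup
    maintenance = n_tests * HOURS_TO_MAINTAIN * releases
    return creation + maintenance

# Over 10 releases, a 10x faster creation tool barely dents the total,
# and using it to double the test count makes things worse:
baseline = total_effort(500, 10)                        # 1000 + 5000 = 6000 h
faster = total_effort(500, 10, creation_speedup=10)     # 100 + 5000 = 5100 h
more_tests = total_effort(1000, 10, creation_speedup=10)  # 200 + 10000 = 10200 h
assert more_tests > baseline > faster
```

In this toy model, a 10x creation speedup saves only 15% of total effort, while doubling the suite with the faster tool costs 70% more than the baseline. Only reducing the per-release maintenance term changes the picture.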
At this point, teams can reach the point of no return. On the one hand, they can keep on top of adding tests for new features. On the other, they can try to keep up with the maintenance of existing tests. Sadly, they can't usually do both. They are stuck between the proverbial rock and a hard place. Usually, the only way out is to start skipping tests in the regression suite and pass more testing on to the manual testers.
Here at Functionize, we take a different approach. We have always focused our effort on eliminating test debt through the smart application of AI. Our platform lets you easily create automated tests that work on almost any browser and platform. As the tests are created, the underlying AI system builds up a detailed model of your application. Over time, this model gets more and more reliable. As a result, you will see test maintenance cut by over 80%. In turn, that means test debt is eliminated and automation coverage can grow at a healthy pace. So, while other approaches may appear to be faster for test creation, overall, your team will see a far greater ROI with Functionize. To find out more, book a demo today.