The test coverage problem
Avoiding the catch-22 of test automation
Every enterprise wants to maximize its test coverage. But the more tests you have, the more time you spend on test maintenance instead of writing new tests. How can you avoid this catch-22? In this blog, we show how the Functionize intelligent test agent solves this problem.
Test automation is great! It has undoubtedly transformed the world of software and acted as an enabler for innovation. But test automation itself has lagged behind the curve. As a result, it is now increasingly seen as a blocker to further innovation. The reasons for this are complicated, but they boil down to the fact that most automated tests are brittle. Even minor changes to your site can break all your tests and trigger a need for test maintenance. As your test coverage increases, so does this need for test maintenance. Ultimately, you can reach the point where your team spends all its time on test maintenance and has no time to create new tests.
The test maintenance issue
Let’s explore test maintenance in a little more detail. Test maintenance is a broad term for the process of keeping your test suite updated as your product evolves. There are three ways in which most UIs evolve:
- Style changes, where you update the look and feel of your site (typically with new CSS).
- Layout changes, where you move elements around on the page (e.g. you might move the login button).
- Functional changes, where the actual underlying code has changed.
The result is that every time your product team redesigns the UI, most of your tests will fail. And these failures are all spurious – that’s to say, they are all false positives rather than actual bugs. This means that most (if not all) of your tests have to be re-recorded or re-scripted. Often, this means that a test team ends up spending half its time on test maintenance rather than actual testing.
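The brittleness problem is easiest to see with a concrete example. The sketch below uses hypothetical markup (not a real site) and Python's standard-library XPath subset to show how a selector hard-coded to the page layout breaks after a redesign, while a selector anchored on a stable attribute survives:

```python
# Illustration of selector brittleness on hypothetical markup.
import xml.etree.ElementTree as ET

PAGE_V1 = "<body><div><button id='login'>Log in</button></div></body>"

# After a redesign, the same button moves into a nested header bar.
PAGE_V2 = ("<body><header><div><div>"
           "<button id='login'>Log in</button>"
           "</div></div></header></body>")

BRITTLE = "./div/button"            # hard-coded position in the DOM tree
ROBUST = ".//button[@id='login']"   # anchored on a stable attribute

for name, page in [("v1", PAGE_V1), ("v2", PAGE_V2)]:
    root = ET.fromstring(page)
    print(name,
          "brittle found:", root.find(BRITTLE) is not None,
          "robust found:", root.find(ROBUST) is not None)
# v1: both selectors find the button; v2: only the robust one does.
```

Of course, in real suites even "robust" selectors break – ids get renamed, attributes disappear – which is why selector maintenance never fully goes away with scripted approaches.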
Why is this an issue?
So, you might ask, why is this such an issue? Why does it become a blocker for increased test coverage? Well, your test automation team has a number of tasks they need to perform: planning, test creation and debugging, test execution and analysis, and test maintenance. All these tasks take time and resources. And the last two tasks grow in proportion to the number of tests you have. The more tests you have, the more effort it takes to analyze the results of each test run. Every failure has to be checked to see if it’s a real failure. And if it isn’t, you have to work out what caused it, update the test and rerun it.
In effect, this means 100 tests take 100x longer to maintain than 1 test. And actually, it’s worse than that. The more complex your test suite becomes, the greater the number of dependencies. Added to that is the issue of indirect test failures. This is where your test fails several steps after an incorrect action was taken, such as adding the wrong product to a shopping cart. This all means you eventually run out of resources and have to stop adding new tests (or have to hire more test engineers).
How can I increase test coverage?
Clearly, the obvious solution is to reduce the amount of time needed for test analysis and test maintenance. But that can be easier said than done. If you stick with dumb or semi-intelligent test frameworks, you really can do very little about it. You can ask your developers to collaborate more with your testers to try and reduce issues with selector maintenance. You can try to get more efficient at analyzing test failures by working with the product team to predict what issues will come up following redesigns. Or you can use a proper intelligent test agent.
What is an intelligent test agent?
Put simply, an intelligent test agent acts as a perfect test automation engineer, tirelessly working around the clock without complaint to reduce the time and effort needed to analyze and maintain your tests. The Functionize intelligent test agent combines multiple AI approaches along with proprietary techniques. At the heart of this is our Adaptive Event Analysis (AEA™) engine. This cuts test maintenance time by an order of magnitude.
When you create a Functionize test, the system starts the process of building a machine learning model for that test. This model learns exactly what the test is trying to achieve. It records a huge range of parameters to construct the model. These include visual attributes such as the size of the element, its location on the page, previous sizes and locations, element visibility, and visual configurations. It also includes more traditional selector attributes such as XPaths, CSS selectors, and parent-child relationships. Importantly, it is able to locate elements even if they are in child DOMs. As a result, whenever you update your style sheet or move an element on the page, our system is able to work out which is the correct selector.
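In spirit, this kind of multi-attribute matching can be sketched as follows. To be clear, this is our own simplified illustration, not Functionize's actual model: the attribute names, weights, and scoring function are all invented for the example. The key idea is that the recorded "fingerprint" combines many weighted signals, so no single changed attribute can break element location on its own:

```python
# Simplified sketch of multi-attribute element matching (illustrative
# weights and attributes - not the real AEA(TM) engine).
WEIGHTS = {"id": 5.0, "text": 3.0, "css_class": 2.0, "position": 1.0}

def similarity(fingerprint, candidate):
    """Weighted similarity between a recorded fingerprint and a live element."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        if attr == "position":
            # Nearby positions still earn partial credit.
            fx, fy = fingerprint[attr]
            cx, cy = candidate[attr]
            distance = abs(fx - cx) + abs(fy - cy)
            score += weight / (1.0 + distance / 100.0)
        elif fingerprint.get(attr) == candidate.get(attr):
            score += weight
    return score

def locate(fingerprint, candidates):
    """Pick the live element most similar to the recorded one."""
    return max(candidates, key=lambda c: similarity(fingerprint, c))

# Recorded at test-creation time:
recorded = {"id": "login", "text": "Log in", "css_class": "btn", "position": (900, 40)}

# After a redesign: the id and text survived, but styling and position changed.
page = [
    {"id": "signup", "text": "Sign up", "css_class": "btn-new", "position": (700, 40)},
    {"id": "login",  "text": "Log in",  "css_class": "btn-new", "position": (40, 600)},
]
print(locate(recorded, page)["id"])  # → login
```

Even though the login button moved across the page and lost its old CSS class, the surviving signals (id and text) outweigh the ones that changed, so the right element is still found.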
Root cause analysis
Where the test failure happens several steps later, our root cause analysis system will find the likely problem. RCA uses a smart rule-based system that understands your tests. It looks for the most frequent triggers of failures, including the wrong choice of selectors and over-precise comparisons that fail when any data changes. The ML model is then able to work out what the correct action or comparison should have been. It can even go one step further and test all possible solutions to find the correct one. Finally, you are offered the chance to confirm the update with a single click.
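The core trick of root cause analysis – walking back from the failing step to the earliest point where the test's state diverged from expectations – can be shown with a toy example. This is our own illustration, not Functionize's RCA implementation; the step names and state representation are invented:

```python
# Toy root-cause search: find the earliest step whose resulting state
# diverged from what the test expected (illustrative, not the real RCA).
def find_root_cause(steps, expected_states):
    """steps: list of (name, actual_state); expected_states: parallel list.

    Returns (index, name) of the earliest divergence, or None if all match.
    """
    for i, ((name, actual), expected) in enumerate(zip(steps, expected_states)):
        if actual != expected:
            return i, name
    return None

# The assertion fails at the last step, but the mistake happened earlier.
steps = [
    ("open_shop",    {"cart": []}),
    ("add_to_cart",  {"cart": ["blue-shirt"]}),  # wrong product added here
    ("checkout",     {"cart": ["blue-shirt"]}),
    ("verify_total", {"cart": ["blue-shirt"]}),  # test only fails here
]
expected = [
    {"cart": []},
    {"cart": ["red-shirt"]},
    {"cart": ["red-shirt"]},
    {"cart": ["red-shirt"]},
]
print(find_root_cause(steps, expected))  # → (1, 'add_to_cart')
```

The failing assertion is three steps downstream of the actual mistake; pointing the engineer at `add_to_cart` rather than `verify_total` is what saves the debugging time.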
How else can we help?
At Functionize, we don’t just save time on test maintenance; we also make it far quicker and easier to create new tests. This is thanks to our innovative use of natural language processing. The result is that our Adaptive Language Processing (ALP™) engine is able to take tests written in plain English and use these to create your tests. We have defined a number of keywords to capture standard test actions. For instance, you can add a step “SCROLL to the bottom of the page and VERIFY the ‘next’ button shows”. The system also recognizes unstructured text such as “The sidebar should be 10% of the page width”. If a test step needs data, you can pass that in, and you can specify the expected outcome for each step.
The system converts the English test plan into a test by the process of modeling explained above. This modeling is most efficient if you provide at least 50 tests at a time, and it is currently quicker at processing structured text than free text. Once the modeling is complete, the system gives you the option to replay and verify that each step is correct. Writing user journeys in English is not only far quicker, but it’s also easier. Now, anyone on your team can help increase test coverage by writing tests. In fact, because the ALP™ engine works best when you feed in large numbers of new tests to model, the more new tests you write, the better!
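To make the keyword idea concrete, here is a toy parser for steps in that style. The keyword list and the regex grammar are entirely our invention for illustration; the real ALP™ engine is built on machine learning models, not regular expressions:

```python
# Toy parser for keyword-style test steps (hypothetical grammar, for
# illustration only - the real engine uses NLP, not regexes).
import re

KEYWORDS = ["SCROLL", "VERIFY", "CLICK", "ENTER"]
_KW = "|".join(KEYWORDS)
# Each keyword captures the text up to the next keyword (or end of line).
STEP_RE = re.compile(r"\b(" + _KW + r")\b\s*(.*?)(?=\b(?:" + _KW + r")\b|$)",
                     re.DOTALL)

def parse_step(line):
    """Split one plain-English step into (keyword, argument) pairs."""
    actions = []
    for keyword, argument in STEP_RE.findall(line):
        # Drop the trailing connective "and" left before the next keyword.
        argument = re.sub(r"\s+and\s*$", "", argument.strip())
        actions.append((keyword, argument))
    return actions

step = "SCROLL to the bottom of the page and VERIFY the 'next' button shows"
print(parse_step(step))
# → [('SCROLL', 'to the bottom of the page'), ('VERIFY', "the 'next' button shows")]
```

A parser like this handles the structured, keyword-led steps; the harder part – unstructured sentences like “The sidebar should be 10% of the page width” – is exactly where statistical NLP takes over from anything rule-based.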
Can you give some real numbers?
We compared the time and effort needed to create and maintain 100 tests using Selenium versus Functionize. The results are pretty clear. In a 6-month project with 100 tests, you will need to invest ~550 man-hours plus 45 days for maintenance using traditional automation. By contrast, with Functionize this drops to 50 man-hours, with just 6.5 days of maintenance. Or, turning it around: one test engineer can just about manage to create and maintain 100 tests using traditional automation, whereas with Functionize, she will have the capacity to create and maintain almost 10x as many tests.
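A quick back-of-the-envelope check shows where the "almost 10x" comes from, if we convert the maintenance days to hours (the eight-hour working day is our assumption, not a figure from the comparison):

```python
# Back-of-the-envelope capacity comparison, assuming 8-hour days
# (the day length is our assumption).
HOURS_PER_DAY = 8

traditional = 550 + 45 * HOURS_PER_DAY   # creation hours + maintenance days
functionize = 50 + 6.5 * HOURS_PER_DAY

print(traditional, functionize, round(traditional / functionize, 1))
# 910 hours vs 102 hours → roughly a 9x difference in capacity
```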
Everyone wants to increase test coverage and automate as much of their testing as possible. However, as we have seen, ever-increasing test maintenance means there is a limit to how many tests each test engineer can manage. In order to increase test coverage, you’re left with two options. Employ more (very expensive) test automation engineers. Or turn to Functionize. Our intelligent test agent reduces the need for maintenance, opens up test creation to every member of your team, and customers have reported a 6-fold increase in productivity.