New Year's Resolutions: how to set realistic targets for test automation

January 6, 2021
Tamas Cser

New Year’s Resolutions should be achievable. But setting good targets is always hard. So how do you make sure your test automation targets are realistic?

The New Year is when many teams set their annual targets. But all too often, these targets are as unrealistic as our New Year’s Resolutions. Here, we look at three areas where we often see unrealistic targets in test automation. Then we explain how leveraging smart test automation makes even the toughest targets feasible.

Test coverage

Test coverage is simply a measure of what proportion of your tests are automated. It’s analogous to the concept of code coverage in software development.

Test coverage (%) = (number of automated tests ÷ total number of tests) × 100
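
As a quick worked example (the numbers here are invented for illustration), the calculation is trivial to express in code:

```python
def test_coverage(automated_tests: int, total_tests: int) -> float:
    """Test coverage as a percentage, per the formula above."""
    return automated_tests / total_tests * 100

# e.g. 250 automated tests in a 400-test suite:
print(test_coverage(250, 400))  # 62.5
```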

So, a higher percentage must always be better, right? Sadly, this is a common misconception. The thing is, not all tests are suitable for automation. Tests that rely on human interaction, or exploratory tests aimed at diagnosing a recently reported bug, are poor candidates. Moreover, it is only worth automating a test if the effort spent doing so is less than the effort saved. If you will only run a test once a year, it isn’t worth automating.

As a rule of thumb, if fewer than half of your tests are automated, you should probably try to automate more of them. But if you are already at 70-80%, you may not gain much by automating more. So, how can you set a realistic target? There are two other factors to consider: first, how long it takes to create each new test script; and second, how long it takes to make each script work cross-browser and cross-platform. Creating a robust test script that works cross-browser and cross-platform can easily take two days of effort, so a test automation engineer may struggle to create more than 100 good tests a year. And that is quite apart from the time they spend on routine test maintenance.
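
One way to pressure-test a coverage target is a back-of-the-envelope break-even check, as in this sketch (the helper function and the example numbers are illustrative, not a standard formula):

```python
def worth_automating(build_hours: float,
                     manual_hours_per_run: float,
                     runs_per_year: float) -> bool:
    """Rough break-even check: automate only if a year of saved
    manual runs repays the effort of building the script."""
    return manual_hours_per_run * runs_per_year > build_hours

# A robust cross-browser script costs roughly 2 days (16 hours) to build:
print(worth_automating(16, manual_hours_per_run=0.5, runs_per_year=50))  # True
print(worth_automating(16, manual_hours_per_run=0.5, runs_per_year=1))   # False: run once a year
```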

Reducing routine maintenance

So, what is the issue with routine test maintenance? Well, in a nutshell, it’s wasted effort. Test maintenance is needed when a test fails because of some change in your UI or site code. In other words, test maintenance is due to a bug in your test, not your application. The problem is, test automation frameworks are often really brittle. Even small changes can have a massive impact on your tests. This results in engineers having to debug every test to find and correct the problems. That is effort they could use for creating more automated tests or updating old tests that may no longer be useful.

So, what is a good target for reducing routine maintenance? Well, the key here is to be realistic. Test maintenance is triggered by changes and updates to your site code. The more features you release and UI changes you make, the more test maintenance you will need. And since it’s not realistic to expect your company to release fewer features, this is a problem you need to tackle.  

The important thing is to understand why site changes trigger test maintenance. It comes down to something called selectors: these are how your test automation framework picks out the correct elements to interact with during a test. If you use good selectors, you will see less test maintenance, but creating the tests will be harder and slower. Ultimately, this is why the best Developers in Test command such high salaries.
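
To make that concrete, here is a minimal sketch in Selenium for Python (the page URL and attribute names are hypothetical). The first selector is welded to the page structure and breaks the moment the layout shifts; the second is keyed to a stable test attribute and survives cosmetic changes:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page

# Brittle: tied to the exact DOM structure. Inserting a single
# <div> anywhere in this chain breaks the test and triggers maintenance.
buy_button = driver.find_element(
    By.XPATH, "/html/body/div[2]/div[1]/form/div[3]/button[1]"
)

# Robust: keyed to a dedicated, stable test attribute that survives
# layout and styling changes, but it requires developer cooperation
# to add (and maintain) the attribute in the first place.
buy_button = driver.find_element(
    By.CSS_SELECTOR, "[data-testid='buy-button']"
)
```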

Test effectiveness

The final metric of interest is test effectiveness. One problem with automated testing is that it is relatively simplistic: an automated test will only ever check the aspects of your site that you explicitly tell it to. Realistically, that means large parts of your UI go untested, because it is simply not practical to script a check for every detail. This is one area where manual testers do better. A skilled human is likely to spot a glitch in your UI even if it isn’t part of the test they are currently conducting.

There is no doubt that you should aim to write more effective tests. But measuring effectiveness is hard, and improving it is even harder. Certainly, you shouldn’t use any metric based on the number of bugs detected. You could measure the proportion of your test suite that is run for each release, but many teams have test suites containing large numbers of repetitive or outdated tests. In short, you need to be really careful when setting any targets relating to test effectiveness.

Realistic New Year’s Resolutions for Testing

Here at Functionize, we have developed a range of smart test automation tools. These are all based on applying machine learning to improve test automation. Let’s look at how this can help with the three metrics highlighted above.

Test creation

We offer three different ways to create tests. All of these result in smart automated tests that automatically work cross-platform and cross-browser. They are also designed so that even people without automation skills can create robust tests.

  • Architect is like a test recorder on steroids. Creating a test is as simple as navigating through the site. The underlying system records millions of data points and learns the true intent behind the test.
  • NLP test creation allows you to upload a set of test cases written in plain English. It then uses natural language processing to convert these into smart automated tests. You can then run these tests from anywhere in the Functionize Test Cloud.
  • Autonomous test creation uses real production data to learn from how your users interact with your application. It then creates tests to cover any gaps it identifies in your test coverage.

Overall, these tools make it much easier and quicker to automate your tests. In fact, we find that teams can create tests 11x faster with our tools. So, aiming for 75% test coverage may be more realistic than you thought.

Test maintenance

Every time a Functionize test runs, our underlying system records millions of different data points. It uses these to constantly refine its model of your application. This model combines aspects of machine learning, computer vision, and natural language processing. The system knows the actual intent behind your tests. It doesn’t rely on traditional selectors to choose what element to interact with. Instead, it uses a complex fingerprint to make sure it selects the correct thing. If something changes, the system uses Dynamic Learning to update the test. As a result, it isn’t thrown by routine UI changes or even most changes to the underlying site code. This means routine test maintenance is effectively eliminated at a stroke!
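
For intuition only, here is a minimal sketch of the general idea behind fingerprint-based matching. The attributes and weights are entirely invented; Functionize’s actual system learns its model from millions of data points with machine learning rather than using a fixed hand-weighted score like this:

```python
# Simplified illustration of multi-attribute element matching.
# Not Functionize's actual algorithm: attributes and weights here
# are invented for the example.

FINGERPRINT = {          # recorded when the test was created
    "tag": "button",
    "text": "Buy now",
    "id": "buy-btn",
    "class": "btn btn-primary",
}

WEIGHTS = {"tag": 1.0, "text": 3.0, "id": 2.0, "class": 0.5}

def score(candidate: dict) -> float:
    """Score how well a live DOM element matches the fingerprint."""
    return sum(
        WEIGHTS[attr]
        for attr, expected in FINGERPRINT.items()
        if candidate.get(attr) == expected
    )

def best_match(candidates: list[dict]) -> dict:
    """Pick the closest element: even if the id or class changed in a
    release, the remaining attributes usually still identify it."""
    return max(candidates, key=score)

candidates = [
    {"tag": "button", "text": "Buy now", "id": "buy-now-btn", "class": "btn"},
    {"tag": "button", "text": "Cancel", "id": "cancel-btn", "class": "btn"},
]
print(best_match(candidates)["text"])  # "Buy now", despite the changed id
```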

Visual testing

The Functionize system records screenshots before, during, and after each and every test step. It also records a wealth of data relating to things like computed CSS values, element IDs, the relationship between elements, and whether elements regularly change or not. It uses all this data to check for unexpected changes in the UI between test runs. If it finds something, it will flag this and highlight the change on the screenshot. If this change was expected, you get a set of SmartFixes to choose from. These ensure the test is updated for the next time. But if it was an unexpected change, then you can report it as a bug to your developers.
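
As a rough analogy for just the screenshot-comparison step, here is a bare-bones sketch using the Pillow imaging library. It ignores the CSS, element, and history data described above, and the function name is ours, not part of any Functionize API:

```python
# Bare-bones screenshot comparison with Pillow. Assumes both
# screenshots have the same dimensions.

from PIL import Image, ImageChops

def ui_changed(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")

    diff = ImageChops.difference(baseline, current)
    bbox = diff.getbbox()  # bounding box of all changed pixels, or None

    if bbox is None:
        return False  # pixel-identical: nothing to flag

    # Flag the changed region so a human can decide whether it is an
    # expected update or a bug worth reporting.
    print(f"UI changed inside region {bbox}")
    return True
```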

The upshot is, Functionize tests will effectively check your whole UI every time they run. This makes them far more effective than traditional test scripts. Moreover, you benefit from tests that can work on any browser and platform with equal accuracy.

How can I learn more?

If you want to learn how Functionize can make your test automation New Year’s Resolutions more realistic, sign up for a free trial today.