Software testability – what it is and how to improve it

Testing is vital if you are to release bug-free software. But how easy is your software to test? Here we explore software testability and how to improve it

May 14, 2019
Tamas Cser

How to make your software more testable

Testing is vital if you are to release bug-free software. But how easy is your software to test? Here we explore the issue of software testability and give you advice on how to make your UI more testable.

Testing is a critical stage of the software development lifecycle. The aim is to release bug-free, performant software that won’t run up a fortune in backend running costs. Clearly, making this process more efficient and effective will save you time and effort, and in the long run, will improve your profitability. This is one of the main drivers behind the switch to test automation. However, one important factor is often overlooked – software testability. In this blog, we will look at what software testability is and offer some tips and advice on how to improve the testability of your software.

Background

Software testing covers a wide range of activities, from unit tests for specific functions through to user-acceptance tests for finished products. At every stage, you should strive for 100% test coverage, meaning every moving part of the code is exercised by a test. For unit tests, full coverage is hard but definitely achievable. For complete modern systems, however, getting anywhere near it is extremely challenging, and potentially impossible, simply because their complexity creates an effectively infinite number of paths through the application.

But this shouldn’t deter you from aiming to test 100% of the designed user journeys in your application. This sort of functional testing is vital if you are to avoid releasing buggy software. Modern UI testing tends to fall into two camps. Automated testing (e.g. with Selenium) is used for some or all regression and smoke testing; the aim should be to automatically test as many user journeys in your application as possible. Manual testing is used in an exploratory fashion to identify obscure bugs, to reproduce the steps that trigger known bugs, and to cover user journeys that are too complex to automate.

Testability

You might naively think all software is equally easy or hard to test. But you only have to look at how developers write unit tests to realize this isn’t so. Without suitable hooks for testing, many functions can only be tested implicitly by calling them from somewhere else and inspecting the results. Such functions are not very testable. A well-designed function, by contrast, can be exercised directly by its own unit tests, which verify that it behaves correctly. In effect, this is what makes the function testable.
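To make this concrete, here is a minimal sketch (the function and figures are invented, using Node’s built-in test runner in TypeScript) of a function that is testable precisely because it can be exercised directly:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// A pure function with explicit inputs and outputs is easy to test directly.
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// The unit test exercises the function in isolation, rather than implicitly
// via some other code path that happens to call it.
test("applyDiscount reduces the price by the given percentage", () => {
  assert.equal(applyDiscount(200, 25), 150);
  assert.throws(() => applyDiscount(200, 150), RangeError);
});
```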

Further up the testing hierarchy, things become less clear-cut. Here, testability is about two things. Firstly, can you actually test the software? Secondly, how likely is it that your tests will reveal any bugs? The diagram below shows how these relate to the testability of your application.

[Diagram: Improving software testability]

For functional UI testing, there are some real challenges for both manual and automated tests. Let’s look at these in more detail before exploring ways to make your overall application more testable.

Challenges for automated testing

Automated testing involves getting a computer to interact with your UI and replicate the actions of a real user: selecting items on the screen, clicking buttons, entering data in form fields, and so on. The majority of test automation tools use some form of scripting to achieve this. These scripts first select an element in your UI, then perform some action on that element. (NB, in well-designed tests, that action may simply be verifying that the correct element is in the correct place on screen.)
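As a rough sketch of that pattern, here is what such a script might look like using selenium-webdriver in TypeScript; the URL, selectors and page contents are invented for illustration:

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Sketch of a typical automated UI check: select an element, act on it,
// then verify the result. The URL and selectors are hypothetical.
async function checkLogin(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/login");

    // Select elements, then perform actions on them.
    await driver.findElement(By.id("username")).sendKeys("demo-user");
    await driver.findElement(By.id("password")).sendKeys("demo-pass");
    await driver.findElement(By.css("button[type='submit']")).click();

    // Here the "action" is simply verifying that the expected element appears.
    const banner = await driver.wait(
      until.elementLocated(By.css(".welcome-banner")),
      5000
    );
    console.log("Banner text:", await banner.getText());
  } finally {
    await driver.quit();
  }
}

checkLogin().catch((err) => {
  console.error("Test failed:", err);
  process.exit(1);
});
```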

Most test automation systems are based on a scripting language, such as JavaScript. JavaScript can select elements on the page in several ways. These include (in rough order of complexity) CSS selectors (e.g. tag, ID, class or attribute), DOM-specific methods (e.g. getElementById, getElementsByName), and XPath. The problem is that, with the possible exception of XPath, all these selectors can be ambiguous. This leads directly to the biggest bane of every test automation engineer’s life: test maintenance. Each time you change your UI, you risk changing the selectors. Even simple CSS changes can have an effect. As a result, every change can break some or all of your tests, requiring your test scripts to be rewritten.

A related issue is the order in which matching elements appear on the page. Selector engines are relatively dumb: the first element that matches the selector is the one the script acts on. This can cause problems when your dev team decides to clean up their codebase and reorders the markup. Again, this triggers additional test maintenance and reduces software testability.
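To illustrate the fragility (again with invented markup and selectors), here are three ways a script might target the same button, each of which breaks for a different reason when the page changes:

```typescript
import { By } from "selenium-webdriver";

// Three ways to locate the same hypothetical "Add to basket" button.
// Each one is fragile in a different way.

// Breaks if the button's class names change during a restyle.
const byCss = By.css(".product-card .btn.btn-primary");

// Breaks if another button with the same class is added earlier in the DOM,
// because the first match wins.
const byFirstMatch = By.css("button.btn-primary");

// Breaks if the surrounding structure changes, e.g. a wrapper div is added.
const byXPath = By.xpath("//div[@class='product-card']/div[2]/button[1]");
```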

Challenges for manual testing

Manual testers have one key advantage over test automation engineers – they are human and therefore intelligent. This means that things like restyling your site, moving elements on the page, and even renaming buttons shouldn’t faze them. However, they still face some real issues. For a start, they will generally be performing tests from a single, static location, while many sites and applications rely on geolocation information, which is hard to test. Another key problem is application state. A real-life user quickly builds up a complex application state; replicating this in manual tests is time-consuming and hard, and repeating it test after test is even harder.

The test data problem

One problem is common to both manual and automated testing: test data. If you are going to test your system properly, you need suitable test data. You might just use a copy of your real customer data, but that has problems. If your system handles sensitive data (e.g. HIPAA-regulated or banking data), you can’t simply let anyone access it. Equally, if you are building a new system, you may not have any real data yet. In both cases, you will end up having to create fake test data. That might sound easy enough, but it comes with a number of problems, which we explore later.

Improving testability

Below the system level in the testing hierarchy, improving software testability is largely about improving your code. This involves things like adding explicit unit tests, using tools that measure test coverage, holding code reviews, and enforcing a consistent code style. At the integration test stage, it involves understanding how each subsystem should function and may involve creating code to test for this. Where things get interesting is at the system testing stage.
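As one illustration, if your project happens to use Jest (an assumption for this example, not a requirement), you can measure coverage on every run and fail the build when it slips below an agreed threshold:

```typescript
import type { Config } from "jest";

// Example Jest configuration that measures coverage on every run and
// fails the build if coverage drops below the agreed thresholds.
const config: Config = {
  collectCoverage: true,
  coverageDirectory: "coverage",
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```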

Making your UI more testable

So, let’s look at what you can do to make your UI more testable. The following list is by no means exhaustive but shows you some of the ways you can improve matters.

Better and consistent element naming

Your developers can improve software testability if they simply make sure every element in the UI is correctly, predictably and uniquely named. This is a challenge in large projects where you may have big teams of frontend engineers. It is also particularly challenging when developing UIs for different platforms.
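One widely used convention, sketched below with invented names, is to give every interactive element a stable, unique test identifier that is independent of its styling:

```typescript
import { By } from "selenium-webdriver";

// Assume the markup gives each interactive element a stable, unique
// test identifier, independent of its styling classes, e.g.:
//
//   <button class="btn btn-primary" data-testid="checkout-submit">
//     Proceed to checkout
//   </button>
//
// Tests can then target that attribute and survive restyles and re-layouts.
const checkoutButton = By.css("[data-testid='checkout-submit']");
```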

Adding tools for testers

Manual testing will be much simpler if you build in tools specifically for this. For instance, you might make it simple for the application to adjust its apparent location. You might also create tools that make it easy to place the application into a known state.
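As a rough sketch of what such a tool could look like in a web app (the flag, hook names and state shape are all invented), you might expose a small set of hooks that are only registered in test builds:

```typescript
// testHooks.ts – only registered in test builds, never in production.
interface TestHooks {
  // Override the location the app believes it is in.
  setFakeLocation(lat: number, lon: number): void;
  // Drop the app straight into a known state, e.g. a half-completed order.
  loadState(snapshot: Record<string, unknown>): void;
}

declare global {
  interface Window {
    __testHooks?: TestHooks;
  }
}

export function registerTestHooks(app: {
  setLocation(lat: number, lon: number): void;
  restoreState(snapshot: Record<string, unknown>): void;
}): void {
  // Guard so the hooks never ship to real users.
  if (process.env.NODE_ENV !== "test") return;

  window.__testHooks = {
    setFakeLocation: (lat, lon) => app.setLocation(lat, lon),
    loadState: (snapshot) => app.restoreState(snapshot),
  };
}
```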

Accurate test environment

Both manual and automated testing will be more accurate if your test environment accurately reflects the production environment. Clearly, this can be challenging, but as a minimum, you need to ensure the backend is as functionally similar as possible. This means using the same software versions, similar virtual server or container specifications, and not artificially limiting things like data throughput.

Internal logging

Manual testing and, to a lesser extent, automated testing can be helped by ensuring your application accurately logs its internal state. This makes it easy to check what is going on during any test. Of course, the more logging you add, the worse your software performs, so it may suffice simply to make the backend API calls visible to your test team.
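For example (a sketch, with a hypothetical logging flag you would wire to your own configuration), you could wrap your API calls so testers can see every request, its status and how long it took:

```typescript
// apiClient.ts – wraps fetch so testers can see every backend call.
// The LOG_API_CALLS flag is hypothetical; wire it to your own config.
const LOG_API_CALLS = process.env.NODE_ENV !== "production";

export async function apiFetch(
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> {
  const started = Date.now();
  const response = await fetch(input, init);

  if (LOG_API_CALLS) {
    // Keep the log lightweight: method, URL, status and duration only.
    console.info(
      `[api] ${init?.method ?? "GET"} ${String(input)} -> ` +
        `${response.status} (${Date.now() - started}ms)`
    );
  }
  return response;
}
```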

Consistent design

Consistent UI design is at the heart of good UX. But if you manage to make your design really consistent, you also improve your software testability. In turn, this makes life much easier for testers: test modules are easier to reuse, and testers are more likely to understand how the UI works.

Use of AI

One key step you can take to improve your software testability is to use AI. This can help in two ways. Firstly, it can help create better test data. Secondly, it can avoid many of the issues discussed above relating to test automation.

Better test data

Realistic test data is essential, but using real data or simply random data may not be practical. A good alternative is to use synthetic data. To create synthetic data, you take the real production data and use it to build a machine learning model. This model is then used to generate data that statistically matches the real data but is completely anonymous. However, in practice, it is really hard to do this properly.
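To give a flavour of the idea, here is a deliberately naive sketch: it fits simple per-column statistics to numeric production data and samples new, anonymous rows from them. Real synthetic data generation also has to preserve correlations, handle categorical and rare values, and guard against re-identification, which is where the difficulty lies.

```typescript
// syntheticData.ts – naive per-column model: fit mean/std-dev on numeric
// production data and sample new values from a normal distribution.
// This ignores correlations between columns; real tools model those too.

type Row = Record<string, number>;

interface ColumnModel {
  mean: number;
  std: number;
}

function fitModel(rows: Row[]): Record<string, ColumnModel> {
  const model: Record<string, ColumnModel> = {};
  for (const column of Object.keys(rows[0])) {
    const values = rows.map((r) => r[column]);
    const mean = values.reduce((a, b) => a + b, 0) / values.length;
    const variance =
      values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
    model[column] = { mean, std: Math.sqrt(variance) };
  }
  return model;
}

// Box-Muller transform: turn two uniform samples into one normal sample.
function sampleNormal(mean: number, std: number): number {
  const u = Math.random() || Number.MIN_VALUE;
  const v = Math.random();
  return mean + std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

export function generateSynthetic(rows: Row[], count: number): Row[] {
  const model = fitModel(rows);
  return Array.from({ length: count }, () => {
    const row: Row = {};
    for (const [column, { mean, std }] of Object.entries(model)) {
      row[column] = sampleNormal(mean, std);
    }
    return row;
  });
}
```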

Intelligent test automation

Here at Functionize, we have developed an intelligent test agent for UI testing. This agent is able to take a test plan written in plain English and model how your UI is meant to work. This makes achieving 100% test coverage far easier and thus improves software testability. It also means that the system can cope with inconsistent design and naming. Effectively, our system creates tests that self-heal because they understand exactly what each element in your UI is actually doing.

Conclusions

Improving software testability will help improve your overall software development lifecycle. For test automation, Functionize’s intelligent test agent significantly improves testability. However, it always pays to develop high-quality test data and to test on a replica of your production environment.