October 7, 2022

Machine learning in software testing

Machine learning is transforming how we tackle many day-to-day tasks. It allows us to create self-driving cars, powers the intelligent assistants we use in our homes, and helps ensure the cloud runs smoothly. Without machine learning, most of modern artificial intelligence would not exist. Machine learning also has real applications in software testing, where it offers us a better way to work.
Software testing has always been an essential part of the software development lifecycle. Originally, it was a manual process that took significant time and effort. Then came test automation, allowing testing to become more efficient and fast. Now, artificial intelligence (AI) is transforming software testing in ways that could not have been dreamt of a decade ago. This includes simplifying test creation, reducing the need for test maintenance, and driving new ways to assess the results.

Overall, machine learning in software testing gives you three major benefits:

- Faster and easier test creation, allowing you to significantly increase your test coverage

- Simpler test analysis, making use of techniques like computer vision to test more of the UI than traditional test scripts

- Reduced test maintenance, allowing your team to focus on what really matters: ensuring the quality of your software

Taken together, this will transform the way your team does test automation.


The link between Artificial Intelligence and machine learning


You will often see the terms artificial intelligence and machine learning used interchangeably. This makes sense to some extent, but it is important to understand the subtle differences. AI is a very broad term that covers anything where a computer applies some form of intelligence to solve a problem; in other words, it solves the problem without being explicitly programmed to do so. Machine learning is one of the key techniques we use to achieve this. It forms the basis for many AI systems, but not all.

Put simply, machine learning is when a computer learns to do tasks by itself. The computer learns to recognize certain patterns, then uses this pattern recognition to trigger appropriate actions. Machine learning comes in three forms:

- Supervised learning is similar to how we teach a child to read. You show the computer lots of examples of the thing you want it to learn, and it works out how to recognize them. This means you need a large volume of labeled data for the computer to learn from.

- Unsupervised learning is a bit different. It's more like how we learn our way around a new town. We start off not really knowing where any of the stores or amenities are, but over time we learn how different locations relate to each other. In the same way, a computer can look at a set of unlabeled data and identify patterns and links within it.

- Reinforcement learning is most like learning to walk. A baby starts off unable to support themselves; after a while they learn to crawl, then take a few steps, then walk, and finally run. Over time they are learning how to balance and move. Get it wrong and they fall over; get it right and they are praised by their parents and feel a sense of achievement. In a similar way, in reinforcement learning the computer is given some form of "reward" when it makes a good decision. Over time, it gets better and better at making the right choice.
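To make the supervised case concrete, here is a minimal sketch in plain Python (no ML library): a one-nearest-neighbor classifier that generalizes from hand-labeled examples. The data points, labels, and function names are invented purely for illustration.

```python
import math

# Labeled training data: (width, height) of UI elements, tagged by hand.
# A supervised learner generalizes from examples like these.
training_data = [
    ((120, 40), "button"),
    ((130, 35), "button"),
    ((300, 30), "text_field"),
    ((280, 28), "text_field"),
]

def classify(features):
    """Predict a label by finding the closest labeled example."""
    _, label = min(
        training_data,
        key=lambda example: math.dist(example[0], features),  # Euclidean distance
    )
    return label

print(classify((125, 38)))  # near the button examples -> "button"
print(classify((290, 29)))  # near the text-field examples -> "text_field"
```

The key point is the dependence on labeled examples: with no labels, this approach has nothing to learn from.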


Other types of artificial intelligence

There are several other forms of AI. Here are a few you may have heard of.

Computer vision

Computer vision involves teaching a computer to interpret still and moving images. This is quite complex and typically involves several stages. First, the computer has to work out which bits of the picture are related to each other. Identifying distinct objects like this is called image segmentation. Next, it tries to identify what each object actually is. Finally, the computer has to work out how all the different objects relate to one another. This is a key technique for self-driving vehicles, which need to look at the road ahead and spot any risks or obstacles.
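The image segmentation step above can be sketched with a toy example: a flood fill that groups adjacent foreground pixels of a tiny binary "image" into distinct objects. Real computer vision systems use far richer models; this only illustrates the idea of finding which bits of a picture belong together.

```python
# A tiny binary "image": 1 = foreground pixel, 0 = background.
image = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]

def segment(img):
    """Return a list of objects, each a set of (row, col) pixel coordinates."""
    seen, objects = set(), []
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and (r, c) not in seen:
                stack, obj = [(r, c)], set()
                while stack:  # iterative flood fill over 4-connected neighbors
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if img[y][x] != 1:
                        continue
                    seen.add((y, x))
                    obj.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                objects.append(obj)
    return objects

print(len(segment(image)))  # two distinct objects in this image
```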

Natural language processing

Natural language processing (NLP) means teaching computers to understand human languages. This is one of the central technologies behind Amazon Alexa and Apple's Siri. There are several challenges with NLP. Firstly, human languages are really complex. Secondly, there are many layers of meaning, which are often contextual. Thirdly, there are several ways to say each thing. For example, the following three sentences all mean the same thing: "Your dinner is ready." "Supper is on the table." "Come and eat!" NLP breaks sentences down into grammatical parts and sees how these relate to each other. It then compares this with its knowledge of grammar to parse the meaning of the sentence.
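The "many ways to say one thing" problem can be illustrated with a deliberately crude intent matcher that maps the three example sentences onto one meaning by looking for characteristic words. Real NLP parses grammar and context; the keyword table and intent name here are invented for illustration only.

```python
# Crude sketch of intent recognition: different surface forms map to one
# intent if they contain any of its characteristic keywords.
INTENT_KEYWORDS = {
    "meal_ready": {"dinner", "supper", "eat", "table"},
}

def detect_intent(sentence):
    """Return the first intent whose keywords overlap with the sentence."""
    words = {w.strip(".!?,").lower() for w in sentence.split()}
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:  # any keyword present
            return intent
    return "unknown"

for s in ["Your dinner is ready.", "Supper is on the table.", "Come and eat!"]:
    print(detect_intent(s))  # "meal_ready" for all three
```

A keyword table like this breaks down as soon as wording gets more varied, which is exactly why real NLP needs grammar and context rather than word lists.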

Deep learning

The ultimate form of machine learning is called deep learning. This uses large, layered artificial neural networks, loosely inspired by the brain, to solve problems without human intervention. A great example is DeepMind's AlphaGo Zero, which taught itself to play the game of Go and became virtually unbeatable. It did so by repeatedly playing games against itself, learning from its mistakes and steadily improving.


How AI is applied to software testing

Software testing has often languished behind software development. For ages, it was the poor relation in the software development process. But AI is allowing it to catch up, and even take the lead over the rest of the SDLC. There are three key areas where machine learning helps software testing.


Test creation

Automated testing requires you to create and run tests. Traditionally, this meant using a framework such as Selenium. Selenium replicates the actions of a real user interacting with the UI. It does this using a combination of element selectors and actions. Selectors allow it to identify the correct element on the screen. Then it can perform actions such as clicking, hovering, entering text, or just verifying that an element exists. You control all this by creating a detailed test script.
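The selector-plus-action pattern above can be sketched with a toy page model. This is not Selenium itself (real Selenium drives an actual browser); the page structure, selectors, and helper names are all invented to show how every step of a script has to name its target element.

```python
# A toy "page" as a scripting framework might see it: elements keyed by
# their id selector. All names here are invented for illustration.
page = {
    "#email":  {"tag": "input",  "value": ""},
    "#signup": {"tag": "button", "text": "Sign up"},
}

def find_element(selector):
    """Locate an element by selector; fail loudly, as a real script would."""
    if selector not in page:
        raise LookupError(f"No element matches {selector!r}")
    return page[selector]

def type_text(selector, text):
    find_element(selector)["value"] = text

def click(selector):
    find_element(selector)["clicked"] = True

# A "test script": every step names its target with a selector, then
# performs an action or verifies an outcome.
type_text("#email", "user@example.com")
click("#signup")
assert page["#email"]["value"] == "user@example.com"
assert page["#signup"].get("clicked") is True
```

Notice that if `#signup` were renamed in the markup, `find_element` would raise immediately; this fragility is exactly the maintenance problem discussed below.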

The problem with test scripts

Test scripts are remarkably hard to get right. Each script is a mini software development project. Creating a new script is iterative and slow, requiring frequent rounds of testing and debugging. Even a simple script can take hours to create. When you add in the requirement for cross-browser testing this grows to days. This is one reason why few companies manage to automate more than a fraction of their regression tests.

How AI improves the process

AI allows you to create tests by simply clicking your way through your test case step by step. Creating tests like this takes minutes rather than hours, since you’re essentially performing the test manually while the underlying system automates it. In the background, the system is building ML models of how your site works, recording huge amounts of data as it goes. It can learn what button is being pressed and why, using techniques like NLP to understand what the button does. Creating a model is important—it means the test will work on any browser or device. It will even cope with dynamic content and 3rd party widgets, such as PayPal “Buy now” buttons or HubSpot forms.


Test analysis

Test scripts are generally simple pass-fail affairs. The test works by checking whether the outcome of an action is what you expected. You have to tell it precisely what to look for in order for this to work. Superficially, this makes test analysis really easy—either the test passed or it failed. But the reality is much more complex.

Issues with interpreting test results

Ideally, your tests should always be reliable. A test should only fail if there's a bug or defect; conversely, a test should only pass if everything is working as expected. However, there are two problems with traditional test automation.

- It only tests things that you explicitly tell it to. This leaves large amounts of your UI untested, unless you spend significant effort creating your tests. As a result, you may find there's a glaring mistake that none of your tests are picking up.

- Many test failures are false positives, meaning they aren't really failures at all. This is because of how test scripts select elements on the page: these selectors are brittle and can change every time your site is updated or redesigned.

Taken together, these mean you can’t be certain whether your tests really passed or actually failed.

How AI improves test analysis

AI can improve test analysis in three key ways. Firstly, you can leverage computer vision and machine learning to increase the amount of the UI you test. This visual testing approach allows you to compare how your site looks now against previous test runs; if anything unexpectedly changes, it can be flagged as a potential failure. Secondly, tests are more reliable, because they are robust to changes in the UI, making false positives far less likely. Thirdly, the ML model constantly learns how your site should perform. For instance, it learns how long each page load should take and uses this to create an intelligent wait before the first interaction with the page. So, if a page suddenly takes longer to load, it knows to report this as a test failure.
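One simple way a learned performance baseline could work is sketched below: flag a page load as suspicious when it falls far outside the mean of past observations. The timing data and threshold are invented, and this is not Functionize's actual algorithm; it only illustrates the principle of learning what "normal" looks like.

```python
import statistics

# Page-load times (seconds) observed over previous test runs -- invented data.
history = [1.1, 1.3, 1.2, 1.0, 1.2, 1.1, 1.3]

def load_time_suspicious(new_time, past, sigmas=3.0):
    """Flag a load time well above the learned baseline (mean + N sigma)."""
    baseline = statistics.mean(past)
    spread = statistics.stdev(past)
    return new_time > baseline + sigmas * spread

print(load_time_suspicious(1.25, history))  # within the normal range -> False
print(load_time_suspicious(4.0, history))   # far outside it -> True
```

A fixed timeout in a traditional script would either be too generous to catch the slowdown or too tight and flaky; a learned baseline adapts to each page.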


Test maintenance

Test maintenance is the bane of every test engineer's life. Every time the UI changes or the site logic is updated, all the tests break. This means you waste hours fixing and debugging your tests. This is euphemistically called test maintenance, but that hides the fact that it shouldn't be necessary in the first place.

Why test maintenance exists

There’s one simple reason why your test scripts require maintenance—selectors. As we said above, selectors are used to tell Selenium which elements to interact with. The problem is, these selectors tend to change whenever your site changes. This happens regardless of how carefully you choose your selectors in the first place. And if the selector changes, either your script fails immediately, or it proceeds with the wrong element. Either way, the result is you get a test failure that needn’t have happened.

How AI removes the need for routine maintenance

AI can slash the need for test maintenance, because it relies on a complete machine learning model of your site. If an element changes or moves, the system works out what happened and still selects the correct one. This is known as self-healing. For example, Functionize tests can cope if you move your "sign up" button to a different place on the page and change its label to "register". AI also allows you to implement intelligent test editing, for instance updating a test step directly from a screenshot by making use of all the ML data collected during each test run. We call this feature Smart Screenshots, and it transforms test editing and debugging.
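The self-healing idea can be sketched as follows: when the recorded selector no longer matches, score every element on the page against the attributes the model remembers and pick the best candidate. The elements, attributes, and scoring here are invented; a real system learns from far more signals than this.

```python
# Attributes the model remembers about the "sign up" button -- invented data.
remembered = {"tag": "button", "text": "sign up", "role": "cta"}

# The page after a redesign: the button moved and was renamed "register".
page_elements = [
    {"tag": "a",      "text": "log in",   "role": "nav"},
    {"tag": "button", "text": "register", "role": "cta"},
    {"tag": "button", "text": "cancel",   "role": "secondary"},
]

def similarity(element, memory):
    """Count how many remembered attributes still match this element."""
    return sum(1 for key, value in memory.items() if element.get(key) == value)

def heal(elements, memory):
    """Pick the element that best matches what the model remembers."""
    return max(elements, key=lambda el: similarity(el, memory))

print(heal(page_elements, remembered)["text"])  # "register"
```

Where a brittle selector would simply fail, the scoring approach still lands on the renamed button because most of its other attributes survived the change.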

Of course, sometimes a change may be more significant. Maybe your developers got rid of a button completely, or changed every item in a drop-down menu. That's not a problem if you are using an AI-powered software testing platform: it will realize there's a problem and come up with suggestions for what might have changed. These SmartFix suggestions are based on data gathered across millions of different tests. You simply need to click on the correct one and your test will be updated.


Trends in AI test automation

Machine learning in software testing is still quite new. But it is advancing as rapidly as every other area of artificial intelligence. Here are a few trends you should look out for in the near future.


Customer-journey driven testing

First, testing just happened immediately before release. Then it shifted left to enable earlier and more frequent testing. More recently, testing shifted right, with techniques like Canary Testing and Dark Launching. These allow you to test whether new features affect the stability of your backend, but they don’t allow you to test the usability of your app. That is about to change thanks to AI-powered in-production testing.

Currently, everyone relies on passive monitoring to test systems in production. The focus is entirely on performance and responsiveness. Very little attention is paid to the user’s actual experience. In effect, you are assuming that the pre-release testing has solved any problems.

And there’s definitely no attempt to check if 3rd party content is working correctly! With AI-powered testing, you can run any of your tests against your live production system and get live insights into how your system is performing.


Gap Analysis

You've probably heard people say that a modern calculator has more computing power than the Apollo moon missions, and that a modern laptop outperforms Cray supercomputers from just a couple of decades ago. This increase in power has been matched by a similar increase in the complexity of applications. Just compare the original arcade game, Pong, with something like Candy Crush Saga. The upshot is that testing every part of these applications is nearly impossible.

You can test the more obvious user flows, but for certain there will be flows you miss. This is OK if those flows aren’t being used, but what if all your users are doing something unexpected and triggering a bug? Fortunately, machine learning is soon going to give us the ability to actively monitor how users interact with your application. If the system sees lots of people taking an untested user journey, it can alert you that there’s a gap in your testing.
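In principle, such an alert boils down to comparing the journeys users actually take against the journeys your tests cover. The sketch below does exactly that over invented data; journey names and the popularity threshold are illustrative only.

```python
from collections import Counter

# Journeys the test suite covers, and journeys real users actually take.
# Each journey is a tuple of page names -- all invented for illustration.
tested = {("home", "search", "product", "checkout")}

observed = Counter({
    ("home", "search", "product", "checkout"): 900,
    ("home", "blog", "product", "checkout"):   450,  # popular but untested
    ("home", "contact"):                        12,
})

def coverage_gaps(observed_journeys, tested_journeys, min_users=100):
    """Return popular journeys that no test currently exercises."""
    return [journey for journey, count in observed_journeys.items()
            if count >= min_users and journey not in tested_journeys]

for journey in coverage_gaps(observed, tested):
    print(" -> ".join(journey))  # home -> blog -> product -> checkout
```

The threshold matters: without it, every rare one-off journey would trigger an alert, burying the gaps that actually put users at risk.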


Autonomous testing

We have seen several ways in which machine learning is transforming software testing. But everything we have shown you so far still requires some form of human intervention. Now, however, we are seeing a trend towards autonomous testing.

According to a Gartner report, that means:

“... machines are able to evaluate an application, decide what, when, where and how testing should be performed, and summarize the results to provide a release/no-release decision based on test results.”

If you're looking for a paid solution, why not check out what Functionize can offer you? Just as with UI tests, we make it really easy to create and run API tests. One of the most powerful features we offer is the ability to store API responses in variables that you can then call in your tests. This is invaluable when, for instance, you need to test with an API key that changes each time.
