Intent: understanding what a test really does

Testing is just following a set of test steps and getting a result. So, creating test automation just requires writing those steps down in a script. Or at least, that’s what people sometimes think. In this blog, we explain why it is also vital to understand the intent behind each test step. 

Introduction

Intentions matter. As humans, we don’t just act without having some intention about the outcome, or at least, that’s usually the case! If you intend to make coffee, you first turn on the coffee machine. As a human, you understand that turning on the coffee machine indicates the intention to make coffee. This is as true of interactions with UIs as it is of everyday life. Indeed, good UX design is all about making it easy for a user to indicate their intentions. 

Intentions are really important when you are testing. In the old days, test plans were a set of steps for a human to follow when testing your product. Plans were written by humans for humans. The plan could ask the tester to “log in with the test account” and know that the human tester would understand what to do. More importantly, the human tester would understand why they needed to do it. In other words, they would understand the intent behind the test.

For traditional test automation, with simple test scripts, the developer takes on the role of the human tester, at least in terms of understanding the intent of the test. The difference is that the script will just dumbly follow the instructions given to it. If a page takes too long to load, the script will still try to interact with it unless you add explicit delays. If a button is replaced, the script will just click the wrong element, triggering a failure and the need for test maintenance.
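To see what this means in practice, here is a minimal sketch of the kind of defensive code this forces on the developer. It uses Selenium’s explicit waits; the URL and selector are placeholders, not taken from any real product:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Placeholder URL; substitute the page under test
browser = webdriver.Chrome()
browser.get('https://www.example.com/login')

# The script has no notion of "the page is ready", so we must
# encode it explicitly or the click will fire too early
wait = WebDriverWait(browser, timeout=10)
button = wait.until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name=loginbutton]"))
)
button.click()

Notice that the wait does nothing to capture why we are clicking. It merely papers over one of the ways a dumb script can fail.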

What is intent and why does it matter?

Dictionary definitions for intent are often quite circular. Merriam-Webster defines it as “a usually clearly formulated or planned intention”. Look up ‘intention’ and you find “what one intends to do or bring about”. So, for the sake of clarity, in this blog we use intent as follows.

Intent (noun): The aim behind a specific action or set of actions. 

This is easiest to understand if you think about creating new tests from scratch. Typically, you start with a user journey from the product team. This user journey takes the user from one state in the application to another; for instance, it might involve updating the user’s personal details. Here, it is pretty easy to know the intent. This user journey is then converted into a set of steps in a detailed test plan. Taken as a whole, this test plan still makes the intent clear. However, once you hand that test plan to a Test Automation Engineer, the intent can start to get blurred. And the computer that runs the automated test has no idea what the intent is at all.

“So what?” I hear you ask

You may think that it is OK that the computer doesn’t understand what is going on. After all, a human is in charge of creating the test script and analyzing the results. The problem is, over time both tests and UIs evolve. New engineers join the team and have to update or maintain existing tests. Tests even become obsolete as new tests are created. You need a proper understanding of the intent of a test in order to know if a new test duplicates it. Likewise, you need to understand if a step becomes redundant due to a change in the user journey. 

Intent in test automation

Test automation typically involves getting a computer to select an element on the UI, interact with it in some way, and then check the result. In other words, a test is simply a set of actions the computer should perform. The computer doesn’t care what a button does. It just cares that the script asks it to click on it. Having interacted, the script then tells the computer to check the result. Typically, this just involves checking to see if specific (new) elements have appeared. It is left up to the human to actually understand the intent.

A simple example

Let’s take one of the simplest possible examples, namely the login user journey. Almost all websites have some form of login. For simplicity, we will look at the Facebook login, since a previous blog of ours explained how to create the Selenium script for this. For the human creating the script, it is pretty obvious what the intent is:

from selenium import webdriver

# Start a browser session and load the login page
browser = webdriver.Chrome()
browser.get('https://www.facebook.com/')

# Enter the email address
user = browser.find_elements_by_css_selector("input[name=email]")
user[0].send_keys('test@testing.com')

# Enter the password
password = browser.find_elements_by_css_selector("input[name=pass]")
password[0].send_keys('Pass12345')

# Find and click the login button
login = browser.find_elements_by_css_selector("input[name=loginbutton]")
login[0].click()

But what about the computer running the test? As far as the computer is concerned, this test says nothing about intent. The script just tells it to:

  • Load a web page.
  • Look for a field and enter some data.
  • Find another field and enter some data.
  • Locate a button and click it.

Obviously, this is a toy example. When a human comes to inspect this script, they can still tell it is a login script. But what isn’t clear is whether this test is meant to pass or fail. Are the user details valid for a login or not?
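One way to make the intent explicit is to state the expected outcome in the script itself, rather than leaving it in the author’s head. Here is a minimal sketch continuing the example above; the selector for the logged-in page is an assumption, not Facebook’s actual markup:

# State the expected outcome: this is a *valid* login, so an
# element from the logged-in page should appear after the click.
# The selector below is an assumed example.
home = browser.find_elements_by_css_selector("a[title=Profile]")
assert len(home) > 0, "Expected a successful login, but the logged-in page did not appear"

Even this crude assertion records more intent than the original script. A future maintainer can now tell at a glance that the credentials are supposed to be valid.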

Why AI can make it harder

Here at Functionize, we have been developing an AI-powered test automation framework. Our aim is to simplify the process of test creation and test maintenance in order to boost your team’s productivity. At the heart of this lies our natural language processing engine, ALP™. This system takes test plans that are written in plain English and converts them into functional tests. The process uses a number of AI paradigms including NLP, machine learning, boosting, and computer vision. It combines these to create a detailed model of the UI based on the test plans provided.

Using this approach, the system is very good at learning the outcome of events. For example, if it clicks the login button, it knows this should result in moving to a new page with the user logged in. However, it doesn’t really understand what the new page is; it just knows that this is what happens. This means that while the AI is learning, it can become confused. To the AI, any page that loads may seem to be a valid outcome unless it is taught otherwise.

What happens if the intent isn’t obvious?

Understanding the intent of the test is integral to understanding what the outcome should be. This is something the AI simply can’t do on its own. Usually, if the test is well-structured, this isn’t a major problem. However, there are times when it goes wrong. Then it is necessary to manually adjust the test step to force the system to learn the correct outcome. This problem is exacerbated when companies use their own in-house terminology. This terminology is intimately familiar to the humans on the team, but to an AI it is just confusing.

How to add in the missing context?

Our engineers have been grappling with this issue over recent months. How can we add more context to test plans? How can we get around the issue of intent versus action? Our solution is to allow test plans to be augmented with contextual information. Once a test plan is loaded, you can step through it on screen. As you go, you can provide additional information using voice or text. For instance, you might provide the information

“Now log in a valid user with the name ‘Joe Bloggs’ and the password ‘12345ABCDE’.”

This information is captured alongside the test steps. It is parsed and related to the element selected, the action taken, and the associated data. This additional contextual information can then be incorporated into the AI model being generated by ALP™. The upshot is that our system is now able to understand the intent behind the test, which removes the need to manually adjust test steps later.
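ALP™’s internal representation isn’t public, so purely for illustration, you can picture each augmented step as the raw action bundled with the parsed intent. The field names below are hypothetical, invented for this sketch:

# Purely illustrative: an action annotated with its parsed intent.
# None of these field names come from Functionize's actual schema.
annotated_step = {
    'action': 'click',
    'selector': 'input[name=loginbutton]',
    'intent': {
        'goal': 'login',
        'expected_outcome': 'valid user is logged in',
        'data': {'name': 'Joe Bloggs', 'password': '12345ABCDE'},
    },
}

With the intent attached to the step, the system can judge whether the page it lands on matches the expected outcome, instead of treating any page load as success.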

What next?

We have already demonstrated this technology live when we attended STARWEST a few weeks back. People who came to our stand were able to interact with tests and see how this works in practice. Now we are working on integrating it into our product suite. Keep your eyes on this space!
