AI testing with Functionize’s intelligent test agent
The problem with script-based testing
Selenium was revolutionary when it was created almost two decades ago. But it has always had a number of key flaws that have limited its utility and prevented most companies from getting the full benefits of automated testing. Let's look at a few of these, then show how AI testing solves them.
Creating a set of test scripts for a UI is effectively a complete software development project in its own right. Each test script has to be written step-by-step, tested and debugged. And this process is entirely manual. Sometimes, it can even be impossible to test some functions, especially if your UI imports 3rd party content. As a result, test scripting is extremely slow and inefficient. Even under the best conditions, it can take days to create a single script.
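To make the scripting burden concrete, here is a minimal sketch of what a hand-written test looks like. The page model, selectors, and helper functions are all hypothetical stand-ins (a real Selenium script would drive a live browser), but the step-by-step shape of the work is the same:

```python
# Hypothetical sketch of manual test scripting. A dict stands in for the DOM;
# a real script would use Selenium against a browser, but every step still
# has to be written, run, and debugged by hand, exactly as below.

FAKE_PAGE = {
    "#username": {"tag": "input", "value": ""},
    "#password": {"tag": "input", "value": ""},
    "#submit": {"tag": "button", "clicked": False},
}

def type_into(page, selector, text):
    """Each interaction has to be scripted explicitly."""
    page[selector]["value"] = text

def click(page, selector):
    page[selector]["clicked"] = True

def run_login_test(page):
    # Every step below was chosen, written, and verified manually.
    type_into(page, "#username", "alice")
    type_into(page, "#password", "s3cret")
    click(page, "#submit")
    # ...and every expected outcome has to be asserted explicitly too.
    assert page["#submit"]["clicked"]
    return "PASS"

print(run_login_test(FAKE_PAGE))  # prints "PASS"
```

Multiply this by hundreds of flows and screens, and the "days per script" figure above is easy to believe.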
Few UIs are expected to run on only one browser. As a result, cross-browser testing is an essential aspect of automated testing. The problem is, adapting a test script to work on a different browser always requires significant effort. It can even require completely rewriting the script. When you multiply this by the large number of browser/platform combinations, the extra effort becomes significant.
Test scripts use selectors to locate elements in the UI, then perform actions like clicking buttons, toggling switches, or entering text. Unfortunately, these selectors are very brittle. Even simple CSS changes can alter every selector on the page. And where there are multiple possible matches, Selenium will simply select the first one. As a result, every change to your UI will break most of your tests. These then have to be fixed, meaning test maintenance can absorb half a test engineer's time.
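A small illustrative sketch of this brittleness, using a hypothetical page model in place of a real DOM: one CSS class rename leaves the selector matching nothing, even though the button itself is unchanged.

```python
# Hypothetical sketch of selector brittleness. The buttons are functionally
# identical before and after a stylesheet refactor, but the class-based
# locator silently stops matching.

page_v1 = [
    {"tag": "button", "class": "btn-primary", "text": "Login"},
    {"tag": "button", "class": "btn-primary", "text": "Register"},
]

# After a CSS refactor: same buttons, new class names.
page_v2 = [
    {"tag": "button", "class": "btn-main", "text": "Login"},
    {"tag": "button", "class": "btn-main", "text": "Register"},
]

def find_by_class(page, cls):
    """Like Selenium, return only the FIRST matching element (or None)."""
    for el in page:
        if el["class"] == cls:
            return el
    return None

# v1: the locator works, but note it quietly picked the first of two matches.
assert find_by_class(page_v1, "btn-primary")["text"] == "Login"
# v2: a pure styling change breaks every test that used this selector.
assert find_by_class(page_v2, "btn-primary") is None
```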
When a test script fails, the test engineer has to work out what triggered the failure, then check whether it was a real defect or just a problem with the script. The real difficulty comes with tests that only fail many steps after the underlying problem. For instance, if the wrong product is added to a shopping cart, the test may proceed perfectly until it comes to checking out the cart. Tracking down such failures takes ages, even for an experienced test engineer.
How can AI testing solve the problem?
AI testing leverages artificial intelligence to solve challenges like the ones listed above. Let’s look at each problem in turn and see how AI testing might help.
1. Test creation. The problem with test scripts is that they have to be written by skilled test engineers. The engineer takes a detailed test plan and painstakingly translates it into a working script. This means finding the best selector for each element that needs to be interacted with and determining exactly which elements need to be verified at each step. But computers can now be taught to understand complex natural language. NLP (natural language processing) has advanced by leaps and bounds in the last five years, and it is now good enough to interpret test plans and even user stories. Couple this with other forms of AI and you can do away with test scripts completely.
2. Cross-browser testing. The main reason cross-browser test scripts are challenging is that each browser renders the UI slightly differently and each browser needs a different version of Selenium WebDriver. This means each selector has to be checked and updated for every browser. AI testing allows you to use machine learning to avoid this. ML can build a model of what the UI is really doing, so instead of using a single selector for each element, you can create a fingerprint. This means the element can be located in every browser.
3. Test maintenance. In theory, AI testing can eliminate test maintenance by combining machine learning with NLP and computer vision. Test maintenance is usually required following one of three things: a UI layout change, a CSS update, or a change in the text on a page element. For instance, the login button may move from the top left to the top right, be renamed "sign in", and be recolored. AI testing is able to cope with this. Firstly, the machine learning model knows that the button is still calling the login API. Secondly, NLP understands that "sign in" and "login" are synonymous. Thirdly, the system recognizes that the button is still next to the "register" button, which makes it likely to be the same button.
4. Failure analysis. When you move to AI testing, you can leverage a whole range of AI approaches. Computer vision is particularly powerful for analyzing test failures. A manually created test script never checks every single element on the screen; to do so would be impractical. But computer vision allows you to compare each screen with what was expected and identify any changes. Add real intelligence and it can even learn to ignore things that are known to change, such as the date. AI testing also implies that your test system understands how your UI works, using machine learning to constantly improve its knowledge. As a result, when things change, it recognizes the change the same way a human tester does, so it knows not to flag it as a failure.
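As a rough sketch of the fingerprint idea described above, the toy matcher below scores candidates on several signals at once: the API the element calls, text synonyms, and a neighboring element. The weights and synonym table are invented for illustration; a real system would learn these signals with machine learning rather than hard-code them.

```python
# Toy multi-signal "fingerprint" matcher. All field names, weights, and the
# synonym table are hypothetical; this only illustrates why multiple weak
# signals survive a redesign that breaks any single selector.

SYNONYMS = {"login": {"login", "sign in", "log in"}}

def fingerprint_score(candidate, fingerprint):
    """Score a candidate element against a stored multi-feature fingerprint."""
    score = 0.0
    if candidate["api_call"] == fingerprint["api_call"]:
        score += 0.5                      # still calls the login API
    texts = SYNONYMS.get(fingerprint["text"], {fingerprint["text"]})
    if candidate["text"].lower() in texts:
        score += 0.3                      # "Sign in" counts as "login"
    if candidate["neighbor"] == fingerprint["neighbor"]:
        score += 0.2                      # still next to the register button
    return score

stored = {"api_call": "/api/login", "text": "login", "neighbor": "register"}

# After a redesign: new label, new styling, same behaviour.
redesigned = [
    {"api_call": "/api/register", "text": "Register", "neighbor": "sign in"},
    {"api_call": "/api/login", "text": "Sign in", "neighbor": "register"},
]

best = max(redesigned, key=lambda el: fingerprint_score(el, stored))
assert best["text"] == "Sign in"  # the renamed button is still found
```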
How Functionize AI testing helps
Functionize uses artificial intelligence to simplify the whole process of automated testing, from writing new tests to analyzing the test results. AI testing like this can transform your QA process and dramatically reduce your time to market.
Test creation with Natural Language Processing
NLP is our natural language processing engine: it takes test plans and converts them into tests. Test plans are written in plain English, so everyone on your team can contribute. These plans can be unstructured text, like the user stories your product team uses, or structured test plans, such as those produced by test management systems. In both cases, the engine compares the plan with your UI and works out what is meant to happen in the test.
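Functionize's engine internals aren't public, but a toy parser can show the general idea of treating a plain-English plan as executable steps. The regex patterns below are a crude, hypothetical stand-in for real NLP:

```python
# Toy plan-to-actions parser. The patterns and action tuples are invented
# for illustration; a real NLP engine handles far more varied phrasing.
import re

PATTERNS = [
    (re.compile(r'click (?:the )?"?([\w ]+?)"? button', re.I),
     lambda m: ("click", m.group(1).lower())),
    (re.compile(r'(?:enter|type) "([^"]+)" (?:into|in) (?:the )?([\w ]+?) field', re.I),
     lambda m: ("type", m.group(2).lower(), m.group(1))),
]

def parse_step(step):
    """Map one plain-English step to a structured action tuple."""
    for pattern, build in PATTERNS:
        m = pattern.search(step)
        if m:
            return build(m)
    raise ValueError(f"Could not interpret step: {step!r}")

plan = [
    'Enter "alice" into the username field',
    'Enter "s3cret" into the password field',
    'Click the "Login" button',
]
actions = [parse_step(s) for s in plan]
print(actions)
# prints [('type', 'username', 'alice'), ('type', 'password', 's3cret'), ('click', 'login')]
```

The point of the sketch is the interface, not the technique: the tester writes English, and structured actions fall out the other side.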
ML Engine is the brains of our intelligent test agent. It combines machine learning, computer vision, and NLP to learn how your UI really works. Your test plans act as a set of instructions that teach ML Engine about your application. It uses them to build a complex model of your entire application that then works in any browser. It can even deal with embedded content from 3rd parties (something that is almost impossible for Selenium). Importantly, each time you run a test, the model uses ML techniques such as reinforcement learning to become stronger.
Script-based tests break whenever you update your UI. But in our system, elements are selected using a complex fingerprint that looks at multiple features. Even if you change the UI, the element can still be identified. This makes Functionize tests self-healing. In turn, this means your testing isn’t interrupted by the need for test maintenance.
Visual testing and failure analysis
Analyzing the results of Functionize tests couldn’t be easier. For every test step, ML Engine captures before, during, and after screenshots, highlighting any unexpected result on the screen. This approach has two big advantages. First, anyone can verify that the test is doing what it is meant to. Second, anyone can instantly see any test failure and can then drill in to see more details (including the history of previous runs for that test).
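As a simplified sketch of this kind of visual comparison, the function below diffs two "screens" while skipping an ignore mask for regions that legitimately change, such as the date. Real visual testing compares rendered screenshots pixel by pixel; small labeled grids stand in here, and the mask itself would be learned rather than supplied by hand.

```python
# Toy visual diff with an ignore mask. Grids of labels stand in for
# screenshots; the ignore set stands in for learned "known to change" regions.

def diff_screens(expected, actual, ignore=frozenset()):
    """Return coordinates of cells that differ, skipping ignored regions."""
    return [
        (r, c)
        for r, row in enumerate(expected)
        for c, cell in enumerate(row)
        if (r, c) not in ignore and actual[r][c] != cell
    ]

expected = [
    ["logo", "nav", "date:2024-01-01"],
    ["hero", "text", "button"],
]
actual = [
    ["logo", "nav", "date:2024-06-30"],   # the date changed, as it should
    ["hero", "text", "BUTTON_BROKEN"],    # a genuine regression
]

# Without a mask, the harmless date change is flagged alongside the real bug.
assert diff_screens(expected, actual) == [(0, 2), (1, 2)]
# With a learned ignore mask, only the genuine regression remains.
assert diff_screens(expected, actual, ignore={(0, 2)}) == [(1, 2)]
```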
Our Root Cause Analysis engine identifies the most likely cause of a failure, even if it only shows up many steps later. It then uses the history of that test to come up with likely fixes, which it tests. These are ranked and presented so you can choose the best one. Select it with one click, and the change will be learned and propagated through all your tests.
In conclusion, Functionize's intelligent test agent has made AI testing a reality, not just a dream. Automated testing has been brought up to date to cope with the demands of modern, responsive, and dynamic UIs and web applications.