Continuous testing with Functionize’s intelligent test agent
The problem with traditional testing
The traditional split between development and testing causes real problems: it slows down the software delivery process and makes it inefficient. Below, we look at some of the reasons why.
Bugs are harder to pinpoint
When you test all your new code at once, it becomes much harder to identify the cause of a bug. The more code that has changed, the more potential culprits there are. Even worse, new code often triggers a bug that was always lurking in old code. All the test team can do to help is provide the steps to reproduce the bug.
Cross-browser testing multiplies the effort
Few UIs are expected to run on just one browser, so cross-browser testing is an essential part of automated testing. The problem is that adapting a test script to work on a different browser always requires significant effort; it can even mean rewriting the script completely. Multiply this by the large number of browser/platform combinations and the extra effort becomes substantial.
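To make that overhead concrete, here is a minimal, purely illustrative sketch of a scripted login test that has to be wired up separately for each browser. The URL, element ids, and the get_driver helper are invented for the example; the point is that every browser-specific quirk and workaround multiplies across the whole browser/platform matrix.

```python
# Illustrative only: a hypothetical login test that must be wired up
# separately for each browser. Every browser-specific quirk (waits,
# capabilities, workarounds) multiplies across the matrix.
from selenium import webdriver
from selenium.webdriver.common.by import By


def get_driver(browser_name):
    """Create a WebDriver for the requested browser (hypothetical helper)."""
    if browser_name == "chrome":
        return webdriver.Chrome()
    if browser_name == "firefox":
        return webdriver.Firefox()
    if browser_name == "safari":
        return webdriver.Safari()
    raise ValueError(f"Unsupported browser: {browser_name}")


def test_login(browser_name):
    driver = get_driver(browser_name)
    try:
        driver.get("https://example.com/login")  # placeholder URL
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()


if __name__ == "__main__":
    # The same logical test has to be exercised per browser/platform pair.
    for name in ("chrome", "firefox", "safari"):
        test_login(name)
```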
Tests need to be updated
Every new change and addition to your product needs new tests to be developed. If you make a lot of changes at once, you will end up rewriting almost all your tests before you can run them. This is not efficient and serves to increase the gap between development and testing.
It is an inefficient use of resources
In an efficient company, employees are never idle. But if you only test your code periodically, your test team will often be idle. Alternatively, you may have to bring in external contractors to ensure you have enough resources to complete your testing. Both approaches are inefficient and wasteful.
How continuous testing helps
The answer is to move to continuous testing. This means that each time a piece of code is pushed, you run a full set of tests (or at least a significant subset of them). Effectively, it is the missing piece of the CI/CD pipeline: continuously integrate new code, continuously test your product, and continuously release the updates. Let’s look at how continuous testing helps (a minimal sketch of the trigger is shown after this list).
1. Bugs are found immediately. This means it is much easier to pinpoint which code was at fault. It also reduces the risk that the fault becomes hard-baked into your codebase. There’s nothing worse than finding that a function that’s called hundreds of times is actually broken.
2. Bugs are much quicker to fix. Finding a bug as soon as the developer has pushed her code means she still knows the code inside out. As a result, she will be able to fix the bug much more quickly.
3. Tests can be updated incrementally. Rather than trying to update all your tests in one go, continuous testing allows you to update tests as the code is pushed. This makes your test team more efficient.
4. You are able to spread the workload. This allows you to ensure your whole test team is gainfully employed. It reduces your dependence on external test contractors and Testing as a Service, and it helps your test team be more effective in their work.
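As a rough illustration of the trigger described above, the sketch below shows a hypothetical entry point that a CI/CD job could call on every push. It simply runs the test suite (assumed here to be pytest) and fails the pipeline if any test fails; the "smoke" marker and script name are invented for the example.

```python
# Hypothetical CI entry point: invoked by your CI/CD tool on every push.
# A non-zero exit code fails the pipeline, so broken code is caught
# immediately instead of piling up for a later test phase.
import subprocess
import sys


def run_tests(run_smoke_only=False):
    """Run the full suite, or just a fast smoke subset (hypothetical marker)."""
    cmd = ["pytest", "tests/"]
    if run_smoke_only:
        cmd += ["-m", "smoke"]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    # e.g. run with "--smoke" for a quick pre-merge check
    sys.exit(run_tests(run_smoke_only="--smoke" in sys.argv))
```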
Barriers to continuous testing
Continuous testing automatically implies automated testing. It can only work if all tests can be triggered automatically when new code is released. However, there are problems with test automation that hinder continuous testing. The biggest of these is test maintenance. The issue is that test scripts are notoriously brittle. Even small changes in the UI or code can cause all your tests to fail. This means all those tests need to be debugged and fixed, instantly negating any benefit from continuous testing.
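The sketch below illustrates that brittleness with a deliberately fragile, hypothetical example: the test step is pinned to one hard-coded CSS selector, so renaming or restyling the button breaks the test even though the application still works. The URL and selector are invented for the example.

```python
# Illustrative only: a script-based step pinned to one hard-coded selector.
# If the button is renamed, restyled into a different container, or its id
# changes, this step raises NoSuchElementException and the whole run fails,
# even though the application itself still works.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/cart")  # placeholder URL

# Brittle: any change to this exact id/class structure breaks the test.
driver.find_element(By.CSS_SELECTOR, "#checkout > button.btn-primary").click()

driver.quit()
```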
How Functionize can help
Functionize can help you achieve continuous testing. We use artificial intelligence to simplify the whole process of automated testing, from writing new tests to analyzing the test results. Our system integrates with all the major CI/CD tools, and we are agnostic to your choice of test management tool. The end result is a test solution that is easy to integrate into your existing systems and easy for your staff to switch over to.
Test creation with NLP
NLP is our natural language processing engine that takes test plans and converts them into tests. Test plans are written in plain English, so anyone on your team can contribute (unlike Appium, which requires skilled developers). These plans can be unstructured text (like the user stories your product team uses) or structured test plans, such as those produced by test management systems. In either case, NLP compares the test plan with your UI and works out what is meant to happen in the test. This means continuous testing can be done almost without human intervention.
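For illustration only, here is the kind of plain-English plan such an engine can start from, contrasted with the hand-scripted equivalent a developer would otherwise have to write and maintain. The plan wording, selectors, URL, and credentials are invented for the example and do not reflect Functionize's actual syntax.

```python
# Illustrative contrast only. The plain-English plan below is the kind of
# input a natural-language engine can work from; the scripted version is
# roughly what a developer would otherwise write and maintain by hand.
from selenium import webdriver
from selenium.webdriver.common.by import By

PLAIN_ENGLISH_PLAN = """
1. Open the login page.
2. Enter a valid username and password.
3. Click "Sign in".
4. Verify that the account dashboard is displayed.
"""


def scripted_login_test():
    """The hand-scripted equivalent of the plan above (hypothetical selectors)."""
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("demo-user")
        driver.find_element(By.NAME, "password").send_keys("demo-pass")
        driver.find_element(By.XPATH, "//button[text()='Sign in']").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```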
ML Engine
ML Engine is the brains at the heart of our intelligent test agent. It uses artificial intelligence to learn how your UI really works. The test plans from NLP act as instructions that teach ML Engine about your application. It uses the test plans to build a complex model of your entire application. This model takes account of hundreds of data points for every element within the UI. 50 test plans can be modeled in just a day, and once complete, each test can be run on any browser without modification.
Visual testing and failure analysis
Analyzing the results of Functionize tests couldn’t be easier. For every test step, ML Engine captures before, during, and after screenshots, highlighting any unexpected result on the screen. This approach has two big advantages. First, anyone can verify that the test is doing what it is meant to. Second, anyone can instantly see any test failure and can then drill in to see more details (including the history of previous runs for that test).
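The sketch below shows the general idea of capturing evidence around each test step, using plain Selenium and a hypothetical run_step helper. It is only a conceptual illustration of before/after screenshots, not how Functionize implements its capture.

```python
# Conceptual sketch only: capture evidence around every step so that anyone
# reviewing a run can see what the page looked like before and after each
# action. Functionize does this automatically; this helper just shows the idea.
from selenium import webdriver


def run_step(driver, step_name, action):
    """Take screenshots around a single test step (hypothetical helper)."""
    driver.save_screenshot(f"{step_name}_before.png")
    action(driver)
    driver.save_screenshot(f"{step_name}_after.png")


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        run_step(driver, "open_home", lambda d: d.get("https://example.com"))
    finally:
        driver.quit()
```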
Self-healing tests
Script-based tests break whenever element selectors change. For instance, when you restyle or move an element. Our tests are different. Elements are selected using machine learning and a complex set of descriptors that act like a fingerprint. Even when you change the UI, the element can still be identified. This means Functionize tests are self-healing. In turn, this means your continuous testing isn’t interrupted by the need for test maintenance.
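Conceptually, fingerprint-based matching looks something like the sketch below: each element is described by many attributes, and the best-matching candidate on the current page is chosen even if its id or class has changed. The attributes, weights, and threshold here are invented for illustration; ML Engine uses far richer data than this toy example.

```python
# Conceptual sketch only: instead of one brittle selector, describe each
# element with many attributes (a "fingerprint") and pick the candidate on
# the current page that matches it best. Attributes and weights are invented.
def similarity(fingerprint, candidate):
    """Score how closely a candidate element matches a stored fingerprint."""
    weights = {"tag": 1.0, "text": 3.0, "aria_label": 2.0, "near_text": 2.0}
    score = 0.0
    for key, weight in weights.items():
        if fingerprint.get(key) and fingerprint.get(key) == candidate.get(key):
            score += weight
    return score


def locate(fingerprint, candidates, threshold=4.0):
    """Return the best-matching element, even if its id or class changed."""
    best = max(candidates, key=lambda c: similarity(fingerprint, c))
    return best if similarity(fingerprint, best) >= threshold else None


# Example: the button's id and class changed, but it is still recognisable.
stored = {
    "tag": "button",
    "text": "Checkout",
    "aria_label": "checkout",
    "near_text": "Order total",
}
page = [
    {"tag": "a", "text": "Home", "aria_label": None, "near_text": None},
    {"tag": "button", "text": "Checkout", "aria_label": "checkout", "near_text": "Order total"},
]
assert locate(stored, page) is page[1]
```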
Some changes are more fundamental: they affect the functionality or introduce ambiguity into the test. Here, test failures may not show up until many steps later in the test. In these cases, our Root Cause Analysis engine identifies the most likely cause of the failure. It then uses the history of that test to come up with likely fixes, which it tests. These are ranked and presented so you can choose the best one. You can select it with one click, and the change will be learned and propagated through all your tests.