Research shows that software testing can account for as much as 50% of the overall cost of software development. By definition, testing consumes enterprise resources while adding no functionality to the application; indeed, it is the product’s existing functionality that is under scrutiny.
If regression testing reveals a new error introduced by a code revision, a new cycle of revision begins. This cost strongly motivates the engineering of automated testing agents with ever greater artificial intelligence capacity, efficiency, and speed.
Today most enterprise labs require engineers to write testing scripts, and their range of technical skills must equal that of the developers who coded the original app under test. This overhead in the quality assurance process grows along with the increasing complexity of software products; current methods can only be replaced by systems of increasing intelligence. Moreover, new apps increasingly contain machine learning and AI functionality that will be impossible for exclusively human testers to evaluate comprehensively. Logically, artificial intelligence will increasingly be required to test systems which themselves contain intelligence, in part because the array of input and output possibilities is bewildering.
The purpose of this article is to explore the recent use of AI methods in automated software testing, and in particular in regression testing, which is an ideal target for AI and autonomous testing algorithms because it can make use of assertion data gathered during previous test cycles. By its very nature, regression testing generates its own dataset for future deep learning applications.
The most repetitive software testing occurs as regression testing, whose object is to verify that previously tested modules continue to function predictably following code modification, and to guarantee that no new bugs were introduced during the most recent cycle of enhancements to the app under test. In large measure, this procedure consists of generating test input and monitoring output for anticipated results and failures. Current AI methods such as classification and clustering algorithms rely on just this type of repetitive data to train models that forecast future outcomes accurately. First, a set of known inputs and verified outputs is used to define features and train the model. A portion of the dataset, with known inputs and outputs, is reserved for testing the model: those inputs are fed to the algorithm and the output is checked against the verified outputs in order to calculate the model’s accuracy. If the accuracy reaches a useful threshold, the model may be used in production.
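The train-then-validate workflow described above can be sketched in a few lines of Python. Everything here is illustrative: the feature vectors stand in for encoded test inputs, the labels for verified outcomes, and the 0.90 threshold is an arbitrary placeholder rather than a figure from any specific testing tool. A simple nearest-neighbour classifier is used in place of a real model.

```python
import random

random.seed(42)

# Synthetic stand-in for records gathered during regression testing:
# each feature vector encodes a test input; the label is the verified
# outcome (1 = behaved as expected, 0 = failed).
records = []
for _ in range(200):
    x, y = random.random(), random.random()
    label = 1 if x + y < 1.0 else 0   # toy decision boundary
    records.append(([x, y], label))

# Reserve a portion of the dataset for testing the model.
split = int(0.8 * len(records))
train, test = records[:split], records[split:]

def predict(features):
    """1-nearest-neighbour: answer with the closest known example's label."""
    nearest = min(train, key=lambda r: sum((a - b) ** 2
                                           for a, b in zip(r[0], features)))
    return nearest[1]

# Feed the held-out inputs to the model and check against verified outputs.
correct = sum(predict(f) == label for f, label in test)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")

# Only promote the model to production use past a useful threshold.
THRESHOLD = 0.90
ready = accuracy >= THRESHOLD
```

In a real pipeline the records would come from captured test inputs and their verified outputs, and the classifier would be whatever model the team has adopted; the split/score/threshold structure stays the same.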
In automating software testing procedures, standard machine learning and more specifically deep learning methods, such as convolutional neural networks, support vector machines, reinforcement learning techniques, and Markov decision processes, can be trained with data generated by cycles of user input and other user gestures, combined with the corresponding output of the app under test. For this purpose, the gestures of a tester performing regression testing become edges in a neural network, while the page elements become nodes. Once the measured outcome is qualified in a supervised training model, it can be applied to predict outcomes for future regression testing cycles, as well as to generate new test cases autonomously. The value of training deep learning models to forecast user input and system output is incremental, growing increasingly accurate as data accumulates with each test cycle.
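The nodes-and-edges idea can be illustrated with a much simpler stand-in for a neural network: a first-order transition-frequency (Markov-style) model over recorded gesture sequences. The gesture names and sessions below are hypothetical examples, not drawn from any real product.

```python
from collections import Counter, defaultdict

# Hypothetical gesture sequences recorded during earlier regression cycles.
# Page elements act as the nodes; recorded gestures are the edges between them.
sessions = [
    ["open_login", "type_user", "type_pass", "click_submit", "view_dashboard"],
    ["open_login", "type_user", "type_pass", "click_submit", "view_dashboard"],
    ["open_login", "type_user", "click_submit", "view_error"],
]

# Count observed transitions: a first-order Markov approximation of the
# gesture graph described above.
transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def predict_next(gesture):
    """Forecast the most likely next gesture seen in past test cycles."""
    following = transitions[gesture]
    return following.most_common(1)[0][0] if following else None

print(predict_next("type_pass"))   # the only recorded successor is click_submit
```

A deep learning model plays the same role at scale, learning which sequences of gestures and page-element states are likely, and flagging outputs that diverge from what past cycles established.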
Ideal test automation would guarantee the elimination of errors from software by testing every possible input and user gesture. In reality, however, current testing methods only reveal errors arising from limited test plans. Intelligent testing has the potential to expand test input generation to cover virtually all possibilities, in a fraction of the time required by manually scripted test cases. Another limitation of current automated regression testing methods is the time dependency of error emergence: a tested app may perform accurately for months before a cumulative error or anomalous input reveals a serious bug, and such a lurking fault may lead to severe financial loss. The comprehensive testing made possible by artificial intelligence offers greater confidence in the elimination of functionality flaws, and increased assurance of a predictably excellent customer experience.
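The gap between a limited scripted plan and generated input coverage is easy to demonstrate. The function under test, its fault, and the input ranges below are all contrived for illustration; the point is only that a handful of anticipated cases can miss a fault that a broad sweep of generated inputs exposes immediately.

```python
import random

def apply_discount(price, percent):
    """Toy function under test, with a lurking fault for large discounts."""
    if percent > 90:   # contrived bug: overshoots and yields a negative price
        return price - price * (percent + 10) / 100
    return price * (100 - percent) / 100

# A manually scripted plan checks only a few anticipated cases...
scripted_cases = [(100.0, 10), (50.0, 0), (200.0, 50)]
assert all(apply_discount(p, d) >= 0 for p, d in scripted_cases)

# ...while generated inputs sweep far more of the input space.
random.seed(7)
failures = [
    (p, d) for p, d in
    ((random.uniform(1, 500), random.randint(0, 100)) for _ in range(10_000))
    if apply_discount(p, d) < 0   # a negative price is a real defect
]
print(f"generated inputs exposed {len(failures)} failing cases")
```

Intelligent test generation goes further than this uniform random sweep, learning from past cycles which regions of the input space are most likely to hide faults.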
The emergence of machine learning-based automated regression testing has demonstrated many fruitful new methods for adapting the data collected during regression testing into big datasets, which can then be used to build systems that self-heal and autonomously create their own new test cases. Mutation analysis is one such method: a fault-based technique that generates polymorphic versions of the system under test (SUT). Test sets are applied to each new program version to determine whether the test plan can distinguish the new version from the original. Many experimental methods are on the horizon, but machine learning methods show the greatest promise for application today.
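A minimal sketch of mutation analysis follows, assuming a toy system under test and a classic operator-swap fault model. Real mutation tools rewrite source or bytecode; here the mutants are built by hand to keep the example self-contained.

```python
import operator

def price_with_tax(price, rate):
    """Original system under test (toy example)."""
    return price + price * rate

def make_mutant(op):
    """Build a polymorphic version of the SUT with one operator swapped."""
    def mutant(price, rate):
        return op(price, price * rate)
    return mutant

mutants = {name: make_mutant(op) for name, op in
           [("plus_to_minus", operator.sub), ("plus_to_times", operator.mul)]}

# The test set: apply it to each mutant and see whether the plan can
# distinguish (kill) the mutant, i.e. tell it apart from the original.
test_cases = [(100.0, 0.2), (50.0, 0.0)]

def killed(mutant):
    return any(mutant(p, r) != price_with_tax(p, r) for p, r in test_cases)

score = sum(killed(m) for m in mutants.values()) / len(mutants)
print(f"mutation score: {score:.2f}")
```

A mutation score below 1.0 signals that the test plan cannot tell some mutants apart from the original, which is exactly the weakness that generated test cases aim to close.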
Among the diverse techniques under exploration today, artificial neural networks show the greatest potential for adapting big datasets to regression test plan design. Multi-layered neural networks are now trained on the software application under test, at first using test data which conforms to the specification; as cycles of testing continue, the accrued data expands the test potential. After a number of regression test cycles, the neural network becomes a living simulated model of the application under test. As new versions of the application evolve, so does the neural network simulation. As regression testing continues, the neural net gains increasing intelligence about that evolution, and therefore increased accuracy in forecasting every aspect of development: dynamic page elements, polymorphic inputs, user behavior and gestures, visual completion, and every output type ever captured in the neural net’s database. Ultimately, the artificially intelligent regression testing neural network becomes the perfect complementary emulation of the SUT, enabling a view of every aspect of the app, including performance metrics.
Essentially, neural networks of various types can be trained to simulate the application under test and the user’s interaction with it simultaneously. In terms of integrating app and test suite, this is a developer’s panacea when it works, and another abyss of lost resources when it does not. In general terms, an artificial neural network is well suited to approximating continuous deterministic functions. While the entire test plan can be executed, covering every aspect of black box, functional, and performance testing, the data aggregated for training the model flows naturally from regression testing: it is the repetition after code changes, with modified parameters yielding new input and output, that provides the neural network with enough data to increase its accuracy. This is how the testing system learns, self-heals, and eventually becomes autonomous.
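The claim that a neural network can approximate a continuous deterministic function can be seen in miniature. The sketch below trains a tiny one-hidden-layer tanh network, written in plain Python, to fit the toy target f(x) = x² on [-1, 1]; the layer size, learning rate, and epoch count are arbitrary choices for the demonstration, not tuned values.

```python
import math
import random

random.seed(1)

# Target: a continuous deterministic function the network should approximate.
f = lambda x: x * x
xs = [i / 10 for i in range(-10, 11)]

# One hidden layer of tanh units.
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - f(x)) ** 2 for x in xs) / len(xs)

initial = mse()
lr = 0.05
for _ in range(2000):                 # stochastic gradient descent
    for x in xs:
        y, h = forward(x)
        err = y - f(x)
        b2 -= lr * err
        for j in range(H):
            grad_pre = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_pre * x
            b1[j] -= lr * grad_pre

final = mse()
print(f"mean squared error: {initial:.4f} -> {final:.4f}")
```

The same principle, scaled up enormously in data and model size, is what lets a trained network stand in as a simulation of the application under test.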
While it may seem a nebulous prospect to imagine a program learning to test your apps without additional scripting, it is as inevitable as speech recognition and natural language processing appeared to be a few years ago. Methods similar to those behind biometrics and face recognition are now poised to transform automated testing the way GPS and the accelerometer changed the phone. Enterprise and QA team members responsible for the development pipeline should be prepared for an earthquake with the arrival of a new technology called Functionize. In practical terms, Functionize is revolutionizing the automated software testing world.