AI Software Testing: Machine Learning

Machine Learning in AI software testing is a tool for extracting important features from fuzzy and fast-changing information.

August 20, 2018
Tamas Cser

Part Two of a Three-Part Series

Machine learning lets testing software extract important features from fuzzy, fast-changing information, exploiting a basic insight: everything is data.

Machine learning is at the leading edge of much of today’s most exciting research in AI, data mining, optimization, speech processing, and related fields. And it’s a cornerstone of how Functionize puts AI to use in software testing.

Machine learning is a field of computer science concerned with building and using software that self-adjusts (learns) in response to input data and, in some cases, to supervisory inputs. As it learns, such software becomes more capable of extracting significant features from further inputs and using them to classify, cluster, rank, detect anomalies (or, conversely, identify and reject false positives), or make predictions. It can also perform several of these tasks simultaneously, orchestrating them into complex behaviors (e.g., driving a car in traffic).
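To make one of those tasks concrete, here is a minimal anomaly-detection sketch in Python using scikit-learn’s IsolationForest. The page-load timings and feature layout are invented for illustration; this is not any particular product’s implementation.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical training data: each row is one known-good page load,
    # each column a per-stage timing in milliseconds.
    rng = np.random.default_rng(0)
    normal_loads = rng.normal(loc=[120.0, 40.0, 300.0],
                              scale=[10.0, 5.0, 25.0],
                              size=(200, 3))

    # A new load whose first stage is wildly slow.
    suspect_load = np.array([[480.0, 40.0, 300.0]])

    model = IsolationForest(random_state=0).fit(normal_loads)
    print(model.predict(suspect_load))  # [-1] means "anomalous"

The pattern generalizes: represent each observation as a numeric vector, fit on known-good examples, and let the model score new ones.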

We’ll offer more examples below, but for now, the most important thing is to understand that machine learning both enables and depends on a point-of-view shift. Instead of viewing and trying to process inputs in logical and semantic terms (the way interpreters, compilers, lexical analyzers, parsers, and the like do), you process them as samples, streams, pools, lakes, or other masses of raw data, at rest or in motion.
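A toy example of that shift: rather than parsing markup for meaning, you can reduce it to raw counts that any learning algorithm can consume. The Python sketch below is hypothetical and deliberately crude, counting tags instead of modeling them.

    import re
    from collections import Counter

    def dom_as_vector(html, vocab):
        # Treat markup as data: count opening-tag occurrences rather than
        # interpreting the document's structure or semantics.
        tags = re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9-]*)", html)
        counts = Counter(t.lower() for t in tags)
        return [counts[t] for t in vocab]

    vocab = ["div", "span", "button", "input", "img"]
    print(dom_as_vector("<div><span>hi</span><button>go</button></div>", vocab))
    # -> [1, 1, 1, 0, 0]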

Everything is Data

Machine learning lets you (in fact, makes you) look at everything as data. And it turns out this can be a remarkably useful way of looking at all kinds of things involved in software development. Examples include:

  • The dynamically changing contents of a browser’s Document Object Model (DOM). Many web pages today, even ones that aren’t terribly interactive, employ multiple frameworks (Bootstrap, Angular, jQuery, etc.), all hitting the DOM in different ways. It isn’t necessarily efficient to logically model and abstract everything that JavaScript is doing just to figure out whether it’s doing what the developers want it to do.
  • Complex content rendering in a browser window. Is the best way to flag rendering issues to parse the DOM, squeeze that information through a representation of the W3C box model, and understand it logically?

In both cases, the answer is actually ‘yes’: Functionize does a great deal of semantic and state-aware analysis of the DOM, render modeling, and related work. But for certain operations, it also engages machine learning, which adds huge efficiencies. In general terms, we use machine learning to analyze web-page and element renders as a time series of changes to the DOM, correlating this with a machine-vision view of the same process, in which significant features are extracted and rendered as a ‘filmstrip’ video.
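As a rough sketch of what such a time series might look like in code (an illustrative simplification, not Functionize’s implementation), one could summarize each filmstrip as the per-frame magnitude of visual change:

    import numpy as np

    def render_signature(frames):
        # Summarize a render as a time series: mean absolute pixel change
        # between consecutive grayscale 'filmstrip' frames.
        return np.array([
            np.mean(np.abs(b.astype(float) - a.astype(float)))
            for a, b in zip(frames, frames[1:])
        ])

A stable page settles toward zero; a page that keeps repainting, or that renders differently than it used to, produces a visibly different curve.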

[Screenshot: machine learning-based visual element recognition]

Reducing Test Maintenance

This makes it conceptually fast and simple to flag emergent rendering issues, by comparing against time series and features extracted from previous, known-good renders. It also lets us reject irrelevant information (perhaps resulting from side effects of how JavaScript utility packages and MVC frameworks do their business) and quickly zero in on likely causes: a software change, a different browser or build, a performance issue with the production servers, an issue with the CDN. And the ‘filmstrip,’ which is perfectly human-understandable, remains available as a resource that lets people explore issues further or validate correct page and site behavior.
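In the same illustrative spirit, comparing a new render’s signature against a set of known-good signatures could be as simple as a per-step z-score test. Again, this is a hypothetical sketch, not the production algorithm:

    import numpy as np

    def looks_regressed(candidate, baselines, tolerance=3.0):
        # Flag a render whose time-series signature falls far outside
        # the spread of known-good runs at any time step.
        stack = np.stack(baselines)        # shape: (runs, steps)
        mean = stack.mean(axis=0)
        std = stack.std(axis=0) + 1e-9     # avoid division by zero
        z = np.abs(candidate - mean) / std
        return bool(np.any(z > tolerance))

Everything upstream of this check, extracting the signatures, aligning them in time, and deciding what counts as a baseline, is where the real machine-learning work lives.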