No-Script, AI-Powered, Cloud-Based Software Testing

A closer look at Functionize's no-script, cloud-based software testing platform using artificial intelligence: NLP, ML, and machine vision.


August 25, 2018
Tamas Cser


Part Three of a Three-Part Series

AI technologies (Natural Language Processing, Machine Learning, and Machine Vision) enable Functionize to robustly ingest, model, analyze, and reliably emulate human interactions with web applications. Cloud technology helps deliver this intelligence responsively and economically, at scale.

The combination of very high-quality, mature lexical analysis of live web pages and sophisticated machine learning is fundamental to the success of Functionize's no-script approach to test creation. The Functionize platform ingests your written test plan or user journey using our natural language processing engine and outputs tests that are ready to be executed across multiple browser platforms. Functionize can employ these automatically built tests in visual testing, cross-browser testing, mobile testing, and performance/load testing. The resulting test suites are highly durable because we apply machine learning and perception to model the meaning, not just the details, of human behaviors.
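To make the idea concrete, here is a minimal, hypothetical sketch of "plain English in, executable steps out." Functionize's NLP engine is proprietary and far richer than this; the patterns, step phrasing, and action names below are invented purely for illustration.

```python
import re

# Hypothetical, greatly simplified mapping from plain-English test steps to
# structured test actions. This is NOT Functionize's algorithm; it only
# illustrates the general shape of no-script test creation.
STEP_PATTERNS = [
    (re.compile(r'^open (?P<url>https?://\S+)', re.I), 'navigate'),
    (re.compile(r'^type "(?P<text>[^"]+)" into (?P<field>.+)', re.I), 'input'),
    (re.compile(r'^click (?:the )?(?P<target>.+)', re.I), 'click'),
    (re.compile(r'^verify (?:that )?(?P<assertion>.+)', re.I), 'assert'),
]

def parse_plain_english(plan: str) -> list[dict]:
    """Turn a written test plan into a list of structured, executable actions."""
    actions = []
    for line in filter(None, (l.strip() for l in plan.splitlines())):
        for pattern, action in STEP_PATTERNS:
            match = pattern.match(line)
            if match:
                actions.append({'action': action, **match.groupdict()})
                break
        else:
            actions.append({'action': 'unrecognized', 'raw': line})
    return actions

plan = """
Open https://example.com/login
Type "qa-user" into the username field
Click the Login button
Verify that the dashboard is displayed
"""
for step in parse_plain_english(plan):
    print(step)
```

In a real platform, each structured action would then be bound to concrete page elements and executed across the supported browsers, rather than simply printed.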

Plain English Test Creation


Our ALP engine can reproduce complex test steps of any length and avoid the pitfalls of most recorders, which often struggle to accurately capture complex workflows. Even an "intelligent" recorder may not record what you expect, producing tests that need frequent editing, and it can be challenging to edit the playback sequence and decide which actions to remove. Smart recorders also record the wrong things too often.

After our ALP engine outputs a test suite that is ready for execution, the magic continues. When the rendering of a page subsequently changes, Functionize can clearly report the issue, or in some cases adapt dynamically and self-heal. Across many thousands of potential changes per week on large, complex sites, this kind of autonomous cleverness saves real time and money.
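The sketch below shows one conceptual way self-healing can work; it is an assumption-laden illustration, not Functionize's actual model. The idea: when the remembered selector no longer matches, score candidate elements on the changed page against the element's last known fingerprint and re-target the step only if a confident match exists.

```python
# Conceptual self-healing element location (illustrative only, not the
# Functionize algorithm). Elements are represented as plain attribute dicts.
def similarity(fingerprint: dict, candidate: dict) -> float:
    keys = set(fingerprint) | set(candidate)
    matches = sum(fingerprint.get(k) == candidate.get(k) for k in keys)
    return matches / len(keys) if keys else 0.0

def heal_selector(fingerprint: dict, page_elements: list[dict], threshold: float = 0.6):
    """Return the best-matching element on the changed page, or None to report a failure."""
    best = max(page_elements, key=lambda el: similarity(fingerprint, el), default=None)
    if best and similarity(fingerprint, best) >= threshold:
        return best   # adapt: re-target the test step to this element
    return None       # no confident match: surface the change instead of guessing

# The "Submit" button's id changed from 'btn-submit' to 'btn-send';
# tag and text still match, so the step can self-heal instead of failing.
remembered = {'tag': 'button', 'id': 'btn-submit', 'text': 'Submit'}
current_page = [
    {'tag': 'a', 'id': 'nav-home', 'text': 'Home'},
    {'tag': 'button', 'id': 'btn-send', 'text': 'Submit'},
]
print(heal_selector(remembered, current_page))
```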

Everything (in the Cloud) is (Big) Data

In general, the more data you can access and the bigger the training sets become, the better and more robust machine learning and data mining can be. This makes the cloud important to AI: it is where vast lakes of data live, where large numbers of users connect, and where huge compute, storage, and network resources can be brought to bear, scaling up to solve big problems in elegant, economical ways.

Functionize uses the Test Cloud to deliver AI-mediated software testing services at scale. Adaptive Event Analysis™ enables Functionize's platform to strengthen your tests with every subsequent run, so they become more resilient over time. The AEA engine is a sophisticated, data-driven, semi-autonomous client capable of emulating human users: it perceives web pages in ways analogous to how humans perceive them, responds conditionally to stimuli, executes test scenarios reliably and repeatedly, and preprocesses test results for efficient reporting upstream. In addition to machine-learning-based page analysis and machine vision, Functionize can independently and intelligently deal with variables (for example, timeouts, rendering-speed variations, or UI elements that fail to become active in different browser contexts) that would confound simpler test-script execution agents, creating specious errors and consuming the time of human operators and engineers.
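As a small illustration of why this matters, here is a generic wait-and-retry helper of the kind any intelligent execution agent needs so that slow rendering or a not-yet-active element produces a patient retry rather than a specious failure. It is a minimal sketch in plain Python, not Functionize code, and the example condition is invented.

```python
import time

def wait_until(condition, timeout: float = 10.0, initial_delay: float = 0.2):
    """Poll `condition` with exponential backoff until it returns truthy or time runs out."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay *= 2  # back off so slow pages aren't hammered with polls

    raise TimeoutError("condition not met within %.1fs" % timeout)

# Example: pretend the 'Checkout' button only becomes clickable after ~1 second.
start = time.monotonic()
button_clickable = lambda: time.monotonic() - start > 1.0
wait_until(button_clickable, timeout=5.0)
print("button became clickable; proceeding with the test step")
```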

‘AI smart,’ of course, also has its limits. Rather than attempting to simulate people, Functionize offers a ‘Real User QA’ feature that models real user interactions with web pages, creating an enormous, crowdsourced library of end-user ground truth. Information about what users really do can be crunched, analyzed, and replayed across the full range of supported browser, OS, and client-device combinations, enabling rapid, effective QA focused squarely on what your users and customers are doing.
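A toy sketch of the underlying idea follows: aggregate recorded user sessions into a frequency-ranked list of journeys so testing effort follows what users actually do. The session data, journey names, and browser matrix are all invented for illustration; they are not Functionize's data model.

```python
from collections import Counter

# Invented session recordings: each tuple is one user's journey through the app.
sessions = [
    ("home", "search", "product", "add_to_cart", "checkout"),
    ("home", "search", "product"),
    ("home", "search", "product", "add_to_cart", "checkout"),
    ("home", "account", "order_history"),
]

journey_counts = Counter(sessions)
browsers = ["chrome", "firefox", "safari", "edge"]  # assumed replay matrix

# Replay the most common real-user journeys first, across every browser.
for journey, seen in journey_counts.most_common():
    for browser in browsers:
        print(f"replay {' -> '.join(journey)} on {browser} (observed {seen}x)")
```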

The Future of AI in Software Testing

What does all this mean for you, whether you're a QA engineer, a software tester, a developer, or a business leader looking to improve your organization's efficiency, agility, and software quality? Here are some takeaways and trends to consider:

The point of AI in testing is not to replace people. Admittedly, AI-enhanced, automated, cloud-based testing, perhaps amplified with Real User QA, is probably now capable of supplanting a lot of old-school manual testing. But you already know that this kind of testing is fallible and error-prone, dependent on shared languages, shared understanding, and high levels of attention. It is hard work for humans to do, and expensive; so expensive that you've probably outsourced it already. The fact that machines can now do it, and do it better than people, is perhaps not such a tragedy.

AI in software testing increases human insight and bandwidth. The roles of developer, operations/IT specialist, and QA/tester are already starting to fuse. Software development methodologies and techniques (e.g., CI/CD automation, infrastructure-as-code, containers, and microservices) are already powering a cultural and process shift toward more efficient, ‘cloud-native’ application architectures and practices. Given this accelerating trend, the goal is to make all players more effective by granting greater insight, eliminating needless complexity, and reducing the time required to create, maintain, execute, interpret, and act on software test results.

As Alan Kay says: “Point of view is worth 80 IQ points.”