AI Software Testing: Making QA ‘Smarter’

The first part in a blog series to promote greater clarity about what’s possible right now with AI in software testing.

August 13, 2018
Tamas Cser

Part One of a Three-Part Series

No sci-fi required: Artificial Intelligence -- Machine Learning, Machine Vision, and related technologies -- can dramatically improve your QA and software testing today. (But here’s the catch: the reality is both more powerful and far less sexy than you might think.)

Recent blogs and articles -- some causing buzz in the automated testing community -- do a good job of conveying the excitement that we at Functionize, along with other researchers and developers in the field, feel about the importance of AI in software testing and quality assurance. Unfortunately, many of these articles focus on higher-order, still largely theoretical AI, and thus fail to identify near-term opportunities and low-hanging fruit. In doing so, they actually understate AI’s practical value in making better software today.

I’ve written this blog series in hopes of promoting greater clarity about what’s possible right now with AI in software testing. In it, I want to outline key concepts, name and define the AI disciplines involved, show which kinds of problems AI is suited to solve (and which are better solved by other means), and lay the groundwork for a more practical, results-oriented, hype-free conversation about what AI can do for QA and testing right now -- and where we at Functionize, and the broader software testing community, should take it next.

AI: Making Computers (Seem) Smart

AI, or Artificial Intelligence, is the body of theory, practice, tools, and techniques that lets software act “smart.” Purists will argue that real AI is limited to code emulating a learning or reasoning process. But I’d argue for a broader definition that includes code that merely seems smart as well: applications that are especially responsive, intuitive, easy to use, or unexpectedly helpful for users.

Under this less-constrained definition, a lot of really good (but definitely non-magical) software best practices -- from responsive UI to website personalization to helpful daemon services on your mobile phone (e.g., the ones that skim your inbound email for airline itineraries and pop up reminders of your flight times) -- qualify as (low-order) AI.
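
To make that last example concrete, here’s a toy Python sketch of rule-based, “seems smart” helpfulness -- a few lines that skim an email body for a flight and surface a reminder. It’s my illustration, not any product’s actual code; the pattern, fields, and message format are invented.

```python
import re
from datetime import datetime

# Toy illustration of "seems smart," rule-based helpfulness. The pattern
# and message format are hypothetical -- real assistants use far richer
# parsers and itinerary schemas.
FLIGHT_PATTERN = re.compile(
    r"Flight\s+(?P<flight>[A-Z]{2}\d{2,4}).*?"
    r"Departs?\s+(?P<when>\d{4}-\d{2}-\d{2} \d{2}:\d{2})",
    re.IGNORECASE | re.DOTALL,
)

def skim_for_itinerary(email_body: str):
    """Return a flight reminder if the email looks like an itinerary."""
    match = FLIGHT_PATTERN.search(email_body)
    if match is None:
        return None  # Nothing recognized; stay quiet rather than guess.
    when = datetime.strptime(match.group("when"), "%Y-%m-%d %H:%M")
    return f"Reminder: flight {match.group('flight')} departs {when:%b %d at %H:%M}."

print(skim_for_itinerary(
    "Your booking is confirmed. Flight UA1234 departs 2018-08-20 09:15."
))
```

Nothing here learns or reasons, yet to the user it feels helpful -- which is exactly the point.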

Building on this idea, the first place ‘AI’ can help in software testing is when makers of test platforms and frameworks do a good, insightful job of tool-building: when they create tools that are powerful but not overwhelming -- tools that require little training to use effectively, and that can be used by many different kinds of people, including those who aren’t trained in the formal arcana of QA engineering or who perhaps lack any training in software development. The sketch below gives a feel for what that can look like.
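
As a sketch of the idea only -- a hypothetical plain-language interface, not Functionize’s actual API -- here is what such a tool might let a tester write, with the framework translating each phrase into low-level browser automation behind the scenes:

```python
# A sketch of the idea only -- a hypothetical interface, not any vendor's
# actual API. A non-engineer writes intent in plain language; the framework
# maps each phrase to low-level browser automation behind the scenes.
PLAIN_LANGUAGE_TEST = [
    "open the checkout page",
    "enter 'jane@example.com' in the email field",
    "click the 'Place order' button",
    "verify that the confirmation message appears",
]

def run_step(step: str) -> None:
    # Hypothetical dispatcher: a real framework would resolve each phrase
    # to element locators and actions, and keep those mappings stable as
    # the UI under test changes.
    print(f"executing: {step}")

for step in PLAIN_LANGUAGE_TEST:
    run_step(step)
```

The value is in the abstraction: the person writing the test thinks in terms of intent, and the tool absorbs the arcana.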

AI pioneer Alan Kay, in a brilliant answer to a question posted on Quora about his famous earlier statement that “LISP is the greatest single programming language ever designed,” reminded readers that certain very significant engineering tools -- tools like calculus that embody deep insights about the universe -- effectively raise the IQ of everyone who subsequently uses them. Kay summarizes this notion succinctly in the phrase “POV (Point of View) = 80 IQ points.”

Absolutely -- constrained-definition, for-real-smart AI is one set of disciplines that can be brought to bear in making test frameworks IQ-expanding in this way. But ‘real’ AI isn’t the only thing that matters when building better testing tools. ‘Fake’ AI (read: good usability design) matters too! Ultimately, the goal is not to make testing frameworks smarter (because, really, who cares?), but to make tools that help human beings be smarter, so they can test better and faster, and make better software.

This is central to our mission at Functionize, and we’re doing it well enough that, in many cases, customers with no prior software testing experience and no engineering background can (with a little training) create, maintain, and run software testing programs.

In my next blog, I’ll explain how Machine Learning can be harnessed to create QA software that adapts “intelligently” to the changing state of software under test.