Test creation is evolving. As legacy challenges persist and new challenges arise, test creation must become easier and more responsive. Many QA experts agree that the changes now underway in the software industry at large are more volatile than any previous shift.
Except for small, experimental software projects, test creation must evolve because modern application development keeps pressing forward with steadily increasing complexity, adoption of new technology, and the need for compatibility with an ever-growing number of environments and devices. Performance is another major factor: users now expect sub-second response times, and many will abandon an application after only a few failures.
Software testing is now an engineering concern. Many QA cultures see the need to advance well beyond simple validation and verification toward test automation. But conventional automation requires considerable technical skill and specialization in several tools, which in many teams effectively limits testing to a few individuals. On any software team, the efforts of development and engineering staff must be well spent, and their time is best spent on product development. A team member who has the skills to script in Selenium faces obvious headaches and could better leverage their time with solutions like Functionize.
From the perspective of DevOps, QA is often seen as nebulous. This is largely because test engineers spend much of their time on tedious script creation and manual updates to existing scripts, using a mix of tools such as Cucumber and Serenity. Coding and debugging in Selenium is slow and painful, and the process is not readily understood by developers or DevOps staff. Moreover, while some teams manage to employ scripts for effective regression testing, it often comes at a relatively high cost.
Testing isn’t an exciting topic in most CI/CD discussions, because the primary focus of DevOps is to build and maintain the integration-delivery pipeline. There’s no point in being resentful that, in many cases, DevOps simply neglects the roles of QA and testing. Like many outside QA, many DevOps engineers still assume that exploratory, ad-hoc, and manual testing is sufficient. But this doesn’t square with the complexity, adaptability, and performance expectations that come with an investment in a continuous delivery pipeline.
It is becoming critically important, in this era of increasing software and systems complexity, to reduce manual testing effort and automate as much as possible. If your team is developing complex software and looking to ramp up on CI/CD, then it’s quite impractical to move forward without implementing test automation to a significant extent.
The way forward
From now on, the best teams will manage most of their software QA efforts with autonomous testing. They will do this most efficiently by adopting genuinely intelligent tools that go beyond conventional scripting and recording. Today, Functionize is positioning its natural language processing (NLP) engine at the heart of a highly intelligent testing solution. Continue reading to see why conventional approaches to test creation and management will eventually give way to NLP.
Manual testing is best for exploration
Manual and exploratory tests depend entirely on a human tester to navigate a testing path. Though an experienced tester does not proceed in free form, the test can only move at the speed of manual execution, which is often hampered by interruptions. Compared with automated tests, another potential disadvantage is that manual testing typically lacks specificity and repeatability. When testing new features, one key advantage is that a human tester can think like an end user, unlike a script, which follows only one path.
To many, especially developers, DevOps staff, and managers, the most frustrating aspect of manual/exploratory testing is the cost and effort it requires. As software grows in complexity, unscripted testing can seem like, well, nonsense, even though it is wide-ranging and variable rather than truly ad hoc. On the other hand, ask senior automation engineers and QA professionals about the value of such testing and you’re likely to find that scripted versus exploratory testing is a perpetual debate.
Scripting tools are popular but tedious to manage
A test script follows a definitive, repeatable path that has been laid out by a human—usually a tester. The script includes documented test cases, each with steps and pass/fail criteria. There is never any variation with a conventional testing script.
Selenium is a widely popular test scripting platform. But conventional script-based approaches require frequent updates to the script libraries to keep pace with high-momentum, highly dynamic integration and delivery processes. Many teams grapple with a large number of automation false positives, which require additional time-consuming and tedious maintenance. This is a major reason that some teams abandon their automation initiatives entirely.
Many Selenium tests simply take too long to execute, which makes it impractical to run a complete, meaningful regression suite on each build. In such environments, there’s no chance for immediate feedback on how recent changes affect the user experience, and that directly undermines the goals of CI/CD.
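To make the fixed-path nature of a conventional test script concrete, here is a toy Python sketch. It is not Selenium itself; the page model, element IDs, and steps are all invented for illustration. The point is the shape of a scripted test: an ordered list of steps with hard-coded locators and pass/fail checks, which breaks the moment the UI changes underneath it.

```python
# A toy model of a conventional scripted test. The "page" is a dict standing
# in for the DOM; element IDs and expected values are invented.

PAGE = {
    "login-email": "input",
    "login-password": "input",
    "submit-btn": "button",
    "welcome-banner": "Welcome back!",
}

# Each documented step: (description, locator, expected value).
STEPS = [
    ("email field is present", "login-email", "input"),
    ("password field is present", "login-password", "input"),
    ("submit button is present", "submit-btn", "button"),
    ("banner text matches", "welcome-banner", "Welcome back!"),
]

def run_script(page, steps):
    """Walk the steps in order; a missing locator or mismatch fails the step."""
    results = []
    for description, locator, expected in steps:
        passed = page.get(locator) == expected
        results.append((description, passed))
    return results

print(all(passed for _, passed in run_script(PAGE, STEPS)))  # True

# A routine UI refactor renames one element, and the same script now fails:
renamed = dict(PAGE)
renamed["signin-btn"] = renamed.pop("submit-btn")
print(all(passed for _, passed in run_script(renamed, STEPS)))  # False
```

Every such rename forces a manual edit to the script library, which is exactly the maintenance burden described above.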
Non-intelligent recorders are difficult to maintain
As applications increase in complexity, it becomes necessary to automate as much testing as possible. One option is to move beyond basic scripting and simple recorders to intelligent recorders, which until recently represented the state of the art in test automation.
Many development teams have had to pull back and fix or replace their existing automation frameworks. Many of those failures or resets have something in common: the teams tried to employ non-technical or slightly technical staff to automate, in order to avoid lengthy ramp-up and keep costs down. Typically, the first foray into automation is to experiment with record-and-playback tools.
At first, record and playback seems very beneficial, since it enables your testers to automate without learning how to code. The tools are fairly easy to use and provide immediate results. But there are some non-trivial disadvantages.
Perhaps the most intense pain point is the high maintenance cost. Most of these tools store procedures within a script or other container. Not only do testers have to acquire new skills if they want to modify these procedures, but even minor changes are likely to force further changes in other, related tests; otherwise, it might be simpler to record again. All of this is tedious and time-consuming, and it undermines the purpose of the automation effort.
Another problem is limited test coverage. Like a scripting tool, an unintelligent record-and-playback tool only follows the exact steps that were recorded: nothing more, nothing less. Effectively, most such tools provide only a little more value than scripting, and that added value comes in the form of basic navigation testing against the UI only. While important, navigation testing is not high-value automation.
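The "nothing more, nothing less" behavior can be sketched as a toy playback loop. The event names and selectors below are invented; the point is simply that an unintelligent tool replays exactly the events it captured and exercises only that single path.

```python
# A toy recorded session: raw UI events captured in order. Event names and
# CSS-style selectors are invented for illustration.
RECORDING = [
    ("click", "#nav-products"),
    ("click", "#item-42"),
    ("type",  "#qty", "2"),
    ("click", "#add-to-cart"),
]

def playback(recording, handler):
    """Replay every recorded event, in order, exactly as captured."""
    for event in recording:
        handler(*event)

log = []
playback(RECORDING, lambda action, selector, *args: log.append((action, selector)))
print(len(log))  # 4: exactly the captured events, no alternate paths
```

Any user journey that was not recorded, such as an empty cart or an invalid quantity, is simply never tested.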
Smart recorders are better, but not the best
Some record-and-playback tools are more intelligent. These tools promise a great deal and appear simple: follow some basic instructions, and you can probably get meaningful test results within a few hours. Some of these tools make it easy for those outside of QA to record tests and achieve results. After a short time, however, your testing team will likely run headfirst into a wall.
There is one very sticky wicket to contend with in some “smart” recorders. As you record test cases, you’ll realize that it would be efficient to reuse specific action sequences in other test cases. However, extracting those sequences can be a daunting technical task, especially if the recorder is generating code scripts underneath.
Another major problem is that the recorder you choose is likely to capture a huge number of actions. Though a recorder might be “intelligent”, it may not record what you expect, and it can be challenging to edit the playback sequence and decide which actions to remove. Every recorder has to infer what you mean when you perform a specific action, and many get it wrong far too often. You’ll know something is wrong, but because you don’t know how the recorder’s internals work, you can’t fix it.
In stark contrast to that blindness, as we’ll discuss in the next section, Functionize has built an NLP engine that significantly reduces test creation time and makes it much more logical and intuitive. We then process that input with truly intelligent AI/ML to create tests and enable automatic, seamless test maintenance.
Highly autonomous testing with Adaptive Language Processing (ALP)
The time is ripe for a test automation solution that makes it easy to compile, edit, manage, and automate all of your test cases. Functionize has made this possible by adding its Adaptive Language Processing (ALP) engine to its comprehensive testing solution. Functionize’s NLP makes it much easier to create test cases and modify them later, in a format that virtually anyone can understand.
The Functionize ALP engine ingests a pre-formatted document containing test case plans written as simply and naturally as you might already have them in a Microsoft Word or Excel document. For example, you could write:
“Verify that all currency amounts display with a currency symbol.”
Anyone on the development team can write statements like these. Indeed, most companies already have a written collection of statements that articulate use cases and user journeys. Many such statements can be placed into a file for import into the Functionize NLP engine, which rapidly and intelligently processes every statement in the file and generates each of the steps for the test case. Once the import completes, the resulting test cases are easy to modify.
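To illustrate what a statement like the currency example above ultimately has to verify, here is a hypothetical Python sketch of the kind of check such a statement could compile down to. The symbol set, amount format, and sample strings are all invented for illustration; this is not how Functionize's engine actually works internally.

```python
import re

# Hypothetical check behind "Verify that all currency amounts display with a
# currency symbol." The symbol set and amount pattern are assumptions.
CURRENCY_SYMBOLS = "$€£¥"

# Matches an amount like 19.99 or 1,234.56 that is NOT preceded by a
# currency symbol (or by a digit/comma, which would mean mid-number).
BARE_AMOUNT = re.compile(
    r"(?<![" + CURRENCY_SYMBOLS + r"\d,])\d{1,3}(?:,\d{3})*\.\d{2}\b"
)

def currency_amounts_labeled(texts):
    """Return True if no displayed amount appears without a currency symbol."""
    return not any(BARE_AMOUNT.search(text) for text in texts)

print(currency_amounts_labeled(["Total: $1,234.56", "Fee: €9.99"]))  # True
print(currency_amounts_labeled(["Total: 1,234.56"]))                 # False
```

The value of an NLP engine is that the team writes only the plain-English statement, and checks of this sort are generated and maintained for them.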
Our Adaptive Event Analysis (AEA) engine can then run all of the test cases with live user data, and you can import a comprehensive test plan in the same way. Functionize can employ the tests built automatically with ALP in visual testing, cross-browser testing, mobile testing, and performance/load testing. You’ll get quick feedback, since Functionize has the power of a massively scalable cloud infrastructure at its disposal. And if you combine the NLP engine with our Root Cause Analysis tools, you’ll also enjoy a significant reduction in diagnostic and maintenance effort.
Functionize combines all phases of testing into a smooth, seamless testing experience. When NLP is combined with the AI and machine learning that are integral to the Functionize platform, the result is accurate, meaningful, and flexible testing that enables greater delivery momentum and solid support for agile development. This is an incredibly powerful set of tools in a one-stop solution that is ready to empower better UX for your customers.
Optimizing the user experience is a critical component of any company’s success. The Functionize autonomous testing platform captures and reports all elements of this experience, enabling companies to deliver a superior experience, reduce customer churn, and elevate customer satisfaction, all of which improve your bottom line.