Why Testing Automation Hasn’t Reduced the QA Cycle

Testing automation is now an engineering concern. The culture of QA is devolving from straightforward validation and verification into a second-level development complex in which testers need software engineering skills. This is a problem because it is neither desirable nor efficient to limit testing capability to developers and engineers. The current situation in QA is a fuzzy DevOps boundary where engineers script complicated test cases using a bevy of disparate tools such as Cucumber and Jenkins, glued together with ad hoc languages like Gherkin. These scripted test cases perform regression testing effectively, but at extraordinary cost. So fragmented is the tooling landscape that Anaconda-like distribution packages have appeared, claiming to integrate everything into a single platform. Amalgams of open source tools such as Cypress claim to be end-to-end; in reality they are a patchwork of tools targeting various phases of testing. How did software testing get so complicated?

Coding is required

Groovy and JavaScript coding are now standard requirements for BDD regression test scripting, part of the substantial task of building test cases. Although scripting produces an automated test, the scripting itself is a technical tedium rivaling other forms of development. The technical skill requirements of QA engineers are a cost and complexity overhead that most enterprises now strain to accommodate. What is needed is a new wave of intelligent testers equipped with smart tools that do not require scripting. Functionize supplies this resource today: a truly smart platform capable of learning. Functionize empowers nontechnical staff to generate comprehensive test suites without scripting because it learns how to test your application. Now you can staff QA with intelligent people who apply true automation testing. Your most talented engineers shouldn't be confined to maintaining their tests with legacy tools.
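To make the scripting burden concrete, here is a minimal sketch (plain JavaScript, no framework) of the glue code that BDD frameworks such as Cucumber require underneath every "business-readable" Gherkin step. The step text and the `Given` helper here are hypothetical stand-ins, simplified from what a real framework provides; the point is that each step still hides engineer-written code.

```javascript
// Registry of step definitions: every Gherkin step needs one of these,
// hand-written and hand-maintained by a QA engineer.
const steps = [];

// Register a step pattern with its implementation function.
function Given(pattern, fn) {
  steps.push({ pattern, fn });
}

// Replay a Gherkin line by matching it against registered patterns.
function runStep(text, world) {
  for (const { pattern, fn } of steps) {
    const match = text.match(pattern);
    if (match) return fn(world, ...match.slice(1));
  }
  throw new Error(`Undefined step: "${text}" -- an engineer must script it`);
}

// The business-facing step reads naturally, but the function behind it is code.
Given(/^the cart contains (\d+) items$/, (world, n) => {
  world.cartCount = Number(n);
});

const world = {};
runStep("the cart contains 3 items", world);
console.log(world.cartCount); // 3
```

Any step that has not been scripted throws "Undefined step" — which is exactly where the nontechnical tester gets stuck.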

How Testing Automation Became So Complicated

But how did we cross this Rubicon? How did QA slip down this rabbit hole? After the widespread adoption of Agile, QA was suddenly the slowest runner in the relay – the continuous delivery relay, that is. Gating and releases queued at QA's door. The solution to one problem often reveals new problems, and that is the case with Agile and QA. The success of Agile development teams created new pressure on the gating and release management of software updates and revisions. In a military-style hurry-up-and-wait, Agile enabled rapid software construction and envisioned equally rapid gating, release, and delivery. But QA testing, versioning, and deployment were not prepared to run at Agile speed, and a new bottleneck arose in the workflow.

The Purpose of DevOps and the Era of QA Engineers

The point of DevOps is to embrace and automate all phases of software development, including the implementation pipeline: continuous integration, continuous testing, and continuous deployment. DevOps envisions boosting QA's speed and efficiency to match Agile by scripting test cases. This scripting is the second-level development complex mentioned above; it is commonly called automation testing. The boost in speed originally looked feasible when developers could script tasks like deployment to virtual machines, load testing, and building in containers like Docker. Engineers can code unit and regression testing on both server and client with Node.js. And it does accelerate testing procedures; that is the conundrum – it works. The problem is that you now need engineers to test new code. This is at once the success and the undoing of DevOps.

Why is testing another dev phase?

But wait a minute… If DevOps was supposed to integrate everything, why did testing become yet another development phase? Because widespread automation testing tools contain no intelligence. The vast majority of testing frameworks need developers to program them! Tools look smart to coders because coders know how to use them; that is the unintended consequence. Tools like Selenium are great if you are a programmer or a developer. Let's look at a great idea that is a technical failure.

Microsoft Coded UI is supposed to record and replay tests. Undoubtedly the intention was to create a dashboard capable of testing a user interface by recording and replaying tests. Ideally, this would be useful to non-engineering testers. But there is a bug in the idea.

You can’t replay the same old test if the code has changed, and there is no reason to test the code unless it changes. Therefore, Coded UI needs an engineer to edit test scripts before they can be reused in regression testing.
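The failure mode can be shown in a few lines. This is an illustration, not actual Coded UI or Selenium output: the recorded locator, the element ids, and the toy page snapshots are all hypothetical, but the mechanics are the same — the recorder pins the exact identifier it saw at capture time, and a routine rename breaks the replay.

```javascript
// A recorded test stores the literal locator the recorder saw.
const recordedLocator = "btn-submit-v1";

// Simulated page snapshots before and after a routine code change.
const pageBefore = { buttons: ["btn-cancel", "btn-submit-v1"] };
const pageAfter  = { buttons: ["btn-cancel", "btn-submit-v2"] }; // id renamed

// Replay: look up the recorded locator in the current page.
function replay(page, locator) {
  return page.buttons.includes(locator) ? "clicked" : "ELEMENT NOT FOUND";
}

console.log(replay(pageBefore, recordedLocator)); // "clicked"
console.log(replay(pageAfter, recordedLocator));  // "ELEMENT NOT FOUND"
// An engineer must now edit the script before the regression test can run again.
```

The recorded script is frozen at capture time; the application is not.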

Only for engineers?

So, Coded UI works, but only for engineers. A non-technical tester can certainly record a test, as with Selenium or any other playback tool, but they can't replay that test and rely on it after a code change. The script created by Test Builder will need revision, or the test will have to be recorded again. In that case, there is no automation in the system at all. And that was the whole point, right? If tools need manual scripting, we cannot call it automation testing. We need to put smart testing tools in the hands of intelligent testers.

Software updates change things in subtle ways that even a brilliant coder cannot always anticipate, so the test needs to change. Here the core problem with DevOps is revealed: DevOps requires engineers to script test cases. Business people generally cannot script a test in Gherkin, even though it is supposedly a "business-facing" language. If a tester records a scenario with Selenium, the auto-generated script will very likely need revision in subsequent test cycles. Those scripts can be edited and customized, but a developer with coding skills is required for the task. This looked like a reasonable solution originally, but it has now spiraled into a cloud of complicated automation tools – effectively one automation tool for every technology! And the complexity grows every day.

Unintended Consequences

The monolithic unintended consequence of DevOps was the creation of a new second-level development complex: developers in QA. Fortunately, there is now a solution. Functionize solves the backlog of problems created unintentionally by the success of Agile. Functionize runs as a lightweight browser plugin that observes and learns as you test the user interface of your web app. Functionize learns from the assertions you create in the test cycle. But instead of generating scripts for engineers to edit later, Functionize knows how to update and revise its own code. Functionize removes the engineer from QA. We put smart testing in the hands of intelligent people and liberate engineers to focus their creative energy on development.

Although DevOps intended to reduce development cycles, it actually created a new development phase and installed it in QA. This is not aligned with business objectives. Now there is a truly intelligent alternative that solves the unintended consequence of DevOps: Functionize ensures the integrity of testing while delivering us from the scripting quagmire. We do this in part with a novel patented machine learning technology our data scientists call Adaptive Event Analysis.

How Adaptive Event Analysis shortens the QA Cycle

Functionize brings a new technique to intelligent automation testing. Adaptive Event Analysis is a self-healing function for test cases: our machine-learning modules learn to self-correct by observing events and assertions in previous test cases and comparing them to new events in evolving scenarios. This breakthrough reduces test maintenance and is an innovation in machine learning as applied to automation testing of software.

Before Functionize, state-of-the-art testing systems assumed stationarity. A stationary process is a stochastic process whose parameters, such as mean and variance, do not change over time. Evolving scenarios in a testing environment do not satisfy these conditions. Enter Functionize. Functionize's AEA builds autoregressive integrated moving average (ARIMA) models, which adapt to functional changes of a website. No longer is analysis done in a stationary manner; Functionize dynamically adapts to the software platform under test.
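The contrast between a stationary baseline and an adaptive one can be illustrated with a toy example. This is not Functionize's actual algorithm — the page-load series and thresholds below are invented, and a simple exponentially weighted moving average stands in for a full ARIMA model — but it shows why a fixed expectation fails on drifting data while an adaptive estimate tracks it.

```javascript
// Page-load times drift upward mid-series (e.g. after a deploy).
const pageLoadMs = [100, 102, 99, 101, 160, 161, 159, 162];

// Stationary assumption: a baseline fixed once, never updated.
const staticBaseline = 100;

// Adaptive estimate: exponentially weighted moving average,
// re-weighted toward recent observations as they arrive.
function ewma(series, alpha = 0.5) {
  let est = series[0];
  for (const x of series.slice(1)) est = alpha * x + (1 - alpha) * est;
  return est;
}

const adaptive = ewma(pageLoadMs);
const latest = pageLoadMs[pageLoadMs.length - 1];

console.log(Math.abs(staticBaseline - latest) > 20); // true: static model now flags every run
console.log(Math.abs(adaptive - latest) < 20);       // true: adaptive estimate has caught up
```

The static model would raise a false alarm on every test run after the drift; the adaptive one settles on the new normal and keeps its alarms meaningful.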

In addition, Functionize builds Long Short-Term Memory (LSTM) models, a type of recurrent neural network capable of forecasting test case events. Test anomalies can then be identified as outliers in model simulation, and self-healing test cases become possible.
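The forecast-then-flag idea works like this sketch. Again, this is only an illustration of the concept: a window mean stands in for the learned LSTM forecaster, and the durations and tolerance are hypothetical. An observation is anomalous when its residual against the forecast is an outlier.

```javascript
// Stand-in forecaster: mean of the recent window.
// (An LSTM would learn this mapping from event history instead.)
function forecastNext(history) {
  const window = history.slice(-3);
  return window.reduce((a, b) => a + b, 0) / window.length;
}

// Flag an observation whose residual against the forecast exceeds tolerance.
function isAnomaly(history, observed, tolerance = 30) {
  return Math.abs(observed - forecastNext(history)) > tolerance;
}

const durations = [200, 205, 198, 202, 201]; // past event durations (ms)

console.log(isAnomaly(durations, 204)); // false: within forecast tolerance
console.log(isAnomaly(durations, 520)); // true: outlier, a candidate for self-healing
```

Flagged outliers are where a self-healing system would investigate whether the application changed rather than failing the test outright.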


Sign Up Today

The Functionize platform is powered by our Adaptive Event Analysis™ technology, which incorporates self-learning algorithms and machine learning in a cloud-based solution.