Getting Past the Hype of Autonomous Testing

An overview of challenges associated with autonomous testing including the difficulty of maintaining test scripts, varying software development processes, etc.

February 27, 2018
Geoffrey Shenk

In 2017, machine learning became a worldwide buzzword, and we have now reached the point where a product offering seems to garner attention only if it is touted as being capable of machine learning. Although artificial intelligence / machine learning (AI/ML) technology has been employed in the software development industry for at least two decades, machine learning is now commonly marketed as the complete solution, rather than as only one factor in a solution.

There is no shortage of folks who tout the promise of artificial intelligence and machine learning as all-inclusive, pointing us to the day when we'll watch a fully intelligent software application autonomously and exhaustively test other software apps. But many industry leaders have serious doubts. We suspect that you, dear reader, also have some reservations.

To most, machine learning is a black box. The problem is that too many users, and many buyers, are essentially clueless about how it works. Lacking this understanding, they cannot properly evaluate its function or value. It may surprise you to learn that many product vendors don't fully understand the ML features of their own products. They can't demonstrate the pathways to the results, they gloss over the benefits, and they find themselves in the embarrassing situation of providing entirely unsatisfactory explanations to software and QA professionals.

Far, far away?

Compounding all of this is the reality that AI/ML technology is still a long way from simulating human levels of intelligence. We are nowhere near the dream of achieving a human-machine unification that futurists believe will help us realize our fullest potential.

In the past few years, various attempts at machine learning (ML) adoption have disrupted a number of products and services. But disruption does not equate to valuable technological advancement. Here are a few recent headlines that exhibit the increasing skepticism, tentative assent, and tempered enthusiasm throughout the industry:

Gartner dubs machine learning king of hype: 

https://www.infoworld.com/article/3108429/artificial-intelligence/gartner-dubs-machine-learning-king-of-hype.html 

Cutting Through The Machine Learning Hype 

https://www.forbes.com/sites/valleyvoices/2016/11/16/cutting-through-the-machine-learning-hype/?sh=3dbfae745465 

Do you need AI? Maybe. But you definitely need technology that works: 

https://www.ciodive.com/news/the-skeptics-guide-to-artificial-intelligence/441674/ 

The time is well-nigh for ML product vendors to buck up and prove their worth. Otherwise, let’s all simmer down and calmly resolve to navigate toward testing automation that truly adds value to software development efforts.

A lesson from the push for self-driving automobiles

In May of 2016, a “self-driving” Tesla Model S was involved in a fatal crash in Florida. The car was traveling down a road in Autopilot mode when a large tractor-trailer truck turned left in front of it. The Tesla continued underneath the trailer at a speed high enough that its entire roof was sheared off. The “autonomous” Autopilot entirely failed to recognize a very large object, resulting in the death of the driver, Joshua Brown.

Soon after the incident, Tesla published a response expressing condolences and outlining its relevant safety procedures. Most importantly, Tesla stated that Autopilot is disabled by default, that the driver must take action to enable it, and that the driver must expressly confirm understanding that the technology is provided as a beta-phase feature. Once Autopilot is enabled, on-screen warnings declare that it is a driver-assist feature: the driver must keep a hand on the steering wheel at all times and maintain control of the vehicle.

There are clear analogies between this approximation to autonomous driving and the current state of software development and QA testing automation.

The Challenge of Testing Automation

For many years, testing tool vendors have made various promises about ever-higher degrees of test automation. More recently, these and other vendors have been promising to bring machine learning to QA. But the facts don't support the claims: few such vendors have attained measurable business outcomes from their automation efforts.

Many testing platforms have outdated architectures

Well into the 21st century, the most common software testing products still rest on foundations of old technology. This is surprising, since application and enterprise architectures continue to evolve. It is now very rare, for example, to find a vendor or development team building or maintaining a client/server application, or releasing software on discrete, quarterly cycles. It would be equally difficult to find a testing team that is given an entire quiet month of testing prior to product launch. Shoehorning new functionality into an old platform is not a good solution, and it often adds complexity that actually increases costs while decreasing efficiency.

Test scripts are difficult to maintain

As an application changes under continuous development, it becomes increasingly difficult to keep the test scripts in sync. On non-trivial applications, many teams find that it can be easier and quicker to create new tests than to maintain existing ones. Not only does this bloat the test suite, but more false positives appear as the development team continues to press forward. Like the application code, the new scripts are susceptible to defects, and defects in the scripts are likely to cause additional false positives or interrupt test runs.
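One common way to reduce this synchronization burden is the page-object pattern: element locators live in a single class instead of being copied into every script, so a UI change means one edit rather than dozens. The sketch below illustrates the idea with a hypothetical `FakeDriver` standing in for a real browser driver; the class and method names are illustrative assumptions, not any specific tool's API.

```python
class FakeDriver:
    """Minimal stand-in for a real browser driver, for illustration only."""
    def __init__(self, elements):
        self.elements = elements  # maps CSS selector -> element text

    def find(self, selector):
        if selector not in self.elements:
            raise LookupError(f"no element matches {selector!r}")
        return self.elements[selector]


class LoginPage:
    """Page object: the only place that knows the login form's selectors."""
    USERNAME = "#username"          # if the UI changes, update these
    SUBMIT = "button[type=submit]"  # constants instead of every test script

    def __init__(self, driver):
        self.driver = driver

    def submit_label(self):
        # Tests call this method; none of them embed the selector itself.
        return self.driver.find(self.SUBMIT)


driver = FakeDriver({"#username": "", "button[type=submit]": "Sign in"})
page = LoginPage(driver)
print(page.submit_label())
```

This doesn't eliminate maintenance, but it concentrates it: when a locator breaks, one class changes and the rest of the suite is untouched.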

Software architectures are altogether different

The latest releases of most software products are drastically different from the architectures of even a few years ago. Also, the technology mixture has grown increasingly complex and larger in size. The industry continues to move swiftly away from client/server and mainframe systems toward cloud computing, APIs, microservices, and a vast universe of mobile applications and the Internet of Things (IoT).

At least two primary challenges face the community of testing professionals as we seek to get forward traction on automation:

  • Testing these technologies requires either a high level of technical expertise or a level of business abstraction that avoids low-level details.
  • Different components of the application evolve at varying rates, which tends to desynchronize the process.

Software development processes are quite different

Although many companies still maintain some waterfall development processes, there's a clear trend toward short iterations and smaller delivery packages. Release cycles have compressed from quarterly to weekly or even daily. This compression puts great strain on testing teams that need days or weeks to prepare the environment and the test data.

Ownership of quality assurance is shifting

In response to the need for shorter release cycles, many teams work to shift some of the testing upstream. Effectively, developers take on more responsibility for code quality, reducing the burden on testers and producing more reliable code earlier in the pipeline so a release has a chance of shipping on time. But developers typically lack the necessary skills, or sufficient time, to perform end-to-end testing.

Open-source tools are changing the landscape

The availability of open-source testing tools such as SoapUI and Selenium has been both beneficial and detrimental. Typically, an open-source testing tool is built to solve a particular problem for a specific type of user. Selenium, for example, is now a very popular tool for testing browser and web interfaces. Though Selenium is fast and agile, it doesn't support comprehensive testing across apps, databases, APIs, mainframes, and mobile interfaces. While most applications today feature a web browser UI that requires regular testing, a browser or web API interface is only a small fraction of the many components in a complex business process. SoapUI and other API testing tools share the same limitation.

Where do we go from here?

Clearly, software testing must improve. These challenges cannot be squarely addressed by continuing to use legacy tools and processes. Disruptive methodologies such as Continuous Integration and Delivery, DevOps, and Agile development are propagating across many industry segments. As this movement continues, software testing will become more central in making data-driven decisions when managing software releases. To continue making steady progress, organizations must acquire technologies that cultivate Continuous Testing. Otherwise, innovation will remain shackled to cumbersome, now-ineffective legacy testing tools.

Before bringing this article to a close, let's consider the elements of a solid test automation strategy worth striving for in the near future.

  • Automation process design — It’s important to take care in designing test automation processes. As much as possible, think about how you can automate the entire pipeline, including the project timeline, estimation, testing schedules, and testing environments.
  • Automation architecture — Though scripting knowledge has historically been central to test automation, it’s important to think hard about which automation framework is best for your environment, and to get familiar with newer scriptless frameworks.
  • Automating test case creation and execution — After deciding on the test automation framework, work can begin on building automation test cases. It’s important to prioritize key test cases.
  • Maintenance and monitoring — While test automation tools can provide excellent reports, it’s also important to cross-check them against the execution logs. Log defects into a bug-tracking tool along with screenshots, then use what you find to enhance the test cases and the automation framework.
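The prioritization step above can be made concrete with a simple risk ordering. The sketch below ranks candidate test cases by a score combining how often a feature changes and how severe a failure would be; the fields, weights, and example data are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    change_frequency: int  # e.g., commits touching the feature per month
    failure_severity: int  # 1 (cosmetic) to 5 (revenue-critical)


def prioritize(cases):
    """Return cases ordered by descending risk score (frequency * severity)."""
    return sorted(
        cases,
        key=lambda c: c.change_frequency * c.failure_severity,
        reverse=True,
    )


suite = [
    TestCase("checkout_flow", change_frequency=12, failure_severity=5),
    TestCase("footer_links", change_frequency=1, failure_severity=1),
    TestCase("login", change_frequency=4, failure_severity=4),
]

for case in prioritize(suite):
    print(case.name)  # highest-risk cases first: automate these first
```

Any scoring scheme will do; the point is to make "prioritize key test cases" an explicit, repeatable decision rather than a judgment call buried in a backlog.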

Final Thoughts

Many professionals fully realize the importance of software testing, but perhaps don’t have the time to stop and think of a way forward beyond conventional testing practices. Lured by the hype of how machine learning might boost software testing, many companies have begun investing in test automation before conducting a thorough analysis of what is truly best. Functionize has laid out a compelling vision of how an intelligent test agent can autonomously eliminate QA sinkholes.