In 2017, machine learning became a worldwide buzzword, and it now seems that a product offering can garner attention only if it is touted as being capable of machine learning. Although artificial intelligence / machine learning (AI/ML) technology has been employed in the software development industry for at least two decades, we've come to the point at which machine learning is commonly marketed as a complete solution, rather than as merely one factor in a solution.
There is no shortage of folks who tout the promise of artificial intelligence and machine learning as all-inclusive, pointing us to the day when we'll watch a fully intelligent software application autonomously and exhaustively test other software apps. But many industry leaders have serious doubts. We suspect that you, dear reader, also have some reservations.
To most, machine learning is a black box. The problem is that too many users, and many buyers, are essentially clueless about how it works. Lacking this understanding, they are incapable of properly evaluating its function or value. It may surprise you to know that too many product vendors don't fully understand the ML features of their own products. They can't demonstrate how their results are reached, they overstate the benefits, and they find themselves in the embarrassing position of providing entirely unsatisfactory explanations to software and QA professionals.
Compounding all of this is the reality that AI/ML technology is still a long way from simulating human levels of intelligence. We are nowhere near the dream of achieving a human-machine unification that futurists believe will help us realize our fullest potential.
In the past few years, various attempts at machine learning (ML) adoption have disrupted a number of products and services to some degree. But this disruption doesn't equate to a valuable technological advancement. Here are a few recent headlines that exhibit the increasing skepticism, tentative assent, and tempered enthusiasm throughout the industry:
Gartner dubs machine learning king of hype
Cutting Through The Machine Learning Hype
Do you need AI? Maybe. But you definitely need technology that works
The time is well-nigh for ML product vendors to buck up and prove their worth. Otherwise, let’s all simmer down and calmly resolve to navigate toward testing automation that truly adds value to software development efforts.
In May of 2016, a “self-driving” Tesla Model S was involved in a fatal crash in Florida. The car was traveling down a road in Autopilot mode when a large tractor-trailer truck turned left in front of the vehicle. The Tesla continued underneath the trailer at a speed high enough that its entire roof was sheared off. Obviously, the “autonomous” Tesla Autopilot entirely failed to recognize a very large object, resulting in the death of Joshua Brown.
Soon after the incident, Tesla published a response, expressing condolences and outlining the relevant Tesla safety procedures. Most importantly, Tesla stated that Autopilot is disabled by default, that the driver must take action to enable it, and that the driver must expressly confirm understanding that the technology is provided as a beta-phase delivery. After Autopilot is enabled, on-screen warnings declare that it is a driver-assist feature: the driver must keep a hand on the steering wheel at all times and maintain control of the vehicle.
There are clear analogies between this approximation of autonomous driving and the current state of software development and QA test automation.
For many years, testing tool vendors have made various promises about ever-higher degrees of test automation. For many months now, these and other vendors have been promising to bring machine learning to QA. But the facts don't support the claims: few such vendors can point to measurable business outcomes arising from their automation efforts.
Well into the 21st century, the most common software testing products still rest on foundations of old technology. This is surprising, since application and enterprise architectures continue to evolve. It is now very rare, for example, to find a vendor or development team that is building or maintaining a client/server application, or releasing software on discrete, quarterly cycles. It would prove difficult to find a testing team that is given an entire quiet month of testing prior to product launch. Shoehorning new functionality into an old platform doesn't make for a good solution, and it often adds complexity that increases costs while decreasing efficiency.
As the application changes under continuous development, it becomes increasingly difficult to keep the test scripts synchronized with it. On non-trivial applications, many teams find that it can be easier and quicker to create new tests than to maintain existing ones. Not only does this bloat the test suite, but more false positives appear as the development team presses forward. Like the application code, the new scripts are susceptible to defects, and defects in the scripts are likely to cause additional false positives or disrupt test execution.
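The fragility described above can be illustrated with a toy sketch. This is plain Python standing in for a real UI driver, and the selector names are hypothetical; the point is only to show how a script coupled to implementation details reports a failure even though the feature itself still works:

```python
# Hypothetical illustration: a test script coupled to a specific UI selector.
# When the app renames that selector, the unmaintained script fails even
# though the feature is intact -- a false positive for the testing team.

def find_element(page, selector):
    """Minimal stand-in for a UI driver's element lookup."""
    if selector not in page:
        raise KeyError(f"element not found: {selector}")
    return page[selector]

# Version 1 of the app: the script's selector matches the page.
page_v1 = {"#submit-btn": "Submit"}
assert find_element(page_v1, "#submit-btn") == "Submit"  # test passes

# Version 2 renames the element id; the feature is unchanged,
# but the script written against version 1 now breaks.
page_v2 = {"#submit-button": "Submit"}
try:
    find_element(page_v2, "#submit-btn")
    script_broke = False
except KeyError:
    script_broke = True

assert script_broke  # the stale script reports a failure that isn't real
```

Every release that touches the UI forces a choice between repairing scripts like this one or writing new ones, which is exactly how test suites bloat over time.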
The latest releases of most software products rest on architectures drastically different from those of even a few years ago, and the technology mix has grown larger and more complex. The industry continues to move swiftly away from client/server and mainframe systems toward cloud computing, APIs, microservices, and a vast universe of mobile applications and the Internet of Things (IoT).
At least two primary challenges face the community of testing professionals as we seek to get forward traction on automation:
Although many companies still maintain some waterfall development processes, there is a clear trend toward short iterations and smaller delivery packages. Release frequency has increased from quarterly to weekly or even daily. This release-cycle compression puts great strain on testing teams that need days or weeks to prepare the environment and the test data.
In response to the need for shorter release cycles, many teams work to shift some of the testing upstream. Effectively, the developers take on more responsibility for ensuring quality code, which reduces the burden on the testers and yields more reliable code earlier in the pipeline, giving the release a chance to ship on time. But developers typically lack the necessary skills, or sufficient time, to perform end-to-end testing.
The availability of open-source testing tools such as SoapUI and Selenium has been both beneficial and detrimental. Typically, an open-source testing tool is built to solve a particular problem for a specific type of user. Selenium, for example, is now a very popular tool for testing browser and web interfaces. Though Selenium is fast and agile, it doesn't support comprehensive testing across apps, databases, APIs, mainframes, and mobile interfaces. While it's true that most applications today feature a web browser UI that will require regular testing, a browser or web API interface is only a small fraction of the many components in a complex business process. SoapUI and other API testing tools share the same limitation.
Clearly, software testing must improve. These challenges cannot be squarely addressed by continuing to use legacy tools and processes. Disruptive methodologies such as Continuous Integration and Delivery, DevOps, and Agile development are propagating across many industry segments. As this movement continues, software testing will become more central in making data-driven decisions when managing software releases. To continue making steady progress, organizations must acquire technologies that cultivate Continuous Testing. Otherwise, innovation will remain shackled to cumbersome, now-ineffective legacy testing tools.
Before bringing this article to a close, let's consider the elements of a solid test automation strategy worth striving for in the near future.
Many professionals fully realize the importance of software testing, but perhaps don't have the time to stop and think of a way forward beyond conventional testing practices. Lured by the hype of how machine learning might boost software testing, many companies have begun investing in test automation before conducting a thorough analysis of what is truly best. Functionize has laid out a compelling vision of how an intelligent test agent can autonomously eliminate QA testing sinkholes.