Continuous Testing - The Good, The Bad, and the Ugly
What is continuous testing? How does it differ from test automation? How can it minimize business risk? These questions and more in our article!
What is Continuous Testing?
Test automation produces a set of failure/acceptance data points that correspond to product requirements. Continuous testing has a broader scope across more of the development cycle, focuses more on business risk, and provides more insight into the likelihood that a product is going to be shippable. It’s a shift in thinking, and a broadening of processes, in which the stakeholders change the driving question. In CT, it’s no longer sufficient merely to ask (late in the cycle), “Is testing done now?” For teams that can achieve it, it’s far better to get a confident answer to this question:
“With this latest cycle iteration, are we now at the point at which the release candidate has an acceptably low level of risk to the business?”
Continuous testing is a framework for running automated tests—as early as practicable and across the product delivery pipeline—in which the results of these tests quickly provide risk exposure feedback on a specific software release candidate.
The promise of continuous testing is faster delivery of higher quality software.
Generally, the goal is to achieve higher speed with higher quality by moving testing upstream and testing more frequently. It’s easy to ship a software product if testing is minimal, and it’s easy to ship good software if you’ve got a whole year to deliver a feature. Test early, test often, test exhaustively, and get the payoff in higher-quality products that potentially release sooner.
The price for all of this? You’ll need to reconfigure your delivery pipeline. Full-bore continuous testing includes not only code coverage, functional quality, and compliance, but also impact analysis, and post-release testing.
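To make the risk-feedback idea concrete, here is a minimal sketch of a release-gate check. The check names, the stand-in lambdas, and the go/no-go aggregation are hypothetical illustrations, not part of any specific CI tool; in a real pipeline each check would invoke an actual test suite and report its result.

```python
# Minimal sketch of a continuous-testing release gate (hypothetical names).
# Each check returns True on pass; the gate aggregates them into a go/no-go
# signal that answers: "Is this candidate's risk acceptably low?"

def run_checks(checks):
    """Run each named check and collect pass/fail results."""
    return {name: fn() for name, fn in checks.items()}

def release_ready(results, required):
    """The candidate is release-ready only if every required check passed."""
    return all(results.get(name, False) for name in required)

if __name__ == "__main__":
    # Stand-in checks; in practice these would shell out to test runners.
    checks = {
        "unit": lambda: True,
        "smoke": lambda: True,
        "integration": lambda: False,  # simulate a failing integration run
    }
    results = run_checks(checks)
    print("release-ready:", release_ready(results, list(checks)))
```

Because the gate is just a function of test outcomes, it can run at every stage of the pipeline and give the same quick yes/no answer on each build.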
The Need for Continuous Testing
Changes in software development continue to put unprecedented stress on testing teams. At the same time, the complexity of newer technologies and components presents more challenges to achieving test automation with conventional methods and tools.
Extensive, complex application architectures — software tools and technologies continue to become more complex, cloud-connected, distributed, and expansive with APIs and microservices. An ever-increasing number of combinations of innovations, application components, and protocols interact within a single event or transaction.
Frequent releases/continuous builds — DevOps and Agile continue a big push toward continuous delivery, and this has brought the industry to the point at which many applications produce release-ready builds several times per day. This is only possible when significant effort has been put into automating testing and assessing the risk of failure across the product lifecycle. It also means that end-of-cycle testing must have a much shorter duration.
Managing risk — software is a primary business interface, so any application failure translates directly to a failure for the business. A “minor” glitch will have a serious negative impact if it significantly affects user experience. For many software vendors and service providers, application integrity risks are now a critical concern for all business leaders.
How does CT differ from Testing Automation?
We can group the main differences between test automation and continuous testing into three categories: risk, broader coverage, and time.
Minimizing Business Risk
Today, most businesses have not only exposed many elements of internal applications to external end users, they have also built many types of additional software that extends and complements those internal applications. Airlines, for example, provide access to their previously-internal booking systems. They also provide extensions to these systems so that customers can browse, estimate, and book all aspects of a vacation— flights, hotels, rentals, and extra activities. These integrations are proving to be quite innovative—but this also tends to increase the number of failure points.
Major software application failures have brought serious repercussions, to the extent that software-related risks are now high-profile items in many business financial filings. Recent statistics suggest that notable software failures result in an average 4% stock price decline—about $2.5 billion in lost market capitalization. This is a direct hit to the bottom line, so business leaders are putting more pressure on their IT leaders to find a remedy.
Go back to the need for continuous testing: if your test cases haven’t been built to readily assess business risk, then the results won’t provide the feedback necessary to continually assess that risk. Most tests are designed to provide low-level detail on whether requirements/specifications have been met. Such tests give no indication of how much risk the business would take on if the software were released today. Think about this: could your senior management intelligently decide to cancel a release based on test results? If the answer is no, then your tests are out of alignment with your business risk assessment criteria.
Let’s be clear: This is not to suggest that granular testing isn’t valuable. The point here is that the software industry has a long way to go in preventing high-risk release candidates from being sent into the wild.
Even when a company manages to avoid the detriments of large-scale software failures, a supposedly minor defect can still cause major problems. If a user evaluation results in an unsatisfactory experience or fails to meet expectations, there is a real risk that the customer will consider your competitors. There is also the risk of damage to the brand if any user takes their complaints to the news media.
Merely knowing that a unit test fails or an interface test passes doesn’t tell you the extent to which recent app changes will affect user experience. To maintain continuity and satisfaction for the user community, your tests must be sufficiently broad to detect application changes that will adversely affect functionality on which users rely.
Accelerating the Delivery Pipelines
The speed at which organizations ship software has become a competitive differentiator, so a majority of companies are looking to DevOps, Agile, and other methodologies to optimize and accelerate delivery pipelines.
In its infancy, automated testing brought testing innovations to internal applications and systems that were built with conventional, waterfall development procedures and processes. Since these systems were fully under the control of the organization, everything was dev-complete and test-ready at the designated start of the testing phase. With the rise of Agile and DevOps, the expectation is forming in many companies that testing must start very soon after development begins. Otherwise, the user story itself won’t be tested. Rather, it will be assumed to be “done-done” and forgotten amid the intensity that is typical of short-duration sprints (about two weeks).
Some highly-optimized DevOps teams are actually realizing continuous delivery with consistent success. These teams can often deliver releases every hour of the day—or more frequently. Feedback at each step in the process must be virtually instantaneous.
If quality isn’t a critical concern at your company—there is minimal disincentive for rolling back when defects are found in production—then it might be sufficient to quickly run some unit and smoke tests on the release. If, on the other hand, your management and your team have reached the level of frustration that drives you to minimize the risk of releasing defective software to customers, then you might be searching for a way to achieve solid risk mitigation.
For testing, there are a number of significant impacts:
- To be effective in continuous delivery pipelines, testing has to become an integral activity for the entire development cycle—instead of continuing to be seen as a hygiene activity that occurs post-development.
- As much as possible, tests should be built concurrently with development and be ready to execute very soon after the new functions or features are built.
- The entire team should work together to analyze and determine which tests should be run at specific points in the delivery pipeline.
- Each test suite should be configured to run fast enough to avoid any bottleneck in a particular stage in the software delivery pipeline.
- Environment stabilization is important to prevent constant changes from raising false positives.
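The points above can be sketched as a stage-to-suite mapping that keeps the fastest feedback earliest in the pipeline. The stage names and suite groupings below are illustrative assumptions, not a prescribed layout:

```python
# Hypothetical mapping of pipeline stages to the test suites that run there,
# ordered so that the cheapest, fastest checks give feedback first and the
# slower, broader risk assessments run only before promotion or release.

STAGE_SUITES = {
    "commit":  ["unit", "static-analysis"],     # seconds: run on every push
    "build":   ["smoke", "api-contract"],       # minutes: run on every build
    "staging": ["integration", "performance"],  # longer: run before promotion
    "release": ["end-to-end", "compliance"],    # full pre-release assessment
}

def suites_for(stage):
    """Return the test suites configured for a given pipeline stage."""
    return STAGE_SUITES.get(stage, [])

if __name__ == "__main__":
    for stage in STAGE_SUITES:
        print(f"{stage}: {', '.join(suites_for(stage))}")
```

Deciding which suite belongs at which stage is exactly the whole-team analysis the list above calls for; the mapping simply makes that decision explicit and reviewable.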
To adequately realize some of the benefits of CT, a cultural shift must first get underway. Cultural change begins with a change in thinking. It can be helpful to think of testing as a product readiness assessment.
One perspective on continuous testing is to view it as continuous assessment. With a high degree of frequency—all along the pipeline—development and QA staff should be constantly inspecting code and asking: Is it ready, yet? Is it better? Is it worse? While a few product companies may claim to operate well in CD/CT programs, it is wishful thinking for most development teams.
Continuous testing, if achievable, can significantly minimize business risk. It’s important to go up a level and think of continuous testing in strategic terms. A primary strategic goal for any product company is to reduce business risk in releasing applications, such that—at minimum—new code won’t frustrate or alienate customers. Test automation is a tactical activity that contributes to overall continuous testing goals.
For continuous testing, the focus shouldn’t be on unit testing details, proper code formatting, or how many bugs were found. Though all of that is part of the pipeline, the most critical concern in CT is the risk to the business; technical risk is a lesser concern. The guiding questions should always be: Is the product release-ready? Will our customers maintain high levels of satisfaction when they use the updated product?
Ready for more discussion on continuous testing? This is the first in a two-part series on continuous testing. In the next article, we’ll look at the challenges, scope, and pursuit of best-practices in continuous testing.