Overall, most bugs are small and easy to fix. But even small problems require a set of tedious steps to remedy, from the moment a tester finds the bug to the point at which the developer applies the fix and QA gives the nod. Some bugs are the size of the Amazon river, with all its tributaries: their fixes take weeks of effort and thousands more lines of code. And that's not all. The new code is resubmitted to QA, where it often generates different species of bugs. Rinse and repeat. The project runs late, and everyone is tense.
Many developers are driven by a passion for creating new things, while bug-fixing largely consists of maintaining compatibility with functionality that already exists in older things. Yet the best way to reduce bug-fixing effort is to put more time into testing code further upstream, as it's developed. This is perhaps the most important factor in avoiding the late-buggy-release spiral.
The immediate benefit of good initial quality is that testers find fewer bugs, and those bugs are likely to be lower severity. Fixing them will probably not require a short-notice firefight. Risky shortcuts that might generate other bugs become unnecessary, and developers can devote more creative effort to building new features instead of fixing bugs. Build more, fix less. Testers will have more new features to validate, and fewer bugs to report. What's not to love about that?
A major reason this is so difficult for many teams to achieve is that many developers have an aversion to testing, an aversion that persists until they allow themselves to be persuaded of its importance.
Let's look at just one slice of the programming pie: the biggest tech companies in the world, such as Google, Facebook, Sony, and TransUnion. They believe they hire the smartest programmers in the world. Yet these highly trained and well-paid developers still produce a sizeable mountain of insecure code. The proof of the pudding is in the eating.
Permit your humble servants here at Functionize to put forward a few questions to those who think their code is perfect: How would you know that your code runs perfectly? Did you ever actually test it?
Don’t get us wrong. We love programmers. One thing, in particular, that is missing from the minds of many—including those who know their code isn’t perfect—is a solid appreciation of what it means to actually test software!
“Is it really necessary to test that code? It ran on my computer and functioned perfectly, so we can go ahead and ship it.” Sound familiar?
Let's step back and remind ourselves that the purpose of testing is to minimize risk. The objective of software testing is not simply to find bugs or enhance features. Rather, the aim is to reduce technical and business risk by expressly finding and fixing the defects that would significantly impact the user experience.
Negative impact can come from many occurrences of a small problem, or from a single occurrence of a severe one. Say there is a bug in an accounting product that causes it to freeze each time a user enters a value greater than 10,000. The impact of each occurrence is not large, but the frequency would annoy any user. If, instead, another bug corrupted all data once the cumulative number of file-save operations exceeded 10,000, the impact would be indisputably catastrophic and unacceptable, even though it occurs very infrequently!
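To make the first case concrete, here is a minimal sketch of the kind of boundary test that would catch the "freezes above 10,000" bug before it ever reached QA. The `record_entry` function and the 10,000 threshold are invented for illustration, not a real API:

```python
# Hypothetical sketch: a boundary test for the accounting example above.
# record_entry and the 10,000 threshold are illustrative only.

def record_entry(value):
    """Store one accounting entry as a simple dict."""
    if value < 0:
        raise ValueError("negative entries are not allowed")
    return {"value": value}

def test_entries_above_threshold_do_not_crash():
    # Exercising values just below, at, and well above the suspect
    # boundary is exactly what exposes a "fails above 10,000" defect.
    for value in (9_999, 10_000, 10_001, 1_000_000):
        assert record_entry(value)["value"] == value
```

A buggy implementation that froze or raised on large values would fail this test on the developer's machine, long before a user hit it thousands of times.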
It is impossible to find all the bugs in an application or any other unit of software. Except for the smallest applications, it's impractical to test all possible inputs. The task is not to locate every possible anomaly or defect, but rather to verify the functions within the software against a specification.
Negative impact is best minimized by prioritizing all functions so that the primary focus falls on the areas with the greatest potential for failure or harm, in other words, the areas with the most risk. Decisions can then be made about which tests to run to validate specific functionality. When actual behavior deviates from the requirements specification, the defect is recorded and fixed according to severity. Some bugs will get fixed; low-impact bugs will be documented and remain in the software.
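The prioritization described above can be sketched as a simple impact-times-likelihood ranking. The feature names and scores below are invented for illustration; a real team would derive them from its own specification and defect history:

```python
# A sketch of risk-based prioritization: rank areas by impact multiplied
# by likelihood, then test the riskiest areas first.
# Feature names and scores are illustrative, not from a real product.

features = [
    {"name": "file save",   "impact": 9, "likelihood": 2},
    {"name": "large entry", "impact": 3, "likelihood": 8},
    {"name": "report font", "impact": 1, "likelihood": 5},
]

def risk_score(feature):
    # Higher score means greater overall potential for negative impact.
    return feature["impact"] * feature["likelihood"]

test_order = sorted(features, key=risk_score, reverse=True)
# → large entry (24), file save (18), report font (5)
```

Even this crude scoring makes the trade-off explicit: a frequent annoyance and a rare catastrophe can both outrank a cosmetic defect.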
These complaints about unit testing will likely be familiar, and it's important to consider explicitly why they are unjustified.
Since the code is quite simple, there’s no need for the programmer to test it — Not only does unit testing prevent bugs from swimming downstream, it also improves reliability and stability. Programmer testing tends to promote better design and maintainability.
Unit testing has no net value — Actually, unit testing offers several benefits across the development pipeline. Because many problems are found upstream, they never impact other software components. And with a focus on passing unit tests, a programmer is more likely to build the code with a higher degree of resiliency.
Unit testing hinders productivity — The reality is that productivity increases for the entire team. Time spent testing upstream saves much more time downstream, which, in turn, spares programmers from fixing bugs found by QA. More time can be spent building new features.
Testers should be responsible for unit testing — There has to be some degree of responsibility that rests with the programmers, and they should provide a high degree of assurance, both to themselves and the testers, that the code will function according to specifications. Testers can then focus their efforts on complex test cases in the larger scope of the entire application and the integration with other systems.
The integration tests make unit tests unnecessary — There is a tendency to think that system-level testing will fully cover everything. This is a fallacy of division: what holds for the whole does not necessarily hold for each part. Consider what happens if one of the software components or units is later used in a different application, integration, or context.
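As a counterpoint to the first objection above, even trivially "simple" code can hide a rounding or boundary bug that a few assertions expose in seconds. A minimal sketch, where the discount function and its cents-based convention are invented for illustration:

```python
# "Simple" code still deserves a test. apply_discount is a hypothetical
# one-liner of the sort often waved through as too trivial to test.

def apply_discount(price_cents, percent):
    """Return the price after an integer-percent discount, in cents.

    Integer arithmetic avoids float drift; the discount is rounded
    to the nearest cent before being subtracted.
    """
    return price_cents - (price_cents * percent + 50) // 100

def test_apply_discount():
    assert apply_discount(10_000, 10) == 9_000  # 10% off $100.00
    assert apply_discount(999, 0) == 999        # 0% discount is the identity
    assert apply_discount(1, 50) == 0           # the half-cent rounds up
```

The identity and rounding cases are exactly the ones a naive float-based version tends to get wrong, which is why they belong in the test even when the function looks obvious.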
Beyond preventing the unnecessary waste of QA effort, there are other reasons to perform unit testing on all new code:
Efficiency — Think about the productivity loss in correcting a small coding error, such as a typo, after a developer checks the code in for validation. Compared with the developer catching it immediately, the round trip of the tester finding and reporting the error, and the developer reopening the files, fixing it, and rebuilding, takes far longer. This is surely a terribly inefficient way to fix a simple typo.
Immediate self-improvement — The bugs that are easiest to fix are the ones that never happen. By testing their work immediately after doing it, developers get a better sense of why they make certain common mistakes, and learn to avoid repeating them.
Losing the plot — Code is easiest to fix while the ideas are still fresh in your mind. In complex work like software development, memory fades quickly. If the report-back time for a defect is more than a day or two, a busy programmer may spend considerably more time reacquainting with and researching an issue that would have been obvious right after running a unit test.
Technical insight — Typically, the developer has the clearest view of the technical details of the software, which is most useful in anticipating potential mistakes or omissions. This position and skill make it highly likely that the developer will discover many more problems for the same amount of effort a tester would spend looking for bugs.
The ideal mindset is for developers to consider testers to be equivalent to customers. Professional, quality work products don't contain readily detectable flaws. Since customers expect this, testers can expect it from developers.
There is a minimum set of things to check before declaring code to be QA-ready.
Food for thought: if developers thoroughly check their own work, why bother with QA at all? Because, just as developers have technical knowledge and skills that guide unit testing, they also tend to make assumptions that perpetuate blind spots. It is equally valuable to have another team member, one who thinks differently from the developers, examine the code.
In a successful development team, testing must become everyone's responsibility, to some degree. Tap into the particular strengths and perspectives of both developers and testers. Learning from each other and cultivating a passion for quality products will pay dividends well into the future, because defects can be found and fixed well upstream, where they are much less costly to eliminate.
In many teams, unit testing is optional. But the level of quality is, to some extent, a personal responsibility. When building a product, it is entirely up to developers to decide whether they want to meet a high standard. Or not. Team members who have concerns about quality should participate vigorously in verifying and improving the software.
The amount of effort that goes into quality development says much about the engagement a developer wants with clients and the responsibility taken for delivering high-caliber products. Unit testing may be only one small piece of the overall effort, but its benefits are more than worth it. The aversion many developers have to testing is partly a result of the poor testing tools available to them. Functionize's AI-driven Test Cloud greatly reduces the time a developer needs to spend on testing by eliminating legacy test-automation sinkholes.