Which Part of the Testing Process is the Most Time Consuming?

To do testing properly, engineers have to imagine how the final product should work and focus on finding potential gaps and conflicts. Learn here how test engineers approach the testing process.

January 6, 2021
Tamas Cser

Read on to learn how test engineers weigh in on this question.

Elias Hoummadi, Mechanical Engineer at Titoma Design for Manufacturing.

Testing for certifications

We work with electronics. Something that consumes a lot of our testing time is certification testing.

EMC (Electromagnetic Compatibility) to be exact. 

If, by the time we get to testing for electromagnetic compatibility, the emissions are still not within limits, then we're in for a delay of several months before we can release the product to our clients.

We need to find the source of the noise and, most likely, redesign the board.

That's why from the start, we like to know what certification we'll need to pass so we can "Design for Certification" from the very beginning. 

 

David Vella is an independent Systems Engineering Consultant drawing on more than a decade of experience to help customers in various industries make sense of their data, improve their processes, and perform better. Find him at David Vella.

Poor requirements capture

To start with, I would be inclined to believe that one is interested in understanding which activities are the most time consuming with the ultimate goal of improving efficiency and effectiveness - which in theory translates to improved agility. With that in mind, the crux of the matter is not reducing the time spent on useful tasks but rather minimizing wasted time.

I've been involved with system testing at different levels and at different steps of the development life-cycle throughout my career. I've been a developer, calibrator, and tester, and I've also been responsible for integrating and managing all of these activities.

In my personal experience, time wastage almost inevitably boils down to poor requirements capture and definition at the start of the project or program, i.e., inadequate "left-shifting" and therefore not enough time and resources spent upfront to ensure that the goals are clearly set, well documented, and communicated. Additionally, it is very common that not enough is invested in use-case scenario analysis.

A combination of these shortcomings then sets off a ripple effect. Development activities become based on dubious requirements, and verification-level testing may not even flag any issues because the results match the requirements at that given system level. Eventually, the cracks are likely to show up only during system-level validation testing. Even worse, they may show up after delivery/release! (This is particularly tragic when the customer is the "public.")

In parallel with what I've covered above, when the requirements are poorly captured, defined, and communicated, there is an increased tendency toward scope change and/or scope creep. The result of this, amongst other things, is uncertainty surrounding the test procedures themselves throughout the life-cycle. In turn, this can create tension between developers and testers as the expected system outcomes become increasingly unclear and subjective, or "open to interpretation."

This last point ties in with my next one. The testing procedures should be based on principles. What I mean is that both the 'how' and the 'why' of the test in question need to be understood. This ensures that the test procedures can be adapted effectively and efficiently in light of changing or new system requirements. Requirement changes cannot always be completely avoided and are not always due to bad practice - the system context changes during its own life cycle, in some cases even during development phases. Good systems engineering practices can help reduce this eventuality, but that is another story for another day.
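As a minimal sketch of what keeping the 'why' next to the 'how' can look like in software, the hypothetical test below ties its pass/fail threshold to a named requirement and records the rationale in its docstring. The requirement ID, time budget, and function names are illustrative assumptions, not from the article; the point is that when the requirement changes, only the named constant moves and the test logic stays intact.

```python
# Hypothetical requirement SYS-REQ-042: a login attempt must complete within
# 2 seconds under nominal load, because the UX budget for the full flow is 3 s.
LOGIN_LATENCY_BUDGET_S = 2.0


def measure_login_latency() -> float:
    """Stand-in for a real measurement against the system under test."""
    return 1.4  # placeholder value for illustration


def test_login_latency_within_budget():
    """Verify SYS-REQ-042: login latency stays inside the UX time budget."""
    latency = measure_login_latency()
    assert latency <= LOGIN_LATENCY_BUDGET_S, (
        f"Login took {latency:.2f}s, exceeding the {LOGIN_LATENCY_BUDGET_S}s "
        "budget defined by SYS-REQ-042"
    )
```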

Thomas Grazynski, Quality Assurance Engineer at brainhub.eu.

Analysis of project requirements and design

During the last decade of working as a software tester, I learned that it's worth spending most of the testing time on the analysis of project requirements and design. To do the testing properly, engineers have to imagine how the final product should work and focus on finding potential gaps and conflicts. Thanks to this approach, errors can be found and fixed early in the process, which saves time and effort compared to doing it at more advanced stages of product development, e.g., after releasing the product.

The cost of bug fixing grows over time, as fixing improper or unexpected behavior in a working app gets more difficult. Therefore, I always insist on involving the whole development team in the requirements and design analysis, preferably even before the development starts.
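One way this analysis-first approach can be made concrete is to draft executable acceptance checks from the written requirements before any implementation exists. The sketch below uses hypothetical names and a made-up acceptance criterion; its only point is that writing the checks early surfaces gaps, here the question of what an empty cart should cost, while the feature is still on paper.

```python
import pytest

# Hypothetical acceptance criterion: "The order total equals the sum of item
# prices plus a flat 5.00 shipping fee."
SHIPPING_FEE = 5.00


def calculate_order_total(item_prices: list[float]) -> float:
    """Placeholder for the not-yet-implemented production function."""
    raise NotImplementedError


@pytest.mark.xfail(reason="feature not implemented yet", raises=NotImplementedError)
def test_total_is_items_plus_flat_shipping():
    assert calculate_order_total([10.00, 2.50]) == 10.00 + 2.50 + SHIPPING_FEE


@pytest.mark.xfail(reason="gap found during analysis: spec is silent on empty carts",
                   raises=NotImplementedError)
def test_empty_cart_behaviour_is_undefined_in_spec():
    # Writing this case forces the question before any code is written.
    calculate_order_total([])
```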

This is a crowdsourced article. Contributors are not necessarily affiliated with this website and their statements do not necessarily reflect the opinion of this website, other people, businesses, or other contributors.