Poor requirements capture
To start with, I would assume the underlying interest is in understanding which activities are the most time-consuming, with the ultimate goal of improving efficiency and effectiveness – which in theory translates into improved agility. With that in mind, the crux of the matter is not to reduce the time spent on useful tasks but to minimize wasted time.
I’ve been involved with system testing at different levels and at different stages of the development life-cycle throughout my career: as a developer, calibrator, and tester, and also as the person responsible for integrating and managing all of these activities.
In my experience, time wastage almost always boils down to poor requirements capture and definition at the start of the project or program – i.e., inadequate “left-shifting,” with too little time and too few resources spent upfront to ensure that the goals are clearly set, well documented, and communicated. It is also very common that too little is invested in use-case scenario analysis. (A minimal sketch of what I mean by “well documented” follows below.)
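To illustrate – and this is purely my own sketch, with invented structure and field names – a “well documented” requirement carries a measurable acceptance criterion and a declared verification method, so that tests can be derived from it mechanically rather than by interpretation:

```python
# Minimal sketch of a documented, testable requirement record.
# The structure and field names here are illustrative, not from
# any particular standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    req_id: str      # unique, traceable identifier
    statement: str   # a single, unambiguous "shall" statement
    acceptance: str  # measurable pass/fail criterion
    method: str      # "test", "analysis", "inspection", or "demo"

good = Requirement(
    req_id="SYS-REQ-042",
    statement="The system shall respond to a brake request within 250 ms.",
    acceptance="Measured response time <= 250 ms at all operating points.",
    method="test",
)

# Contrast with the kind of wording that causes trouble downstream:
# "The system should respond quickly." -- not measurable, not testable.
```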
A combination of these shortcomings then sets off a ripple effect. Development proceeds on the basis of dubious requirements, and verification-level testing may not flag any issues because the results match the (flawed) requirements at that system level. The cracks are then likely to show up only at the system-level validation stage. Even worse, they may show up after delivery/release! (This is particularly tragic when the customer is the “public.”)
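To make that concrete with a hypothetical sketch (every name and number here is invented): a verification test written against a mis-captured requirement passes cleanly, while the validation-level need it was supposed to protect still goes unmet.

```python
# Hypothetical illustration: a verification test can pass against a
# mis-captured requirement while the real need goes unmet.
# All names and numbers are invented for the sake of the example.

# What the requirements document says (captured poorly, never challenged):
REQ_MAX_RESPONSE_MS = 500   # "system shall respond within 500 ms"

# What use-case scenario analysis would have revealed (the actual need):
ACTUAL_NEED_MAX_RESPONSE_MS = 200

def measured_response_ms() -> float:
    """Stand-in for a real measurement; returns a fixed value here."""
    return 350.0

def test_verification_level():
    # Passes: the result matches the written requirement...
    assert measured_response_ms() <= REQ_MAX_RESPONSE_MS

def test_validation_level():
    # ...but fails once the system is judged against the real
    # operating context, i.e., at the validation stage or in the field.
    assert measured_response_ms() <= ACTUAL_NEED_MAX_RESPONSE_MS
```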
In parallel with the above, when requirements are poorly captured, defined, and communicated, there is an increased tendency toward scope change and/or scope creep. The result, amongst other things, is uncertainty surrounding the test procedures themselves throughout the life-cycle. In turn, this can create tension between developers and testers as the expected system outcomes become increasingly unclear and subjective, or “open to interpretation.”
This last point ties in with my next one: testing procedures should be based on principles. What I mean is that both the ‘how’ and the ‘why’ of the test in question need to be understood. This ensures that test procedures can be adapted effectively and efficiently in light of changed or new system requirements. Requirement changes cannot always be avoided, and they are not always due to bad practice – the system’s context changes over its own life-cycle, sometimes even during development. Good systems engineering practice can reduce this eventuality, but that is a story for another day.
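As a rough sketch of what I mean (the framework choice and all values are my own, purely illustrative): keep the invariant – the ‘why’ – in the test logic, and keep the specifics – the ‘how’ – in a single traceable requirements table. A requirement change then becomes a one-line data edit rather than a rewrite of the test.

```python
# Sketch of principle-based test design (all names/values invented).
# The 'why' is the invariant in the test body; the 'how' (current
# thresholds and operating points) lives in one data table.
import pytest

# Traceable requirements table: ID -> (operating point, limit in ms).
REQUIREMENTS = {
    "SYS-REQ-041": ("idle",      100.0),
    "SYS-REQ-042": ("full_load", 250.0),
}

def measured_response_ms(operating_point: str) -> float:
    """Stand-in for a real measurement on the system under test."""
    return {"idle": 80.0, "full_load": 210.0}[operating_point]

@pytest.mark.parametrize("req_id", REQUIREMENTS)
def test_response_budget(req_id):
    # Invariant (the 'why'): response time never exceeds the budget
    # allocated to this operating point by the requirement.
    operating_point, limit_ms = REQUIREMENTS[req_id]
    assert measured_response_ms(operating_point) <= limit_ms, req_id
```

If a requirement is tightened or a new operating point is added, only the `REQUIREMENTS` table changes; the principle the test encodes stays intact.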