BLOG

Cultivating the Right Mindset for Successful Test Automation

Test automation offers indisputable benefits, yet many automation initiatives head toward failure because the team doesn’t prepare or plan adequately. Building a test automation strategy that effectively supports both your team’s needs and your business goals can be quite challenging. For many teams, traceability issues, cultural inhibitors, and gaps in scripting-language skills are only some of the tall hurdles that block the way forward.

“The problem is not that testing is the bottleneck. The problem is that you don’t know what’s in the bottle. That’s a problem that testing addresses.” — Michael Bolton

Software is becoming increasingly complex, and there are many tools that promise to help you keep pace with this complexity. The future of testing automation is being built upon a foundation of computer vision, machine learning, and self-diagnostic/self-healing frameworks. Yes, indeed, tools can be immensely helpful—in the right hands. Equally important is how a testing culture continues to pursue and implement best practices that align with modern technologies.

Scripting requires experience and skills

Any product development team that pursues conventional test automation will bog down if its QA engineers do not acquire and maintain the coding skills needed to write automation scripts, which entails learning both scripting languages and their frameworks. Most scripting languages offer at least one testing framework that requires additional expertise (such as pytest for Python and TestNG for Java). While this affords flexibility and opportunity, the added complexity must be managed properly if a team hopes to succeed with its test automation strategy.
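To make the framework expertise concrete, here is a minimal pytest-style sketch. The function under test (`slugify`) and the test names are purely illustrative; pytest discovers any function whose name starts with `test_` and runs its plain `assert` statements.

```python
# Minimal pytest example: tests are plain functions whose names start with
# "test_", and failures are reported from bare assert statements.

def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug (illustrative function under test)."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Test Automation Basics") == "test-automation-basics"

def test_slugify_collapses_extra_spaces():
    assert slugify("  Hello   World  ") == "hello-world"
```

Running `pytest` in the directory containing this file would collect and execute both tests; no boilerplate test-runner code is needed.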

Ideally, a testing team should include some members who take an interest in learning new skills and then convey those skills to the rest of the team. Programming skills are especially useful. Some testers will have the enthusiasm and aptitude to learn the fundamentals of a general-purpose language such as Python, and those fundamentals transfer readily to other scripting languages. Another benefit is that testers who become proficient in coding practices can communicate far more effectively and fluidly with developers. It’s also important to realize that newer solutions can largely replace scripting in your testing environments; Functionize offers an entirely different, yet highly effective, alternative to scripting.

The problem with tossing it over the wall

For many companies, any testing that is transferred to an entirely separate team is likely to be done manually. Attempts to automate in an over-the-wall environment often boil down to end-to-end testing that is cumbersome to maintain. Because end-to-end tests require an environment that closely reflects those of the end users, it isn’t practicable to isolate specific features and components. Testers naturally fall back to manual testing to avoid constantly updating brittle end-to-end test code that breaks with most feature changes.
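The brittleness described above can be illustrated in miniature. In this hedged sketch (the markup and locator functions are invented for illustration, not drawn from any real tool), a test that couples itself to exact page structure breaks after a cosmetic redesign, while a test that targets a stable identifier survives:

```python
# Two page versions: v2 wraps the same checkout button in an extra div.
PAGE_V1 = '<div class="checkout"><button id="buy">Buy</button></div>'
PAGE_V2 = '<div class="cart"><div class="checkout"><button id="buy">Buy</button></div></div>'

def brittle_locator(page: str) -> bool:
    # Couples the test to the exact wrapper structure of the page.
    return page.startswith('<div class="checkout">')

def robust_locator(page: str) -> bool:
    # Targets a stable identifier instead of layout details.
    return 'id="buy"' in page
```

The brittle locator passes on v1 but fails on v2 even though the feature still works, which is exactly the maintenance treadmill that drives testers back to manual checks.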

An isolated tester rarely has the access or insight that could help with testing deeper within the software. Unless testers have coding skills or work in paired testing teams, they get no unit testing experience, yet this is where much of the effective verification happens. Unit tests isolate the small elements of a software system and verify that each of those parts functions correctly. To test effectively at this level, it’s necessary to have a solid grasp of the code itself; typically, it’s the developers who do this.
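A small sketch of what that isolation looks like in practice. Everything here is illustrative: the order-total logic is invented, and the tax service it would normally call is replaced by a stubbed function, so the unit under test runs with no external dependencies at all:

```python
# Unit test isolating one element: the total calculation is verified with a
# stubbed tax lookup, so no real (hypothetical) tax service is involved.

def order_total(subtotal: float, tax_lookup) -> float:
    """Compute an order total using an injected tax-rate function."""
    return round(subtotal * (1 + tax_lookup()), 2)

def test_order_total_with_stubbed_tax():
    stub_rate = lambda: 0.08          # stand-in for the real tax service
    assert order_total(50.0, stub_rate) == 54.0
```

Injecting the dependency is what makes the unit testable in isolation; it is also the kind of design insight a tester only gains by working close to the code.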

Maintaining responsibility

Here’s another dynamic of testing culture: when testing responsibility is diverted away from developers, they may lose the incentive to take full responsibility for ensuring that the software actually works. This can erode trust, especially when there is a rush to implement new features. The queue of features awaiting verification lengthens, and days or weeks may pass before the testers get an opportunity to examine the code and give critical feedback. This pattern of separation and deferment tends to breed additional ranks of testers who verify and re-verify until everything seems OK, and the entire fragile arrangement can grind to a dead stop.

Minimizing handoff

Reducing the extent of handoff is essential to testing efficacy. A single team must take responsibility for testing, supporting, and delivering systems. Members of the testing team can work closely with developers, and a team member can assume the role of both tester and developer to cross-verify work products and minimize bias. Instead of developers performing some unit tests and then handing off the remaining testing work to a separate test team, the entire team can collaborate and decide together what type of testing is most suitable to a specific product development scenario: unit testing, exploratory testing, or automated end-to-end testing.

It’s best for a testing team to narrow its focus to the release of a small set of related features at a time, with everyone on the team working together to enable seamless development, testing, and release of a single feature set. Instead of merely accepting the inefficiency and queuing of throwing work over the wall, a single team can easily maintain focus on a considerably smaller feature set that releases within a two-week window. Narrowing the focus this way also raises productivity for reasons that go well beyond internal testing. Delivering smaller sets of features to customers produces much quicker customer feedback, and testing and staging environments are easily configured and reused for hosting a smaller set of changes. This tightens external customer reviews so they are shorter in duration, higher in clarity, and very specific.

Coming to see the value of automated testing

A team that takes on the responsibility for developing, testing, and supporting a system might take a while to realize that comprehensive manual testing becomes woefully insufficient as the software grows in complexity. Ideally, management should take the initiative and ensure that the team learns about the benefits of automation. Without any guidance, automated testing may take years to happen organically, and once the team does realize the need for it, reaching proficiency may take years more. This is easily avoidable by finding the right expertise and the right toolsets to clearly demonstrate the value proposition.

The only way to tangibly demonstrate the value of test automation is to configure and implement it. Team members must get their hands dirty to gain confidence in the work for which they will actually be responsible. A highly effective approach is to give the team exposure to the techniques and have them apply those techniques immediately to real software that is challenging to test. An effective, enduring transformation to test automation becomes real only by solidifying the practice of it.

Direct experience leads to adoption

Adoption of test automation is much more probable when team members actually realize and experience the following:

  • Test-driven development improves the speed and quality of software development.
  • Confidence increases through hands-on implementation of automated testing for complex systems.
  • Some automation tools can greatly minimize the burden of test automation.
  • Solid test automation readily supports the addition of new application features and simplifies the testing of existing ones.
  • Automated tests can be leveraged to analyze the root cause of existing issues, automatically fix many of those issues, and minimize their recurrence.
  • Automated tests can easily handle complex testing scenarios that are impracticable to test manually, such as integration with real-time, external systems.
  • Some testing scenarios are not productively automatable. Learning which types of testing should remain manual or exploratory helps clarify the value of automated testing in all the other scenarios.
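The first bullet, test-driven development, is easy to experience directly. Here is a minimal red-green sketch (the version-parsing function and its name are invented for illustration): the test is written first and would fail because nothing implements it yet, and then the simplest implementation makes it pass.

```python
# TDD sketch, red then green. Names are illustrative.

# Step 1 (red): the test is written before any implementation exists,
# pinning down the expected behavior up front.
def test_parse_version():
    assert parse_version("2.14.1") == (2, 14, 1)

# Step 2 (green): the simplest implementation that satisfies the test.
def parse_version(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))
```

Running the test before step 2 fails with a NameError; after step 2 it passes, and that short feedback loop is what teams report as the speed and quality gain.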

Impose few mandates

If a development team manager imposes mandates, the likely result is the generation of additional tests, but there is no assurance that those additional tests will be of any use. Tests that do not add value waste effort and increase the number of assets to maintain. Over time, useless tests cause confusion, since tests eventually become rather inflexible assessments of how the software or system should function.

It is best to avoid these types of mandates:

  • A specific level of code coverage, the measure of how much code is exercised by test cases. Coverage of 80% means that 20% of the code never executes when the test suite runs, and coverage mandates lead to rushed, wasteful tests. While code coverage is a good way to find untested areas, it doesn’t assure any particular level of quality: test-case efficacy isn’t measurable by enforcing coverage to a particular extent, and there will always be functional areas that are not worth testing. So, don’t mandate complete coverage.
  • Mandating that tests be written before the code, which is test-driven development. TDD is an excellent practice, but it should not be mandated universally; some software features are too challenging for it or simply don’t benefit from it. It’s much better to let team members use good judgment and apply TDD where it’s sensible to do so.
  • Mandating a minimum number of tests, which does nothing for testing efficacy. Keep in mind that it is often quite beneficial to eliminate wasteful tests.
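The coverage point deserves a concrete illustration. In this invented example, a single test touches every branch of the function, so a coverage tool would report 100% line coverage, yet an obvious leap-year bug survives because no assertion ever exercises it:

```python
# Why a coverage mandate misleads: the test below executes every line of
# days_in_month, yet the leap-year bug is never detected.

def days_in_month(month: int, year: int) -> int:
    if month == 2:
        return 28          # bug: ignores leap years
    if month in (4, 6, 9, 11):
        return 30
    return 31

def test_days_in_month():
    # Touches all three branches, so a coverage tool reports every line as run...
    assert days_in_month(2, 2021) == 28
    assert days_in_month(4, 2021) == 30
    assert days_in_month(1, 2021) == 31
    # ...but days_in_month(2, 2020) should be 29, and nothing asserts it.
```

Full coverage here measures only that the lines ran, not that the right things were checked, which is why a coverage percentage makes a poor mandate.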

Such mandates, imposed on testers who are typically already overburdened, will negatively affect the maintainability of the system and produce a number of useless tests. Whenever possible, share the responsibility for deciding how and what to test, clearly explain the value of specific test cases, and solicit feedback.

Be strategic, not tactical

While automation tools have existed for many years, many teams have struggled to implement a truly successful, comprehensive test automation practice. Success requires deliberate, careful planning with full support from company management, and adequate, enthusiastic resources are a must. It is important to view the automation effort as a critical line of investment, with clear priorities and solid process definition. Measure progress throughout the initiative with tangible metrics that demonstrate goals are being achieved. If you persist, and properly cultivate and nurture the effort, your automation infrastructure will likely mature and expand into a system that is scalable, robust, and maintainable.