How do you know if your testing team is doing a good job? These five guidelines are a good start.
It is a commonly held belief in the software industry that programmers must hate software quality assurance, for without QA there wouldn’t be as many bugs to fix. The reality, of course, is that most developers (at least the ones with a reasonable amount of emotional intelligence and self-preservation) see QA as an essential part of the development process. It is better to catch a bug early and fix it at leisure, rather than having it fall upon one from a great height at 3:30 in the morning on a Saturday. Ultimately, most developers appreciate working with a good testing team.
But what does it mean to have a good testing team? The definition has certainly changed during the 40 years (sigh) that I’ve been coding for food. In the most simplistic terms, QA needs to run sets of tests before a release to assure the application runs correctly. But that’s equivalent to saying that the job of a major league pitcher is to throw the ball to the catcher. The difference between working with a testing team and working with a really good testing team can make or break your product. What follows are a few of the traits that I’ve found in the best teams I’ve worked with.
At the moment, I’m working on aviation-related applications. In previous jobs, I’ve worked on financial applications, retail applications (sales tax, shudder), and other industrial domains. Each one has its own specific rules of the road – and QA testers must understand them.
Really good QA teams are composed of people who understand the problem space beyond the surface level. This allows the testers to apply context to the tests they develop. What is an in-bounds use case, and what would be invalid? For example, in aviation, you never need to deal with an altitude above around 51,000 feet, because no commercial or business jets have a service ceiling above that. Having a QA team that understands this can avoid bug reports down the road about a failure to handle higher altitudes. At least, that is, until you get SpaceX as a customer….
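That kind of domain knowledge shows up directly in the bounds testers choose. A minimal sketch of the idea, with a hypothetical `validate_altitude()` function standing in for real application code (the names and checks here are illustrative, not from any actual product):

```python
# Hypothetical domain-aware validation for an aviation app. The constant and
# function names are illustrative; only the 51,000-foot ceiling comes from
# the domain itself.
SERVICE_CEILING_FT = 51_000  # no commercial or business jet flies higher

def validate_altitude(feet: float) -> bool:
    """Accept only altitudes an in-domain aircraft could actually report."""
    return 0 <= feet <= SERVICE_CEILING_FT

# A tester who knows the domain picks these boundaries deliberately:
assert validate_altitude(35_000)       # typical cruise altitude: in bounds
assert validate_altitude(51_000)       # the exact ceiling: still valid
assert not validate_altitude(60_000)   # above any service ceiling: rejected
assert not validate_altitude(-100)     # negative altitude: rejected
```

A tester without the domain context might file 60,000 feet as a legitimate failure; one with it knows that input is out of scope by design.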
I’ve seen QA teams that blindly treat the acceptance criteria in user stories as the be-all and end-all of their jobs. But acceptance criteria are written by product owners who, while they do have a deep understanding of the domain, typically aren’t that good at identifying edge cases and potential interactions with the rest of the product. That is the job of the test team, and it can only occur fruitfully when testers have a real understanding of how the product is used in real life.
It pretty much goes without saying that testers and developers need a healthy relationship, but a modern test team doesn’t exist as an island. In an enterprise application, the software may communicate with a server back-end, which uses microservices, which are deployed into Kubernetes clusters, and which pull data from a content delivery network (CDN). One of the testers’ major roles is to identify the root cause of a problem, or at least the neighborhood where the crime occurred. But a business application can have so many moving pieces that it would put an expensive Swiss watch to shame.
One pitfall in application development is what I call the “They who smelled it dealt it” syndrome. If a bug is discovered in a mobile client, the bug is lodged against the client, even though the defect might be a failure of a back-end microservice or a network failure. It then ends up being the developers’ job to provide a positive defense that they weren’t the root cause of the issue.
A good testing team has the tools and knowledge in place to help direct the bug to the appropriate group, handing it off to the correct test team to pursue it. The best QA teams reduce finger-pointing between groups and prevent developers from having to waste time. Of course, this is an ideal situation, and there will always be bugs that require developers to dig into the problem before diagnosing where the fault lies. But top QA teams try.
Test teams also often serve as the ambassadors between development and customer support. It’s a real plus when QA knows how to gather enough data from customers to diagnose problems, and to communicate solutions back through support in a way that customers can understand.
The term “full stack developer” gets bandied around a lot these days. There is an increasing understanding that developers don’t live in a vacuum, so it’s a requirement to know enough about DevOps, databases, security, and networking (among other topics) to be dangerous.
Similarly, QA test teams have to be well-armed across a variety of technologies. They need to know enough SQL to be able to populate test data for test cases or debugging. CI/CD tools like Jenkins have to be old friends that they can configure and maintain (albeit perhaps with help). Testers have to be familiar with the company’s development environments and testing tools, such as Postman and Selenium, so that they can develop API and UI tests. They should know how to set up proxies like Charles, and how to read JSON and XML data.
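To make the SQL point concrete, here is a small sketch of the kind of fixture-seeding script a tester might write before a test run. It uses an in-memory SQLite database so it is self-contained; the `flights` schema and its rows are entirely hypothetical:

```python
# Illustrative sketch: seeding test data with SQL, as a tester might before a
# test pass. Uses an in-memory SQLite database; the schema is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (flight_id TEXT, origin TEXT, dest TEXT)")
conn.executemany(
    "INSERT INTO flights VALUES (?, ?, ?)",
    [("UA123", "SFO", "ORD"), ("BA456", "LHR", "JFK")],
)
# A quick sanity query to confirm the fixture data landed:
rows = conn.execute("SELECT COUNT(*) FROM flights").fetchone()[0]
print(rows)  # prints 2
```

In practice the target would be a shared test database rather than SQLite, but the skill is the same: enough SQL to create, populate, and verify the data a test case depends on.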
There can be a fine line between having enough knowledge to do all that, and having enough to be a developer. No one expects test teams to be able to sit down and code an app. The two roles often attract very different personality types. But just as with developers, testers really need to know enough to be dangerous across a variety of topics.
Pre-release regression testing can be a labor-intensive manual process that sucks the entire team into a week or more of drudgery. It’s depressing. You would think that, in these enlightened days, the CI/CD mentality would have done away with the need, but in many organizations that goal remains aspirational.
However, a good test team should continually push towards that goal. Automating tests is a win for everyone. It provides accountability for developers by highlighting regressions as soon as they are committed. It allows the test team to work on developing new and diabolical tests, rather than continually having to execute old ones. And most importantly, it allows the product to always stay close to shipping, the cornerstone tenet of modern Agile development.
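The accountability argument boils down to one idea: pin down today’s verified behavior so that tomorrow’s commit can’t silently change it. A minimal sketch, using a hypothetical `fare_with_tax()` function and made-up golden values (a nod to those sales-tax applications):

```python
# A minimal sketch of a golden-value regression test. fare_with_tax() and the
# expected values are hypothetical; the point is the pattern, not the math.

def fare_with_tax(base_cents: int, tax_rate: float) -> int:
    """Return a fare in cents with tax applied, truncated to whole cents."""
    return int(base_cents * (1 + tax_rate))

# Golden values captured when the behavior was verified by hand. If a later
# commit changes the output, CI fails immediately instead of at release time.
GOLDEN_CASES = [
    ((10_000, 0.0825), 10_825),
    ((0, 0.0825), 0),
    ((19_999, 0.10), 21_998),  # truncation, not rounding
]

for (base, rate), expected in GOLDEN_CASES:
    assert fare_with_tax(base, rate) == expected, (base, rate)
print("regression suite passed")
```

Wired into a CI/CD pipeline, a suite like this runs on every commit, which is precisely what lets the product stay close to shipping.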
To do this, test teams have to be highly proficient with their available testing tools for the current platforms they cover, as well as all that full-stack knowledge so that they can provision environments for running test sets. They should also be self-motivated towards finding new tools and techniques that improve their ability to automate.
All of this is a big request for any one person, of course. In a big enough test team, it’s probably more realistic that the skills are spread among the team, with individual specializations. Some testers may be naturally more introverted, and focus on tools and technologies, while others are better at outreach to other teams and customer support. It’s more important that the team as a whole encompasses the gamut of skills, and that team members are allowed to work to their strengths. The idea of testers as interchangeable cogs is, in my opinion, both unrealistic and unfair.
When test teams embody the virtues listed above, it lets the entire team shine. Regressions don’t languish undiscovered, lines of communication are efficient, developers can focus on development, and releases can be delivered with high confidence to the users.
Ultimately, who is responsible for software quality in the organization? In some cases, enterprises are answering that question by adding another role to the C-suite: the Chief Quality Officer.