A handy software defect tracking checklist

Every QA team records and tracks software defects. But which data points are really worth tracking? These four guidelines might help.


July 17, 2020
Tamas Cser



Soon after we made the offer on our new house, we arranged with the realtor to get back inside with a tape measure. We wanted to plan where to put furniture in order to ease decisions on move-in day; would the bookcases fit along that wall? Naturally, we completely forgot to take several measurements (such as the distance from the kitchen entrance to the dining room door) or recorded them incompletely (was the master walk-in closet 6 feet wide or 6 feet long?). While we did well overall, some of the house's finer details came as a complete surprise, and not all of them were pleasant. 

So it is with software testing. You only find the bugs you look for. When you record and track any project, you can only detect trends in the things you measure. 

Collecting lots of information "just in case" isn't a solution, either. The more irrelevant information you require, the more the database fills up with junk, which makes people less willing to consult it. So it makes sense to identify the information that is truly worthwhile to track – without particular regard to the development tools used for the purpose. 

One reason that the tool choice isn’t a major component in this discussion is that most applications do a good job. The people who create software for defect tracking understand testers’ and programmers’ needs, so the built-in functionality works well. 

For example, bug tracking tools provide the right set of fields for most people’s needs. Few testers need to add unique items to track besides the mundanely obvious (such as date reported, priority, and description), much less customize the database or reporting. Isn’t it nice to discover a set of developer tools with which the users are happy? 

However, there are some tips and techniques worth sharing.

Track how often a bug is reopened

Sometimes, it isn't the nature of a defect that's worth investigating as much as the process by which it occurs.

For example, a senior QA engineer I’ll call Darrell noticed a tendency for defects to regress, so he began to track how often a defect was re-opened. That let him measure the number of defects filed in a given period, and chart the defects that were opened once, twice and more than twice. 
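A reopen count like Darrell's can usually be derived from whatever event history your tracker exports. Here's a minimal Python sketch, assuming a simple list of (defect ID, event) pairs; the data and event names are invented for illustration:

```python
from collections import Counter

# Hypothetical event log, as it might be exported from a defect tracker.
# The event names ("opened", "reopened", "fixed") are illustrative.
events = [
    (1, "opened"), (1, "fixed"),
    (2, "opened"), (2, "fixed"),
    (1, "reopened"), (1, "fixed"),
    (2, "reopened"), (2, "fixed"),
    (3, "opened"), (3, "fixed"),
]

# Count how many times each defect was opened (initial open + reopens).
open_counts = Counter(
    defect_id for defect_id, event in events if event in ("opened", "reopened")
)

# Bucket defects the way Darrell charted them: opened once, twice, or more.
buckets = Counter(
    "once" if n == 1 else "twice" if n == 2 else "more than twice"
    for n in open_counts.values()
)
print(buckets["twice"], buckets["once"])  # 2 1
```

Charting those buckets per reporting period is what surfaces a trend like the one Darrell found.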

Darrell discovered a clear trend for defects that were submitted more than once. Team A would fix defect #1; then Team B would fix defect #2, and regress defect #1. When defect #1 was re-opened, Team A would fix it and regress defect #2. Aha! It's a team problem, not a software problem! 

Because Darrell tracked the regressed defects, the company was able to discover what was going on. That led to a happy ending: Company management encouraged the two teams to work more closely. They also scaled back the project scope and added time to the schedule, so the developers could remove many dependencies and refactor the code. 

Whichever tool you use, put in place a filter mechanism to keep the same defect from being logged repeatedly. One obvious result of repeated defect reporting is that it skews the metrics. Another is that it requires a human to analyze the nature of the bugs – which surely is a waste of effort, when your time is better spent figuring out the areas that need to be redesigned or optimized.
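What such a filter looks like depends on your tracker, but a naive version can compare a new defect summary against existing ones with fuzzy string matching. A rough sketch, where the 0.8 threshold is an arbitrary illustration rather than a recommendation:

```python
import difflib

def looks_like_duplicate(new_summary, existing_summaries, threshold=0.8):
    """Return the existing summary a new one closely matches, else None.

    A deliberately naive similarity check; real trackers use smarter
    matching, and the threshold here is only an example value.
    """
    for summary in existing_summaries:
        ratio = difflib.SequenceMatcher(
            None, new_summary.lower(), summary.lower()
        ).ratio()
        if ratio >= threshold:
            return summary  # likely duplicate already on file
    return None

existing = [
    "Login button unresponsive on Safari",
    "Report export truncates long descriptions",
]
match = looks_like_duplicate("login button unresponsive on safari", existing)
print(match)  # Login button unresponsive on Safari
```

Even a crude check like this, run when a defect is submitted, can prompt the reporter to look at the likely duplicate before filing a new record.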

Track the solution

When a defect is fixed, retested, and marked as “passed,” some companies capture a screenshot of the “successful” moment, and include that image as an attachment. Doing so provides supporting evidence that the fix actually worked.

Other “it’s all better now!” items to track:

  • The length of time it took the developers to fix an issue
  • How long it takes testers to retest issues
  • Defect reproducibility ratios: that is, how many times a tester can reproduce the defect before submitting it

In each case, tracking the journey to the resolution gives insight into the expected turnaround time. If a defect is discovered later in the development lifecycle, there’s some history to suggest how long it’ll take the team to resolve the issue.
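As a sketch of what that history might yield, here's how average days-to-fix per severity could be computed from exported reported/fixed dates. The records below are invented, and the field layout is only an assumption about what a tracker might export:

```python
from datetime import date
from statistics import mean

# Hypothetical exported records: (severity, date reported, date fixed).
defects = [
    ("high",   date(2020, 6, 1), date(2020, 6, 3)),
    ("high",   date(2020, 6, 5), date(2020, 6, 9)),
    ("medium", date(2020, 6, 2), date(2020, 6, 12)),
]

# Group days-to-fix by severity, then average each group. The result is
# a rough predictor of turnaround time for defects found later on.
days_by_severity = {}
for severity, reported, fixed in defects:
    days_by_severity.setdefault(severity, []).append((fixed - reported).days)

averages = {sev: mean(days) for sev, days in days_by_severity.items()}
print(averages)  # {'high': 3, 'medium': 10}
```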

Choose the right values

Most developers and testers are well served by a tool’s predefined ranges for defect severity (a 1-5 or 1-10 scale). In some situations, however, it makes sense to define the values appropriate to the team and the project.

For instance, a process specialist named Daniel had trouble keeping the Status field consistent in his team's defect tracking system. While "submitted," "ready for testing," and "closed/fixed" were clear, other values were subject to debate. What one person considers "open" or "assigned" may not agree with another's view. Divergent status values make reporting more difficult, or at least obscure management's understanding of what's going on. 

As a result, it’s a good idea for the team to deliberately agree on the predefined values it uses and what they mean to participants. That's especially important for fields that can spark interdepartmental debate, such as "resolution" or "issue type" (because egos get involved when you call something a bug). 

Nor should one person call the shots. QA should set up a framework, and the project management team should customize the schema to suit each project's needs. Everyone should have a voice: business analysts, developers, testers. 

Two common fields can create problems. Most defect tracking tools have a Comment field, which is often used for information that should go in dedicated fields. Unfortunately, stuffing that data into an open-ended text box makes it impossible to track. 

Another problem you may encounter is the "Functional Area" field attached to a defect report. Many teams try to use it as a closed list, but once a project is underway, the granularity is lost. For example, early in an application’s testing process, you might identify a defect’s functional area as "database," only to discover that most of the problems should be listed under "SQL Server." Other projects start with an open list, permitting the user to select an existing value or create a new one. That works well enough for fixing defects (maybe even better, thanks to the granularity), but it's almost useless for metrics and reporting. 

But don't get too granular. For example, don't scale bug severity or priority on a 1-10 scale. Low/Medium/High is enough, without requiring someone to decide if this particular problem is a 7 or an 8 (because how can you know?). 

If your defect tracking tool permits it, consider adding role-based fields to reflect the varying information each development role requires. In its simplest form, a developer needs to know what to fix, a tester is concerned with the defects already reported, and the project manager wants to know the number of current critical defects. Consider whether it’d help to add fields that capture information for each type of project member, even if that "type" is a single person. One tester told me about the fields he added to his defect-tracking application, including defect due date, status due date, planned-for version, and tested-in version. Those fields let project participants zero in on the information that helps them do their jobs. 
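To illustrate the idea, here's a sketch that filters the same hypothetical defect records differently for a developer and a project manager. All field names and values are invented, not drawn from any particular tool:

```python
# Hypothetical defect records with a few role-oriented fields.
defects = [
    {"id": 101, "severity": "high", "status": "open",
     "assignee": "dev-a", "planned_for_version": "2.1"},
    {"id": 102, "severity": "low", "status": "ready for testing",
     "assignee": "dev-b", "planned_for_version": "2.1"},
    {"id": 103, "severity": "high", "status": "open",
     "assignee": "dev-a", "planned_for_version": "2.2"},
]

def developer_view(records, assignee):
    """What one developer needs: the IDs of their open defects."""
    return [d["id"] for d in records
            if d["assignee"] == assignee and d["status"] == "open"]

def manager_view(records):
    """What the project manager needs: the count of open critical defects."""
    return sum(1 for d in records
               if d["severity"] == "high" and d["status"] == "open")

print(developer_view(defects, "dev-a"))  # [101, 103]
print(manager_view(defects))             # 2
```

The point isn't the code; it's that the same records can answer different questions once the fields each role cares about actually exist.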

For the most part, any defect tracking system you choose can do what you need. But as with all utility software, these applications are only as good as the process you create for using the system. If you let the tool dictate your process, the software development process becomes messy and confusing. Make sure that your tool reflects your teams’ needs, and that it helps you answer the questions you (and your managers) tend to ask.
