16 things that testers wished they’d learned earlier

Experience is the best teacher, they say. Sometimes, you get a crash course.

“If I knew then what I know now…” is a common refrain in any profession. But as we gain experience, we learn which things really matter.

That’s certainly true for anyone who has tested software, or spent a week trying to figure out the source of a problem! Sometimes it helps to share our personal stories, and perhaps to speed up others’ learning curves. Because nobody wants a reason to buy a t-shirt that says, “Oh no! Not another learning experience!”

For me, personally: I had to learn to prioritize defects, and to recognize that not every bug requires urgent attention. That lesson came to me when I started working in QA at Lotus Development in the 1980s, back when Lotus 1-2-3 was the “killer app.” During my informal orientation, I was told about the bug-reporting process that came through the company’s phone tech support.

One example was a known bug in the spreadsheet’s @IF formulas. It absolutely crashed the PC if you had 25 nested items in the IF statement… something like that, anyway. But, if I recall correctly, you could have only 128 characters in a Lotus 1-2-3 cell, which means that only a teensy number of people could possibly encounter that bug. It was easy to work around the problem, because an @IF statement could almost always be broken up into separate formulas. The takeaway for Lotus developers: It wasn’t worth the company’s time to fix that problem when it could work on new features instead.

As an aside: Lotus paid a lot of attention to customer care – though not in a formal manner. After each phone call, the support person categorized the problem and its resolution. There was a whole class of support calls marked as DDT, for “Don’t Do That.” In other words, the user said, “When I do this, it does that.” And the support person would respond, “Don’t do that.” Another call-resolution code was UBD. It stood for “user brain dead.”

Anyhow, my “Aha!” isn’t necessarily representative. So I asked QA professionals: What did it take you entirely too long to learn about software testing?

Herewith, the top answers from around the web. Each “lesson” reflects an individual’s own experience and obviously may not match your own. However, several of these are sure to make you nod in agreement.

  • Every requirement document is (a) wrong and (b) incomplete.
  • Many (but not all) kinds of laxness are acceptable in tests that would be out of the question in deployed code. It’s better to write more tests and get higher coverage than to spend time following the strict coding rules that are appropriate for shipped code.
  • How to shape the code and interfaces such that the number of untested paths is small.
  • Never assume why something behaves the way it does.
  • Always ask questions as soon as they come up. The earlier in the dev process, the better. (Stated elsewhere as “Test early, test often.”)
  • If the product functions exactly as described in the user manual, then you may have completed your engineering cycle. …Users only care if the product does what it is supposed to do, is reliable, lasts a good while, and can be serviced later.
  • For most data analysis (‘data science’) code, you’ll write a lot of one-off data wrangling functions – it’s the outcome rather than the code that you will reuse. Here, putting assertions (of pre- and postconditions) directly in the functions is generally better than a separate test suite. Faster to write, more visible, doesn’t require fixtures, and you don’t have to spend time dreaming up every way the input data could be malformed.
  • Test suites make it easier to confidently change code. Test fixtures (data and objects that exemplify common or important inputs or resources) make it a lot faster and nicer to write tests. Time spent on creating them, and making them easy to reuse, repays itself.
  • Do unit testing. Only test one small module at a time. Once that passes, do a unit test of the next small module that relates to the first module. By process of elimination a very large set of programs can be tested thoroughly this way.
  • Testing is never complete. There is always something that was not tested.
  • Tests are code and should be treated with the same care as code, rather than “just tests.”
  • Leave your emotions out of the process. Focus on the ultimate end goal of high quality software. Frame conversations in that way. “It’s not dev VS QA. It’s not dev VS Product. It’s a goal for us to release the highest quality software. We are a team. It’s not a battle against each other.”
  • Testing CLIs is super easy if you separate your core application from a class that handles arguments and invokes your core application.
  • If you find a bug in the testing environment, check production to see if the bug occurs there too, before you report the bug.
  • We don’t actually need to or want to test and fix everything. Time is an expensive resource. It’s all about risk vs impact (and cost) to the user and business. Once you understand that, you make better decisions.
  • Don’t look for bugs that won’t get fixed. “The webpage throws an unfriendly looking error when somebody signs up for the newsletter with emojis in the form? Yeah, nobody cares. The PM is sending that to the bottom of the backlog and it’s never getting out,” the tester explained. Sure, there is value in reporting the bug when you encounter it – at least it’s on-record, then. But, the tester adds, “The point is to not go looking for this type of bug because there are always more important things to do; for you, the devs, the project manager, and the users.”
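The inline-assertion style recommended for one-off data wrangling code can be sketched roughly as follows. This is a minimal illustration of the idea, not code from any of the respondents; the function and field names are invented for the example.

```python
# Sketch: pre- and postconditions written as assertions directly inside
# a one-off wrangling function, instead of a separate test suite.

def normalize_scores(rows):
    """Scale each record's 'score' field to the 0-1 range."""
    # Precondition: every record must carry a numeric 'score' field.
    assert all("score" in r for r in rows), "every row needs a 'score' key"

    max_score = max(r["score"] for r in rows)
    assert max_score > 0, "scores must include at least one positive value"

    result = [{**r, "score": r["score"] / max_score} for r in rows]

    # Postcondition: every normalized score now lies in [0, 1].
    assert all(0 <= r["score"] <= 1 for r in result)
    return result

print(normalize_scores([{"score": 2}, {"score": 4}]))
```

The assertions live where the assumptions do, so a malformed input fails loudly at the exact line whose expectation it violated – no fixtures, no separate test file.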
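The point about test fixtures repaying the time spent on them can be sketched with the standard library’s unittest framework (the framework choice is mine; the answer above doesn’t name one, and the order data here is invented for illustration):

```python
# Sketch: a reusable fixture built in setUp(), so each test stays short
# and works from the same representative input.
import unittest

def order_total(order):
    """Illustrative function under test."""
    return sum(item["qty"] * item["price"] for item in order["items"])

class OrderTests(unittest.TestCase):
    def setUp(self):
        # The fixture: one representative order, rebuilt for every test,
        # so tests can mutate it freely without affecting each other.
        self.order = {
            "id": 42,
            "items": [{"sku": "A1", "qty": 2, "price": 3.50},
                      {"sku": "B2", "qty": 1, "price": 1.25}],
        }

    def test_total(self):
        self.assertEqual(order_total(self.order), 8.25)

    def test_empty_items(self):
        self.order["items"] = []
        self.assertEqual(order_total(self.order), 0)
```

Run with `python -m unittest`. Because the fixture exemplifies a “normal” input, each new test only has to describe how its case differs from normal.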
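The CLI advice – keep the core application separate from the class or function that handles arguments – can be sketched like this (a minimal illustration; the word-count example and all names are assumptions, not from the original answer):

```python
# Sketch: a thin argument-handling layer that delegates to a pure,
# separately testable core function.
import argparse

def word_count(text: str) -> int:
    """Core application logic: pure, trivially unit-testable."""
    return len(text.split())

def main(argv=None) -> int:
    """CLI layer: parses arguments, invokes the core, prints the result."""
    parser = argparse.ArgumentParser(description="Count words in TEXT")
    parser.add_argument("text")
    args = parser.parse_args(argv)
    print(word_count(args.text))
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```

Tests can call `word_count()` – or even `main(["some input"])` – directly in-process, with no subprocess, no shell quoting, and no output scraping of the real binary.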

For more depth, you might enjoy a related recommended video: Sandi Metz discussing The Magic Tricks of Testing, such as “understand the difference between testing commands and testing queries.”
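That command/query distinction can be sketched roughly as: a query returns a value and has no side effects, so you assert on its return value; a command changes something, so you assert that the change happened (often via a mock of the collaborator). The sketch below is my own minimal illustration of that idea, not code from the talk:

```python
# Sketch: testing a query vs. testing a command.
from unittest.mock import Mock

class Gear:
    def __init__(self, chainring, cog, wheel):
        self.chainring, self.cog, self.wheel = chainring, cog, wheel

    def ratio(self):
        # Query: returns a value, changes nothing.
        return self.chainring / self.cog

    def adjust(self):
        # Command: tells a collaborator to change state.
        self.wheel.rotate()

# Test the query by asserting on its return value:
gear = Gear(52, 13, wheel=Mock())
assert gear.ratio() == 4.0

# Test the command by asserting the outgoing message was sent:
gear.adjust()
gear.wheel.rotate.assert_called_once()
```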

Naturally, there’s always more we can learn. For instance: The Top 10 Reasons Selenium Tests Fail.

What would you add? Tweet us @functionize to share the things it took you too long to learn.
