*Ad hoc (adjective): made or happening only for a particular purpose or need, not planned before it happens.* (Cambridge Dictionary online)
## What is ad hoc testing?

Generally, you should carefully plan and execute all your tests, and for good reason. Tests need to be thought through to ensure they touch all the moving parts of the code. They need to be executed in the correct sequence, with the correct starting conditions. Every step must be documented and the results recorded. However, formal testing like this often misses important bugs, because it generally covers only the expected user journeys through your application. Users are all too good at doing unexpected things, which is why they are so good at finding bugs in your production code. So, what do you do when you can't rigorously plan your tests? This is where ad hoc testing comes in. You use ad hoc testing to try to find ways to break your application. It is almost the exact opposite of the strictly planned and executed testing you are used to. Some people use the term "monkey testing", since ad hoc testing looks like monkeys randomly pressing buttons. But that is an unfair characterization. While ad hoc testing is unplanned, it is focused. A skilled ad hoc tester knows ways to trip systems up and will try all those tricks to cause the system to fail.
## Why is ad hoc testing an important skill?

It might seem like ad hoc testing is just a waste of time. Why would you ask a tester to go out of their way to break your system? And if you receive a bug report, why do you need to do anything more than verify it is correct? Once upon a time, there was a QA tester working on a taxi-booking app in London. Tony was a skilled tester with years of experience. The taxi app was passing all its QA tests with flying colors, but taxi drivers testing the beta version were triggering an occasional bug that caused the app to crash. The app was designed to recover elegantly from crashes, so the drivers were oblivious. But clearly there was an issue. Tony decided to try some ad hoc testing to see if he could replicate the bug.
## How to do ad hoc testing

In the real-life story above, Tony began by trying to replicate the conditions under which a taxi driver would use the app. He had an instinct that the problem was related to the GPS signal (a frequent problem in London's urban canyons). Added to that, the taxis were constantly moving, which meant they were switching cellular towers frequently and sometimes losing data completely. To replicate these conditions, Tony started testing the app during his bus commute to and from work. After a few journeys, he was able to trigger the bug. But as this was ad hoc testing, that was only of limited use: Tony now knew how to trigger the bug, but he still didn't know the root cause. So he turned to the developers for help, asking them to instrument the GPS and log its location constantly. Tony repeated the testing with this special build of the app. This time, when the bug recurred, he was able to see the GPS track. At many points the phone lost its GPS position. Usually this was fine, but occasionally it would pick an arbitrary position in the middle of the river Thames. This nonsensical position caused the application to panic, crash, and restart. With this data, the developers were able to fix the bug.
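The actual fix the developers shipped isn't documented here, but the kind of instrumented sanity check that exposes bad fixes can be sketched in a few lines. Everything below (the haversine helper, the `GpsLogger` class, the 70 m/s speed threshold) is a hypothetical illustration, not the taxi app's real code: a fix that implies an impossible speed, like a sudden jump into the middle of the Thames, is logged and rejected rather than allowed to crash the app.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class GpsLogger:
    """Log every GPS fix; flag fixes that imply an impossible speed."""

    def __init__(self, max_speed_mps=70.0):  # ~250 km/h, generous for a taxi
        self.max_speed_mps = max_speed_mps
        self.fixes = []     # accepted fixes as (t, lat, lon)
        self.rejected = []  # fixes flagged as implausible

    def record(self, t, lat, lon):
        """Accept a fix, or log and reject it if the implied speed is absurd."""
        if self.fixes:
            t0, lat0, lon0 = self.fixes[-1]
            dt = max(t - t0, 1e-6)
            speed = haversine_m(lat0, lon0, lat, lon) / dt
            if speed > self.max_speed_mps:
                self.rejected.append((t, lat, lon))  # keep the evidence, don't crash
                return False
        self.fixes.append((t, lat, lon))
        return True

log = GpsLogger()
log.record(0, 51.5405, -0.1432)  # plausible fix in Camden
log.record(1, 51.4850, -0.1200)  # several km away one second later: rejected
```

The point of keeping `rejected` rather than silently dropping bad fixes is exactly Tony's lesson: the log of implausible positions is what lets developers see the pattern behind an intermittent crash.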
## Mistakes to avoid

You need to plan your unplanned testing. It really isn't about randomly pushing buttons to see what happens. Rather, an ad hoc tester tries to think of all the unlikely things a user might do, or uses their knowledge of the app's architecture to trigger exceptions and unknown states.
## Key guidelines for ad hoc testing

Here are some key guidelines to follow to avoid these mistakes.
- Put yourself in your user’s shoes. One of the key skills in ad hoc testing is understanding how a user might accidentally (or maliciously) misuse an application. This comes with experience. Knowing how existing bugs were triggered is vital. It also pays to understand UX/UI principles.
- Follow your instincts. Many of the best ad hoc testers develop a sixth sense for the sorts of things that may trigger a failure. You may notice that the application "stutters" when you perform a certain sequence of actions, responding a bit slower or briefly becoming unresponsive.
- Try to think of external factors. Consider all the things outside the application that may influence how it behaves: for instance, what can go wrong when the connection switches between WiFi and cellular data, or what happens when the backend fails.
- Use logs and instrumentation where possible. As the example above shows, sometimes you need to look at the actual data in an app to identify a problem. Other times, logs will allow you to spot unexpected responses. And instrumenting the UI may reveal things like potential race conditions.
- Be rigorous. Follow every test through to the end, and try as many combinations of actions as you can to make sure you trigger any lurking bugs.
- Keep track of what you did. Closely related to the above point is the need to record and track exactly what you did. This will be essential to identify the precise steps to reproduce any bug you trigger.
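The last two guidelines can be combined in a small helper: a recorder that timestamps each exploratory action so that, when a bug finally triggers, the reproduction steps are already written down. This is only a sketch of the idea; the `SessionLog` class and its methods are hypothetical, not any particular tool's API.

```python
import time

class SessionLog:
    """Record every exploratory action with a timestamp, so a triggered
    bug comes with its reproduction recipe already written down."""

    def __init__(self):
        self.steps = []  # (timestamp, description) pairs

    def act(self, description):
        """Note an action the tester just performed."""
        self.steps.append((time.time(), description))

    def repro_steps(self):
        """Render the session as a numbered reproduction recipe."""
        return "\n".join(f"{i}. {desc}" for i, (_, desc) in enumerate(self.steps, 1))

log = SessionLog()
log.act("open cart with 3 items")
log.act("toggle airplane mode")
log.act("tap checkout twice quickly")
print(log.repro_steps())
```

In practice you would hook something like this into whatever drives the app (or simply keep it open in a REPL beside you), but even this minimal version turns "I think I tapped checkout while offline?" into an exact, ordered list of steps.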
## Challenges for ad hoc testing

Normally, you need skilled manual testers for ad hoc testing. Ad hoc testing finds bugs that no other approach can, but manual ad hoc testing is time-consuming and slow. You need to be thorough, which means testing as many combinations of actions as possible: for instance, adding and deleting items from a cart multiple times and in different orders. This can rapidly become quite a mechanical process. When your ad hoc testing starts to become repetitive, you might be better off creating automated tests. Systems like Functionize Architect make this really easy to do.

Another important challenge is recreating real-world conditions. As Tony's story above shows, the real world can pose challenges that don't exist in normal test environments. There are several key differences:
- The backend is usually different. Even when the setup is identical, your production backend has dirty state.
- The operating environment is different. This is especially true for mobile apps, but it is increasingly true for all apps with the rise of responsive UIs.
- Your test devices are probably very different from real user devices. For example, you have probably maintained them, applied security updates, and rebooted them frequently.
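The repetitive cart sweep mentioned above is exactly the kind of thing worth automating. As a minimal sketch (the `run_cart_sequence` function and its `add:`/`del:` action format are hypothetical stand-ins for driving the real UI), you can enumerate every ordering of a small action set and flag the orderings that violate an invariant:

```python
from itertools import permutations

def run_cart_sequence(actions):
    """Apply a sequence of 'add:<item>' / 'del:<item>' actions to a cart.
    A deliberately naive model: deleting an absent item is a silent no-op."""
    cart = []
    for action in actions:
        op, item = action.split(":")
        if op == "add":
            cart.append(item)
        elif op == "del" and item in cart:
            cart.remove(item)
    return cart

# Sweep every ordering of a small action set -- the mechanical part of
# ad hoc testing that becomes worth automating once it gets repetitive.
actions = ["add:a", "add:b", "del:a", "del:b"]
failures = []
for order in permutations(actions):
    cart = run_cart_sequence(order)
    if cart:  # expectation: every item added is also deleted, so cart ends empty
        failures.append((order, cart))
```

Even this toy sweep makes the point: orderings where a delete lands before its matching add leave items stranded in the cart, and an exhaustive sweep finds every such ordering mechanically, which a human tester clicking through the UI would tire of long before covering all 24 permutations.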