The mobile testing gotchas you need to know about

Testing applications on mobile devices has its own set of perils. For how many of these are you prepared?


May 22, 2020


So, you say, you’ve been shipping your web or desktop application for years, and you’re finally going to roll out your first mobile version? No problem. You’ve already got a robust testing strategy in place, so you can just carry on as usual, adding “mobile platforms” to the list of environments you check. Right? 

Not so fast! While a lot of your testing methodology can map well onto mobile applications, a few challenges are unique to mobile development.

It’s the bandwidth, stupid!

Once, when cyanobacteria ruled the Earth, people worried about how much bandwidth their web pages consumed. Every space was sucked out of JavaScript files, and images were compressed into sometimes pixelated nightmares. In modern times, broadband connectivity is a given for desktop applications and websites, and mobile devices are increasingly entering the age of 4G and 5G. But for several reasons, QA testers still need to keep an eye on mobile application bandwidth usage.

First, it’s not at all hard to find pockets of truly awful connectivity even near densely populated areas. Once you get out into the wilderness between metropolitan islands, you can find yourself with a single flickering bar of service. 

Developers need to design their applications to fail gracefully or to adapt to poor connectivity, and tests need to exercise those capabilities thoroughly. How does the application react if you drop out of cell service in the middle of a crucial download? If it detects poor speeds, does the software back down to downloading smaller versions of assets? Some devices, such as iPhones and iPads, have the capability to simulate poor connectivity, but you might want to invest in a cell simulator that can let you create awful conditions on the fly. 
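To make that concrete, here is a minimal Swift sketch (not from any particular app; the full-size and fallback asset URLs are hypothetical stand-ins) of one way a client might fail gracefully when the connection dies mid-download, by falling back to a smaller version of the asset:

```swift
import Foundation

// A sketch of failing gracefully when connectivity drops mid-download.
// The full-size and fallback URLs are hypothetical stand-ins.
func fetchAsset(full: URL, fallback: URL) async -> Data? {
    do {
        let (data, _) = try await URLSession.shared.data(from: full)
        return data
    } catch let error as URLError
        where error.code == .notConnectedToInternet
           || error.code == .networkConnectionLost
           || error.code == .timedOut {
        // The connection died or stalled: try the lighter asset instead of failing hard.
        if let (data, _) = try? await URLSession.shared.data(from: fallback) {
            return data
        }
        return nil
    } catch {
        return nil
    }
}
```

However the developers approach it, the tests should force both branches: pull the plug mid-transfer and verify that the fallback actually kicks in.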

Second, even with bandwidth availability soaring, the cost of shipping around those bytes can still be expensive. During testing, you should monitor how much and how often data is downloaded. The software I write is used by airline pilots, whose Wi-Fi is charged pretty much by the byte while in the air. Developers can get sloppy about keeping downloads lean, so often the testing staff needs to keep them honest. A tool such as Charles or Fiddler can let you do man-in-the-middle monitoring of how much bandwidth your application consumes.
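If you also want numbers from inside the app itself, URLSession will hand them to you. Here is a rough Swift sketch, with purely illustrative names, that logs bytes up and down per request via URLSessionTaskMetrics, as a complement to a proxy like Charles or Fiddler:

```swift
import Foundation

// A rough sketch of in-app bandwidth accounting with URLSessionTaskMetrics;
// the class and variable names are illustrative, not from the article.
final class BandwidthLogger: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didFinishCollecting metrics: URLSessionTaskMetrics) {
        let down = metrics.transactionMetrics.reduce(Int64(0)) {
            $0 + $1.countOfResponseBodyBytesReceived
        }
        let up = metrics.transactionMetrics.reduce(Int64(0)) {
            $0 + $1.countOfRequestBodyBytesSent
        }
        print("\(task.originalRequest?.url?.absoluteString ?? "?"): \(down) bytes down, \(up) bytes up")
    }
}

// Install the delegate on whatever session the app uses for its downloads.
let session = URLSession(configuration: .default,
                         delegate: BandwidthLogger(),
                         delegateQueue: nil)
```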

Become a real estate tycoon

Along with bandwidth, screen real estate has become something of a plentiful resource on the web. Everyone seems to have 32” 4K monitors these days, and often the problem is finding stuff to put in all that space, rather than fitting everything in.

Mobile applications are like returning to the 640x480 VGA days. When you test software on mobile devices, make sure that the application behaves well under the most restrictive of screen layouts. That goes for “responsive” web applications as well. Does it look good in landscape? Suppose you use the accessibility functionality of the operating system to crank the font sizes way up? Now turn it to portrait. Now (if you’re on an iPad, at least) do a multitasking split screen. Does it suddenly display only a few letters? 

Unlike a desktop application, a mobile application may have its orientation, percentage of the screen, or font sizing changed on the fly in the middle of running. You need to cover all of those combinations in testing, or work with the product owners and dev team to define the acceptable ones and make sure the rest are locked out.
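In UIKit terms, those on-the-fly changes arrive through a couple of well-known callbacks. Here is a minimal sketch; relayout(for:) is a hypothetical method standing in for real layout code:

```swift
import UIKit

// A minimal UIKit sketch of handling the on-the-fly changes described above.
// relayout(for:) is a hypothetical method standing in for real layout code.
class AdaptiveViewController: UIViewController {
    // Rotation and multitasking splits come through here with the new size.
    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        relayout(for: size)
    }

    // Accessibility font-size changes show up as a new preferredContentSizeCategory.
    override func traitCollectionDidChange(_ previous: UITraitCollection?) {
        super.traitCollectionDidChange(previous)
        if previous?.preferredContentSizeCategory != traitCollection.preferredContentSizeCategory {
            relayout(for: view.bounds.size)
        }
    }

    private func relayout(for size: CGSize) {
        // Placeholder: switch between wide and narrow layouts, reflow text, and so on.
    }
}
```

Every one of those callbacks is a place where layout can quietly fall apart, which is exactly why testers need to exercise each transition, not just the initial launch state.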

Version madness

On the web, you can be relatively sure that you’re dealing with a semi-recent web browser, although I’m sure there are still plenty of Windows XP systems running IE version 6. In any case, you can draw a line in the sand, declare, “We support Chrome versions X or greater,” and so on, and restrict your testing to a reasonable set of platforms.

If you’re dealing with a native mobile application, you can find yourself in the wild west. It’s not so bad on iOS, where current OS support is available for devices several years old, but in the Android world, the majority of currently active devices are running versions four or five years old. 

This presents a huge challenge for testing. In my group, we’re lucky enough to deliver only on iPads, and we set a policy of supporting only the currently shipping version of iOS and one major release back. But if you are trying to be more inclusive, or are stuck supporting the much more heterogeneous Android ecosystem, you have to do a lot of testing across multiple devices and OS versions. 

You can’t even get away with testing on a lowest-common-denominator release. Your dev team is probably taking advantage of new OS features conditionally, detecting which OS version the device is running and using more modern APIs when they’re available. As a result, you have to test against pretty much every version of the OS you need to support. It’s no wonder that many mobile testing labs bear a striking resemblance to a mobile phone store.
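As a purely illustrative example of what that conditional use looks like, and why each branch is one more combination QA has to cover, consider something as small as a date picker style:

```swift
import UIKit

// Illustrative only: the app adopts a newer API where the OS has it and falls
// back on older releases. Every branch like this is one more combination to test.
func applyDatePickerStyle(_ picker: UIDatePicker) {
    if #available(iOS 14.0, *) {
        picker.preferredDatePickerStyle = .inline   // richer style added in iOS 14
    } else if #available(iOS 13.4, *) {
        picker.preferredDatePickerStyle = .wheels   // the property itself arrived in 13.4
    }
    // On anything older, the classic wheel picker is the only behavior anyway.
}
```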

Stay on (touch) target!

Web and PC applications don’t pay a huge amount of attention to how large the buttons and other UI elements are, because mice are pretty precise pointing devices. However, once you move to the mobile arena, you’re trading in your mouse for the human finger, which is a much blunter pointer. Combine that with use in a moving vehicle, and a tiny touch target can become unusable.

As part of testing, you need to assess how easy it is to select UI elements, and also how easily you can fat-finger, say, the dire delete-all button located right next to the paste button. Some buttons may be so painful to trigger accidentally that the developers may need to add countdowns or confirmations to avoid disaster. 

Largely, this should be hashed out between your user interface, user experience, and dev teams, but it’s the testing team’s job to make sure that nothing slips through the cracks. Most mobile OS makers publish UI guidelines that specify things like the minimum size touch targets should occupy, and it’s a good idea to follow them. As a side bonus, targets that are large enough tend to pass accessibility testing as well, although there are a slew of other hurdles to clear on that front.
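Apple’s guidelines, for example, recommend touch targets of at least 44x44 points. When the visual design can’t grow, one common fix is to expand only the tappable area, roughly like this sketch:

```swift
import UIKit

// A sketch of expanding a control's tappable area (without changing its visual
// size) so it meets the 44x44 point minimum that Apple's guidelines suggest.
class FatFingerFriendlyButton: UIButton {
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        let minimumSide: CGFloat = 44
        let dx = max(0, (minimumSide - bounds.width) / 2)
        let dy = max(0, (minimumSide - bounds.height) / 2)
        // insetBy with negative insets grows the rect outward.
        return bounds.insetBy(dx: -dx, dy: -dy).contains(point)
    }
}
```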

Ship It!

The good news is that if you adapt to the unique challenges of mobile application testing, you can get a reward that is rare in the software world. One day, you might be on a subway, and notice that the person next to you is using and enjoying an app you helped shepherd out the door. Of course, if you don’t rise to the challenge, they might be cursing it instead, so test well!


by James Turner

James Turner is a developer with over 40 years of experience spanning technologies from LISP Machines and Z80 assembly language to Docker, Swift, and Java. He is the author of two books on Java development and one on iOS enterprise development. He lives in Southern New Hampshire along with his wife and son, and is currently developing mobile applications for a Fortune 50 company.