In his mid-1990s book The Road Ahead, Bill Gates famously wrote that “We always overestimate the change that will occur in the next two years, and underestimate the change that will occur in the next ten.” This has certainly proven true of the Internet of Things (IoT). Even though we now live squarely in an age of IoT ubiquity, relatively few people have yet come to appreciate its full impact, and many manufacturers continue to grapple with the challenges of IoT device and system testing.
IoT continues to accelerate across virtually all industries. As markets become more dynamic and new technologies emerge, more companies are developing plans to build IoT products and expand their interconnected ecosystems. Indeed, IoT is spreading rapidly throughout the world, with major initiatives from huge enterprises such as GE, Johnson & Johnson, Microsoft, and Tata, alongside a host of next-gen startups. To manage complex architectures and massive volumes and varieties of data, these companies must develop and execute solid IoT testing strategies that ensure the reliability of the devices they manufacture.
The Internet of Things is coming to dominate the world—both in concept and in practice—by altering the way that people work together. There is now a rapid evolution of IoT ideas that are affecting day-to-day activities and combining to form elaborate digital ecosystems.
The IoT consists of three key elements: things, communication, and computing.
The primary goal of IoT is to help device users accomplish tasks and make better-informed decisions. We now live in a reality in which IoT supports smart cities, smart metering, smart entertainment, smart security, smart utility management, smart safety measures, and smart retail.
Connectable IoT devices, together with their apps, pose daunting interoperability challenges for both manufacturers and consumers. Testing such devices, whether refrigerators that can order food from the supermarket or highly automated cars that provide advanced convenience and safety features, has become perhaps the most serious challenge for device manufacturers and systems integrators.
Unquestionably, effective testing is critical to product success. What is the best approach? Let’s have a quick look at some of the prominent considerations for IoT quality assurance teams. We also mention some tips for addressing those challenges.
Every IoT device combines hardware and software, and the two must work together to integrate with other IoT devices. There are many types of device hardware and software, compounded by various firmware and operating system versions. For most manufacturers, this makes it entirely impractical to test all possible combinations.
A reliable approach is to define the most sensible subset of combinations that a team can effectively test. It’s also vital to continue collecting information from customers to understand which hardware and software versions are in mainstream usage.
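One common way to define such a subset is pairwise (all-pairs) testing: choose configurations so that every pair of values across any two factors appears in at least one test, which typically shrinks the suite dramatically versus exhaustive enumeration. Below is a minimal greedy sketch; the factor names and values are hypothetical.

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy all-pairs (pairwise) test-set reduction.

    parameters: dict mapping factor name -> list of values.
    Returns a list of test configurations chosen so that every pair of
    values across any two factors appears in at least one test.
    """
    names = list(parameters)
    # Every value pair that must be covered at least once.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((i, va), (j, vb)))

    tests = []
    while uncovered:
        # Pick the full combination covering the most uncovered pairs.
        best, best_pairs = None, set()
        for combo in product(*(parameters[n] for n in names)):
            covered = {((i, combo[i]), (j, combo[j]))
                       for i, j in combinations(range(len(names)), 2)}
            covered &= uncovered
            if len(covered) > len(best_pairs):
                best, best_pairs = combo, covered
        uncovered -= best_pairs
        tests.append(dict(zip(names, best)))
    return tests

# Hypothetical factors for an IoT device matrix.
configs = all_pairs({
    "firmware": ["1.0", "1.1", "2.0"],
    "os": ["Android", "iOS"],
    "hub": ["Zigbee", "Z-Wave"],
})
# Far fewer tests than the 3 * 2 * 2 = 12 exhaustive combinations.
```

The greedy search is brute-force and suits small matrices; dedicated pairwise tools use smarter construction, but the coverage guarantee is the same.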
There are many different IoT communication protocols that link devices with controllers, and that link one device directly with another. Common choices include the Extensible Messaging and Presence Protocol (XMPP), Message Queuing Telemetry Transport (MQTT), and the Constrained Application Protocol (CoAP). Each has its own specialization, and each carries advantages as well as disadvantages. MQTT is the most popular since it performs well in high-latency, low-bandwidth environments. Typically, it’s necessary to place an API layer atop the transport protocol so that software can interact with the device, and testers can work through these protocols and APIs for manual or automated testing. Of course, testing tools must be configurable for these protocols and APIs.
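For example, MQTT routes messages by hierarchical topics, and subscriptions may use the '+' (single-level) and '#' (multi-level) wildcards defined by the MQTT specification. A toy matcher like the one below lets a tester reason about which device messages a given subscription will receive; it is an illustrative helper, not part of any particular client library.

```python
def topic_matches(filter_, topic):
    """Check an MQTT topic against a subscription filter.

    Implements the two MQTT wildcards: '+' matches exactly one
    topic level, '#' matches any number of trailing levels.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# Hypothetical smart-home topics.
assert topic_matches("home/+/temperature", "home/kitchen/temperature")
assert topic_matches("home/#", "home/kitchen/humidity")
assert not topic_matches("home/+/temperature", "home/kitchen/humidity")
```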
It is a source of great concern to many InfoSec professionals that about 70% of IoT devices remain significantly vulnerable to security issues. It is therefore critical that IoT test case design should focus on detecting any security holes. A key area for device testers is verification of the password policy to ensure that minimum requirements are built into the device—and that they are rigorously enforced. At a minimum, a password change should be mandatory on initial user access, and any test automation efforts should take this into account.
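A sketch of how such checks might be automated, using a toy DeviceSession stand-in for a device's login API; the policy thresholds, default-password list, and method names here are assumptions, not any particular vendor's interface.

```python
import re

# Illustrative minimum policy; real requirements come from the device spec.
MIN_LENGTH = 10
DEFAULT_PASSWORDS = {"admin", "password", "12345678", "default"}

def password_accepted(password):
    """Return True if the candidate password meets the minimum policy."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and password.lower() not in DEFAULT_PASSWORDS
    )

class DeviceSession:
    """Toy stand-in for a device login API, used to exercise the policy."""
    def __init__(self):
        self.password = "admin"      # factory default
        self.must_change = True      # forced change on first access

    def login(self, password):
        if password != self.password:
            return "denied"
        return "change-required" if self.must_change else "ok"

    def change_password(self, new_password):
        if not password_accepted(new_password):
            return False
        self.password, self.must_change = new_password, False
        return True

session = DeviceSession()
assert session.login("admin") == "change-required"   # default must be rotated
assert not session.change_password("short1A")        # weak choice rejected
assert session.change_password("Correct1HorseBattery")
assert session.login("Correct1HorseBattery") == "ok"
```

The same assertions slot naturally into an automated suite, so the forced-change-on-first-access requirement is verified on every build rather than once at release.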
The ever-growing diversity of IoT devices, applications, APIs, and protocols requires that any testing team strive for robust, incisive testing capabilities. Testers need clear guidance supported by a strong test strategy, a good understanding of the architecture, and assurance that the systems under test are always configured with the correct versions. If a system depends on third-party services, tests are liable to fail when those services change; well-built automated tests in a CI/CD pipeline will detect this very quickly.
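One lightweight way to catch such third-party drift early is a response-contract check in the pipeline: assert that the fields your system depends on are still present and correctly typed. The schema and field names below are hypothetical.

```python
# Fields our system depends on from a (hypothetical) third-party device API.
EXPECTED_SCHEMA = {
    "device_id": str,
    "firmware_version": str,
    "battery_pct": int,
    "online": bool,
}

def violations(payload):
    """Return a list of contract violations in a service response."""
    problems = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

good = {"device_id": "d-42", "firmware_version": "2.1.0",
        "battery_pct": 87, "online": True}
bad = {"device_id": "d-42", "battery_pct": "87", "online": True}  # drifted

assert violations(good) == []
assert violations(bad) == ["missing field: firmware_version",
                           "wrong type for battery_pct: str"]
```

Run against a live staging response on every pipeline execution, a check like this fails fast with a precise message instead of letting the drift surface as an obscure downstream test failure.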
Though it might surprise some of our readers, a major segment of the IoT market is found in the highly connectable smart automobile. The automotive sector now offers an impressive array of connected-car apps and infotainment services. This has brought about a new way of life for many, in which passengers can safely commute while interacting with the world around them in a variety of ways.
Automotive equipment manufacturers continue to face the challenge of providing audio video navigation (AVN) systems that delight their customers. Vehicle occupants now expect to interact with new AVN/infotainment systems that are high-quality, feature-rich, and intuitive. Also, for any new AVN system to become viable in the marketplace, it must easily integrate with mobile device features and provide seamless connectivity.
Owing to its extensive and elaborate functionality, a modern AVN infotainment system is now the most complex system in the vehicle. Its architecture encompasses a wide range of I/O types: a modern infotainment system must support multiple buses, radio frequencies, and wireless connectivity. The AVN must also process huge data volumes and expose several human-machine interfaces such as touch screens, multiple user displays, audio I/O, and various switch interfaces.
Faced with all of this complexity, together with incessant product innovation and the risk of product defects, automotive manufacturers and equipment suppliers have come to two critical realizations:
Achieving thorough, proper testing of an AVN system requires that the team find a way to mitigate the most difficult challenges upfront. Otherwise, the team will face serious trouble downstream in the product development pipeline. Considering the technical complexities of an automotive AVN system, these are the top concerns of AVN suppliers and integrators:
Though it is an extremely daunting challenge to address all of these concerns successfully, an automotive AVN testing solution must take these and other considerations into account. The Functionize testing platform is designed with these and many other platform challenges in mind. In fact, we are executing a very successful automated AVN testing program with a major automobile manufacturer: Functionize is testing the center stack infotainment system for many of their automobiles. This is a significant, yet successful, departure from the testing done for our other customers. For the AVN project there is no DOM; it is a completely browserless, fully customized template-recognition solution.
An area of particular concern to human testers is the visual inspection and verification of display screen feedback. For example, when a driver or passenger selects a different language that is verbose and requires additional space for legible display, most automation platforms fail to handle this correctly. The additional text may overlap another control or button and make it difficult to read properly.
Using enhanced computer vision algorithms, Functionize captures screenshots at every test-case step for analysis by machine learning algorithms. Managed by a highly autonomous apparatus, many test cases repeat and permute hundreds of times, even after display elements have been relocated or altered. This is made possible through a proprietary AI subsystem known as Adaptive Event Analysis, a breakthrough technology that clearly distinguishes pages and page elements that are new or have been relocated during the most recent development cycle.
Template recognition is at the heart of the Functionize testing automation of the center stack in automobiles. This entails automatic identification of all interactive graphical elements in a software user interface—including icons, images, text, backgrounds, colors, and any other UI element. As each element is recognized, Functionize trains its machine learning models to identify and track all UI changes with each subsequent product lifecycle iteration.
Far better than what is possible with manual testing, the template recognition engine enables rapid detection of even the slightest variation or anomaly. With each successive UI test run, the Functionize ML apparatus ingests all new UI data and automatically improves its understanding of the UI specification. This automatic ML model-training has proven to fully capture all dynamically changing content, icon repositioning or alteration, textual changes, and various additional modifications.
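As a toy illustration of the underlying idea, consider tolerance-based screenshot diffing: flag only the pixels that changed beyond a threshold between two test runs. This is a simplification for exposition; Functionize's actual template recognition operates on recognized UI elements rather than raw pixels.

```python
def changed_regions(baseline, candidate, tolerance=0):
    """Compare two same-sized grayscale screenshots (lists of pixel rows)
    and return the (x, y) coordinates of pixels that differ beyond the
    tolerance. Small rendering noise within the tolerance is ignored."""
    diffs = []
    for y, (row_a, row_b) in enumerate(zip(baseline, candidate)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > tolerance:
                diffs.append((x, y))
    return diffs

# Tiny 3x3 hypothetical screenshots: one pixel of noise, one real change.
baseline = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
candidate = [[0, 0, 0], [0, 250, 0], [0, 0, 80]]
assert changed_regions(baseline, candidate, tolerance=10) == [(2, 2)]
```

A real pipeline would then map the flagged coordinates back to UI elements (icons, labels, buttons) to report which control moved or changed, rather than emitting raw pixel positions.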
A vehicle may have multiple user interfaces, such as the human-machine interface (HMI) device for input and display located on the center console. Another common HMI is the instrument cluster display, controllable by steering wheel switches. Commonly, there is also an HMI to control features in the rear of the vehicle. System control design must implement the command hierarchy throughout the vehicle and at each HMI, and it should easily resolve any contradictory instructions or inputs. To test such a system, the test designs must cover all combinations of each normal use case, and the test architecture must also cover undefined or inconsistent inputs. Permutations must be run with respect to timing, order, and cadence to ensure system resilience. The testing system must also track all I/O and handle pass/fail resolution, and each exception or failure must capture all possible details so that the tester can readily determine the state of each participating device.
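A sketch of how such timing and ordering permutations might be generated for automation; the event names and delay values are hypothetical, and in practice each sequence would be injected through the vehicle test harness rather than listed as strings.

```python
from itertools import permutations

# Hypothetical inputs arriving from three different HMIs.
events = ["center_console_tap", "steering_wheel_next", "rear_hmi_volume_up"]

def sequences_with_timing(events, delays_ms=(0, 50, 500)):
    """Yield every ordering of the events, each replayed at several
    inter-event delays, to probe timing-, order-, and cadence-dependent
    failures in the command hierarchy."""
    for order in permutations(events):
        for delay in delays_ms:
            yield order, delay

cases = list(sequences_with_timing(events))
assert len(cases) == 6 * 3     # 3! orderings x 3 delay settings
```

Each generated (order, delay) pair becomes one test case; the harness replays the events at the given cadence and checks that contradictory inputs resolve per the command hierarchy.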
The Functionize testing platform can readily accommodate extensively complex systems such as automotive AVN systems, which entail a wide variety of new interfaces, pattern generation from test primitives, support for concurrent testing, image capture, and environment simulation. With Functionize, users can develop function-level or feature-level test cases and automate test execution far more rapidly than was previously possible. Extensive stress, stability, and endurance tests can be built from primitives to expose any remaining defects or inefficiencies, and this can be done much earlier in the development cycle, well before in-vehicle testing.
Here’s something to think about: How does your testing automation solution handle all of this? Does it have any capability to test IVI or AVN systems—or systems of similarly high complexity? Functionize has been built from the ground up to accommodate all of this complexity, and so much more.
As we wrap up this article, we leave you to consider these Functionize benefits:
For more information on how Functionize is developing expertise in automotive system testing, we invite you to read our recent article that showcases Functionize test automation for automobile infotainment systems.