Automation, or the process of getting a machine or computer to perform its tasks without human intervention, has revolutionized the manufacturing industry over the past 50 years. Automated processes control the manufacture of computer chips; pick-and-place machines assemble circuit boards with a precision no human can achieve; and robots assemble cars so fast that some plants can produce a new car every 2 minutes.
Over the past decade, automation has become more widespread. Robotic Process Automation is now routinely used by big business to improve the efficiency of routine and repetitive tasks. Test automation has grown into an industry in its own right, allowing testing to be done at scales that would simply not be possible without it.
In many cases this automation is driven purely mechanically, for instance using a cleverly designed sequence of movements to allow a robot to pick up a part, rotate it into place, and fasten it. More recently, some level of intelligence has been added, for instance allowing pick-and-place machines to “see” the parts they are picking so they can rotate them and ensure they are positioned correctly. In this blog, we will look at the different levels of automation, both in the wider industry and in test automation.
What is automation?
The Merriam-Webster dictionary defines automation as either “the technique of making an apparatus, a process, or a system operate automatically” or “automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor.” At heart, both definitions capture the idea that automation means getting a machine to take over a task from a human. Usually, this is to make the task faster and more efficient, but in some cases it may be because the task is hazardous (e.g. removing contaminated nuclear waste for processing) or because human labor is relatively more expensive.
In the test industry, automation is about getting computers to perform QA testing automatically. This can range from a dumb Selenium script testing the front end of a website, through testing a web app across multiple browser types, up to systems that generate tests automatically using machine learning. Here, the primary aim is to increase the number and range of tests that can be performed. This isn’t about removing humans from testing completely; it is about making sure as much of the repetitive testing as possible is done automatically, especially regression testing.
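At the simplest end of that spectrum, a scripted check just loads a page, finds an element, and compares what it sees against an expectation. The sketch below mimics that flow in plain Python against a stubbed page (so it runs without a browser); in a real Selenium script, the load and find steps would be calls like `driver.get()` and `driver.find_element()`, and the page contents here are made up for illustration.

```python
# A toy front-end check: load a "page", locate an element, verify its text.
# The page is stubbed as a dict so the example runs without a browser;
# with Selenium, load/find would be driver.get() and driver.find_element().

def load_page(url):
    # Stub: pretend we navigated to the URL and got back a rendered DOM.
    return {"h1": "Welcome", "login-button": "Log in"}

def check_element_text(page, element_id, expected):
    """Return (passed, actual_text) for a single element assertion."""
    actual = page.get(element_id)
    return actual == expected, actual

page = load_page("https://example.test/")
ok, seen = check_element_text(page, "h1", "Welcome")
print("PASS" if ok else f"FAIL: saw {seen!r}")
```

The point is that every decision — which page, which element, which expected value — was made by the human who wrote the script; the machine only executes it, which is why this sits at the bottom of the automation scales discussed below.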
Models for automation
A large number of researchers have tried to define models to assess the level of automation a given process or system achieves. Here we will look at a couple of the more interesting ones. Generally, these models either consider things from the perspective of the human who is controlling the process or from the perspective of what the machine is actually doing.
One of the most famous machine-centered models is the Society of Automotive Engineers’ taxonomy for the levels of automation in self-driving vehicles. It defines six levels of automation, counting level 0, which is no automation:
- Level 1, Driver Assistance. Here the vehicle can only assist with things like cruise control and lane keeping.
- Level 2, Partial Automation. Here the vehicle takes over more elements of steering and acceleration/braking, but only in tightly controlled circumstances, for instance automated lane changing or self-parking. Tesla’s Autopilot is officially classified at this level.
- Level 3, Conditional Automation. Here the vehicle will not only control steering and acceleration/braking, it will also monitor the environment and warn the driver if they need to take back control.
- Level 4, High Automation. Here the vehicle performs most of the driving tasks itself, so long as it stays within defined use cases like driving between two known locations. The human driver takes no active role at all but is still able to take back control.
- Level 5, Full Automation. At this level, the vehicle performs all driving operations completely autonomously. There is now no human driver and, indeed, there are no human-operated controls in the vehicle.
In their 2000 paper, A Model for Types and Levels of Human Interaction with Automation, Parasuraman, Sheridan, and Wickens identified the four major tasks that need to be completed in order to take any action: information acquisition, or sensing; information analysis; decision and action selection; and action implementation. Endsley and Kaber present a related model based on whether a human or a computer performs each of these four tasks.
Similarly, in his 1980 paper, Computer Control and Human Alienation, Sheridan set out 10 levels of automation when it comes to decision making:
- Human considers alternatives, makes and implements decision.
- Computer offers a set of alternatives which human may ignore in making decision.
- Computer offers a restricted set of alternatives, and human decides which to implement.
- Computer offers a restricted set of alternatives and suggests one, but human still makes and implements final decision.
- Computer offers a restricted set of alternatives and suggests one, which it will implement if the human approves.
- Computer makes decision but gives human option to veto prior to implementation.
- Computer makes and implements decision, but must inform human after the fact.
- Computer makes and implements decision, and informs human only if asked to.
- Computer makes and implements decision, and informs human only if it feels this is warranted.
- Computer makes and implements decision if it feels it should, and informs human only if it feels this is warranted.
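The progression in Sheridan’s scale can be made concrete by encoding, for each level, who selects the action and who carries it out. The cut-off points below are our own reading of the scale, offered purely as an illustration, not something from Sheridan’s paper:

```python
# Rough encoding of where decision authority sits at each of Sheridan's
# 10 levels (our own reading of the scale, for illustration only).

def authority(level):
    """Return (who_decides, who_implements) for a Sheridan level 1-10."""
    if not 1 <= level <= 10:
        raise ValueError("Sheridan levels run from 1 to 10")
    # Up to level 5, the human still makes (or approves) the decision;
    # from level 6 the computer decides, with shrinking human oversight.
    who_decides = "human" if level <= 5 else "computer"
    # Up to level 4 the human also implements the chosen action; from
    # level 5 (computer acts on human approval) the computer implements.
    who_implements = "human" if level <= 4 else "computer"
    return who_decides, who_implements

for lvl in (1, 5, 10):
    print(lvl, authority(lvl))
```

Reading the scale this way makes the pattern explicit: implementation shifts to the machine one step before decision-making does, and the final levels differ only in how much the computer tells the human afterwards.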
What all these models have in common is the idea that the process of automation involves making and acting on decisions.
A model for automation in testing
Our friends at Applitools have adapted the SAE taxonomy to describe the levels of automation in testing:
- Assistance. Here the human writes the test and has to maintain it to reflect any changes. AI is used only to perform basic verification steps and to assist with visual checks on the frontend.
- Partial Automation. Here a human still has to write the tests and monitor changes, but the AI is able to assist with verifying changes.
- Conditional Automation. Here the human writes the test, but the AI verifies each change and makes any updates needed.
- High Automation. Here the AI takes on the role of writing the test, but the human guides the process and defines what the test should be doing.
- Full Automation. Here the AI is completely responsible for writing and maintaining the tests without any human guidance.
Applying this model to Functionize
Let’s look at how Functionize’s autonomous tools fit into this new model.
- Assistance. A core part of Functionize’s tests is the screenshots taken on each page, highlighting any changes from the expected test outcome. This is a typical level 1 automation function, since it still needs human input at all stages.
- Partial Automation. Functionize’s Autonomous Event Analysis, or AEA™, engine includes Root Cause Analysis which is able to assist you with finding the likely change that caused a test to fail. This is a level 2 function, since the AI is now performing detailed analysis. However, a human is still making all decisions.
- Conditional Automation. Part of the AEA™ engine is the one-click update function, where the AI works out the most likely updates that are needed, tests them to see which works best and presents this to you. This is level 3 automation since the AI is now just asking the human to verify its choice of update.
- High Automation. One of our coolest new features is NLP (natural language processing) test generation. Here you simply provide a test plan written in plain English, and the AI converts it into a full test script, ready to run on our test cloud. Another feature is self-healing tests, where tests autonomously cope with changes in UI design that lead to CSS selector or XPath changes. Both of these are level 4 automation, since the AI is now making decisions for itself based on previous guidance from a human.
- Full Automation. The latest innovation from Functionize is our self-defined tests. By simply tagging your code, you can instruct our system to log all customer journeys through your frontend. These are then used to create new test cases and to identify any missing test coverage. This is about as close to level 5 full automation as it is currently possible to get, and it is the basis for the automated canary testing approach, which our CEO, Tamas Cser, presented at UCAAT this month.
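The self-healing idea mentioned under High Automation can be illustrated with a simplified fallback-locator sketch. To be clear, this is not Functionize’s actual mechanism (which builds learned models of each element); it is just a minimal, hypothetical illustration of the principle: keep several ways of identifying an element, and when the primary selector breaks, fall back to one that still works and promote it for future runs.

```python
# Minimal illustration of the self-healing principle: try each known
# locator for an element in turn, and promote whichever one still works.
# (Not Functionize's actual mechanism, which learns a model of each element.)

def find_with_healing(dom, locators):
    """dom: a stub mapping selector -> element; locators: ordered candidates.
    Returns (element, healed_locators) with the working selector promoted."""
    for selector in locators:
        element = dom.get(selector)
        if element is not None:
            healed = [selector] + [s for s in locators if s != selector]
            return element, healed
    raise LookupError(f"no locator matched: {locators}")

# The button's CSS class changed in a redesign, but its data-test
# attribute still matches, so the test heals instead of failing.
dom = {"[data-test=checkout]": "<button>Checkout</button>"}
locators = ["button.checkout-btn", "[data-test=checkout]"]
element, locators = find_with_healing(dom, locators)
print(element, locators[0])
```

In this toy version the AI’s “decision” is just a fallback search, but it captures why this counts as level 4: the machine repairs the test itself, guided only by redundancy the human set up earlier.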
So, according to this model, Functionize is already close to achieving full automation. However, this model is a little narrow and fails to consider some elements of testing automation. Test automation isn’t just about writing and maintaining the tests; it is also about how the tests are run and how the results are analyzed. Hence, we are not resting on our laurels just yet! One example where we’re pushing the boundaries even further is our autonomous template recognition, which can create tests that work without a DOM or browser. Another is our scenario testing, where a bot visits a URL and uses a predefined scenario (for instance, purchase an item and pay for it) to generate test cases to run on that site.
There are numerous models for classifying levels of automation. In this blog, we looked at just a small selection of these. As we saw, these models can look at automation either from the human viewpoint or from the machine viewpoint. One of the most widely cited models is the SAE taxonomy for self-driving vehicles. We saw how our friends at Applitools have adapted this model for test automation. According to their model, Functionize is already very close to reaching full automation. However, we feel that there are still improvements to be made, and our engineers are constantly striving to improve our software and advance the state of the art in test automation!