Artificial intelligence has existed as an idea for a long time. Often, it was pure fiction, such as Mary Shelley’s Frankenstein or the human-like machines in Samuel Butler’s Erewhon. But by 1950, Alan Turing, one of the fathers of modern computing, was writing serious research papers on the topic, famously coining his “imitation game”, now known as the Turing test, in a paper published that year. However, it is only recently that computers have become powerful enough to display any form of intelligence.
Some of the earliest examples were artificial neural networks programmed to recognize handwritten letters and numbers. These were soon joined by early speech-to-text systems. However, all of these systems were extremely limited, falling into the category of narrow artificial intelligence at best. That began to change with the advent of the cloud.
What exactly is AI?
AI is a catchall term for a number of technologies that allow computers to behave intelligently. These include machine learning, computer vision, natural language processing (NLP), and deep learning. Let’s look at each in turn:
Machine learning (ML) is the process of teaching a computer to recognize patterns in data or perform a task without explicitly programming it. Classic tasks include classification (deciding which category a piece of data belongs to) and forecasting (predicting a new value based on historical values). The most common form of ML is supervised learning, where you use a large volume of labeled data to teach the computer what patterns to look for. Unsupervised learning lets the computer identify interesting patterns in unlabeled data, for instance by spotting clusters or outliers. Reinforcement learning trains the computer by “rewarding” it each time it makes a correct decision.
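The supervised approach is easiest to see in code. Below is a minimal, purely illustrative sketch in Python: a 1-nearest-neighbour classifier whose training data, feature values, and labels are all invented for this example. Real systems use far larger datasets and more sophisticated models.

```python
# A toy illustration of supervised learning: a 1-nearest-neighbour
# classifier that "learns" from labelled examples. The data and labels
# below are invented purely for illustration.

def nearest_neighbour_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: distance(example[0], point))
    return closest[1]

# Labelled data: (features, label) pairs, e.g. (height_cm, weight_kg).
training_data = [
    ((150, 50), "small"),
    ((160, 60), "small"),
    ((185, 90), "large"),
    ((190, 100), "large"),
]

print(nearest_neighbour_predict(training_data, (158, 55)))  # → small
print(nearest_neighbour_predict(training_data, (188, 95)))  # → large
```

The “learning” here is trivial, because the model simply memorizes the labeled examples, but it captures the core idea: labeled data teaches the computer which category a new point belongs to.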
Computer vision allows computers to analyze still and moving images in order to identify what they show. This involves several tasks. Feature extraction identifies edges and regions in the image. Image segmentation then works out which parts of the image belong together. Image recognition identifies the objects shown, for instance, pictures of cats. Finally, the computer tries to work out the relationships between all the objects. This last part is vital for self-driving vehicles, which need to differentiate between a person standing on the pavement and one about to cross the road.
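To make the first step, feature extraction, concrete, here is a toy sketch in Python: it finds edges by looking for sharp brightness changes between neighbouring pixels in a tiny made-up grayscale image. Real computer vision systems use convolutional filters over millions of pixels, but the principle is the same.

```python
# A toy version of feature extraction: find edges by detecting sharp
# changes in brightness between horizontally adjacent pixels.
# The "image" is a tiny invented grid of brightness values (0 = dark, 9 = bright).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img, threshold=5):
    """Return (row, col) positions where brightness jumps sharply."""
    edges = []
    for r, row in enumerate(img):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

print(horizontal_edges(image))  # → [(0, 1), (1, 1), (2, 1)]
```

The detector correctly finds the vertical boundary between the dark and bright halves of the image, which is exactly the kind of low-level feature later stages like segmentation build on.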
Natural Language Processing
Natural language processing (NLP) involves teaching a computer to parse and understand human language. If the input is speech, the computer first uses speech-to-text to convert it into text. It then analyzes the grammatical structure of each sentence and tries to extract the meaning. This is particularly hard because humans rely on idioms, analogies, metaphors, exaggeration, and so on.
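A drastically simplified sketch of this pipeline appears below: the text is split into tokens, then simple keyword rules guess the speaker’s intent. Real NLP systems use statistical models rather than hand-written rules; the intent labels and rules here are invented for illustration only.

```python
# A toy sketch of an NLP pipeline: tokenize a sentence, then use
# naive keyword rules (invented for illustration) to guess intent.

def tokenize(sentence):
    """Lowercase the sentence and split it into word tokens."""
    return sentence.lower().replace("?", "").replace("!", "").split()

def guess_intent(tokens):
    """Map keywords to a coarse intent label."""
    if "weather" in tokens:
        return "weather_query"
    if "play" in tokens:
        return "play_media"
    return "unknown"

tokens = tokenize("What is the weather like today?")
print(tokens)               # → ['what', 'is', 'the', 'weather', 'like', 'today']
print(guess_intent(tokens)) # → weather_query
```

Keyword matching like this breaks down the moment someone uses an idiom or a metaphor, which is precisely why modern NLP relies on learned models instead of rules.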
Deep learning involves the use of a deep neural network. These artificial neural networks are loosely modeled on how the human brain works and are built from huge numbers of interconnected artificial neurons. There isn’t room here to explain them in detail, but each neuron combines several weighted inputs into a single output. Deep learning is often seen as a step toward so-called artificial general intelligence. All the AI techniques listed above are forms of narrow AI: they can tackle only one task and must be trained for that specific task. A general AI would be able to teach itself new tasks, just as we humans can.
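A single artificial neuron is simple enough to show in full. The sketch below uses invented weights: the neuron multiplies each input by a weight, adds a bias, and squashes the result with a sigmoid activation function. A deep network is, in essence, millions of these wired together in layers.

```python
import math

# A single artificial neuron, the building block of a deep neural
# network. The input values, weights, and bias below are invented
# purely for illustration.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes output into (0, 1)

output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.535
```

Training a network means nudging all of those weights, across every neuron, until the network’s outputs match the labeled examples, which is where the enormous computing demands come from.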
AI, a child of the cloud
What all of these forms of AI have in common is an enormous appetite for computing power. And the source of that power? Typically, the cloud. Let’s look at three examples that could only happen thanks to cloud computing.
- AlphaGo: AlphaGo, created by Google’s DeepMind, is a hugely capable deep learning system built on top of a huge, deep neural network. It is a perfect example of how deep learning can teach itself. In 2014, DeepMind began training it to play the board game Go. By 2017, it was capable of beating the top players in the world. It became that good by playing repeatedly against itself and against human players, learning the strategies needed to win. This was only possible because AlphaGo could draw on the power of thousands of computers and hundreds of GPUs connected into one huge virtual machine.
- Virtual voice assistants: Many households now have at least one virtual voice assistant, be it an Amazon Echo device with Alexa, an Apple device with Siri, or an Android device with Google Assistant. So, how do these work? In nearly all cases, the device itself does only minimal processing, hence the relatively low price of devices like the Echo Dot. The device typically listens for a “wake” word. When it hears this, it starts recording your voice and sends the recording to a cloud service for processing. That cloud service uses a combination of speech-to-text and natural language processing to interpret what was said, then decides on an appropriate response and sends it back to the device. This simply wouldn’t be possible without the cloud.
- Self-driving cars: Much has been said about the advent of self-driving cars and other autonomous vehicles. All of these rely heavily on computer vision coupled with other sensors, such as LiDAR. Together, these allow the vehicle to “see” its surroundings and make intelligent decisions about the correct action to take in any given circumstance. Of course, a self-driving vehicle cannot rely on a permanent Internet connection, so it can’t depend on the cloud while driving. Instead, it uses extremely powerful embedded computers, such as Nvidia’s Drive series of products, to run the many machine learning models that analyze sensor data and make decisions. However, those models first need to be trained, and that is only possible with the power of the cloud: training requires huge volumes of input data, along with computing and storage resources that are only available in the cloud.
Virtual voice assistants like Alexa live in the cloud, not on the edge device
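The wake-word flow described above can be sketched as a toy Python loop. Everything here is hypothetical: `cloud_nlp_service` is a stand-in for the real cloud back end (which would run speech-to-text and NLP on recorded audio), and the “audio stream” is just a list of strings.

```python
# A toy sketch of the voice-assistant wake-word flow. All names and
# responses are invented; a real device streams audio to a cloud API.

WAKE_WORD = "alexa"

def cloud_nlp_service(utterance):
    """Pretend cloud back end: interpret the text and pick a response."""
    if "time" in utterance:
        return "It is 9 o'clock."
    return "Sorry, I don't understand."

def device_loop(audio_stream):
    """Minimal edge device: wait for the wake word, then defer to the cloud."""
    responses = []
    for utterance in audio_stream:
        words = utterance.lower().split()
        if words and words[0] == WAKE_WORD:
            # Strip the wake word and send the rest to the "cloud".
            request = " ".join(words[1:])
            responses.append(cloud_nlp_service(request))
    return responses

print(device_loop(["hello there", "alexa what time is it"]))
# → ["It is 9 o'clock."]
```

Note how little the “device” does: everything before the wake word is ignored, and all the hard work of interpretation happens in the stand-in cloud function, mirroring the split between a cheap Echo Dot and Amazon’s servers.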
Leveraging the cloud for AI-powered test automation
Here at Functionize, we have long seen AI as essential for modern test automation. With AI, we can create self-healing tests. In turn, that has allowed us to banish test debt, a debilitating condition that leaves QA teams unable to deliver. It also powers Architect, our smart test recorder. This opens up test automation to everyone while still offering power users the ability to test APIs, create custom validations, and leverage complex test data management. Once created, a test can then be executed on any browser, even mobile, without needing to be modified. Those tests can be launched from anywhere in the world, greatly improving localization testing.
But all of this is only possible because we embraced the power of the cloud. Over the years, we have recorded billions of data points relating to tests. Each step of each test you run generates millions of data items, along with before-and-after screenshots. All this data has allowed us to create extremely capable AI models that apply deep learning, computer vision, and NLP, something that simply would not be possible at the scale local execution allows. Being a cloud-first company also ensures we have a huge and growing volume of data on which to build and refine our models. Furthermore, the cloud’s virtualization and emulation capabilities are what allow us to test across so many browsers and geographic locations at such a large scale.
If you want to find out more about how we make use of the cloud, or if you are intrigued to see how this can help your digital transformation, book a demo today.