Artificial intelligence existed as an idea long before it existed in practice. Often it was pure fiction, as in Mary Shelley’s Frankenstein or the human-like machines of Erewhon. But by 1950, Alan Turing, one of the fathers of modern computing, was writing serious research papers on the topic; famously, he proposed his “imitation game”, now known as the Turing test, in a paper published that year. However, it is only recently that computers have become powerful enough to display any form of intelligence.
Some of the earliest examples were artificial neural networks programmed to recognize handwritten letters and numbers. These were soon joined by early systems for speech-to-text conversion. However, these systems were all extremely limited, falling into the category of narrow artificial intelligence at best. That all began to change with the arrival of the cloud.
AI is a catchall term for a number of technologies that allow computers to behave intelligently. These include machine learning, computer vision, natural language processing (NLP), and deep learning. Let’s look at each in turn:
Machine learning (ML) is the process of teaching a computer to recognize patterns in data or perform a task without being explicitly programmed. Classic tasks include categorization (deciding which category a piece of data belongs in) and forecasting (predicting a new value based on its history). The classical approach is supervised learning, where you use a large volume of labeled data to teach the computer what patterns to look for. Unsupervised learning lets the computer find interesting patterns in unlabeled data on its own, for instance spotting clusters or outliers. Reinforcement learning “rewards” the computer each time it makes a correct decision, so it gradually learns which actions lead to good outcomes.
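To make the supervised-learning idea concrete, here is a minimal sketch: a nearest-neighbour classifier that labels a new point by finding the most similar labeled example. The training points, labels, and distance metric are illustrative inventions, not any particular library’s API.

```python
# A toy supervised-learning classifier: label a new point with the label
# of the closest point in the labeled training data.

def classify(point, labeled_data):
    """Return the label of the training point closest to `point`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_data, key=lambda item: distance(item[0], point))
    return nearest[1]

# Labeled examples: (features, category). These values are made up.
training = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.9), "small"),
    ((8.0, 9.0), "large"),
    ((9.1, 8.5), "large"),
]

print(classify((1.1, 1.0), training))  # -> small
print(classify((8.5, 9.2), training))  # -> large
```

Real ML systems learn far richer decision rules from far more data, but the shape is the same: labeled examples in, predictions out.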
Computer vision allows computers to analyze the contents of still and moving images in order to identify what is shown. This involves several tasks. Feature extraction identifies edges and regions in the image. Image segmentation then works out which parts of the image belong together. Image recognition identifies the objects shown, for instance recognizing pictures of cats. Finally, the computer tries to work out the relationships between all the objects. This last step is vital for self-driving vehicles, which need to differentiate between a person standing on the pavement and one about to cross the road.
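The first step, feature extraction, can be sketched very simply: an edge is a place where pixel intensity jumps sharply between neighbours. The tiny 4x4 “image” and the threshold below are made-up values for illustration only.

```python
# A toy feature extractor: find horizontal edges in a grayscale image,
# defined as large intensity jumps between adjacent pixels in a row.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img, threshold=5):
    """Return (row, col) positions where intensity jumps left-to-right."""
    edges = []
    for r, row in enumerate(img):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

print(horizontal_edges(image))  # an edge between columns 1 and 2 in every row
```

Production systems use learned convolutional filters rather than a fixed threshold, but the underlying idea, detecting sharp changes in intensity, is the same.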
Natural language processing (NLP) involves teaching a computer to parse and understand human language. This requires the computer to use speech-to-text to convert the language to text (if it isn’t already). Then it analyzes the grammatical structure of each sentence and tries to extract the meaning. This is particularly hard because humans like to use idioms, analogies, metaphors, and exaggeration.
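One of the earliest steps in that analysis can be sketched as tokenizing a sentence and tagging each word with a part of speech. The tiny hand-made lexicon and tag names below are illustrative assumptions; real NLP systems use rich grammars and statistical models.

```python
# A toy NLP step: split a sentence into word tokens, then tag each word
# using a tiny hand-made lexicon (unknown words get "UNK").
import re

LEXICON = {  # illustrative toy lexicon, not a real tagset
    "the": "DET", "cat": "NOUN", "sat": "VERB", "on": "PREP", "mat": "NOUN",
}

def tag(sentence):
    """Lowercase, tokenize, and look up each word's part of speech."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [(tok, LEXICON.get(tok, "UNK")) for tok in tokens]

print(tag("The cat sat on the mat."))
```

Even this trivial example hints at the difficulty: a fixed lexicon cannot tell whether “sat” in an idiom like “sat on the fence” is literal or figurative, which is exactly where real NLP gets hard.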
Deep learning involves the use of deep neural networks. These artificial neural networks are loosely modeled on how the human brain works and are built from a huge number of interconnected artificial neurons. (A full explanation of how they work is beyond the scope of this article.) Deep learning is a step toward so-called general artificial intelligence. The AI techniques listed above are all narrow AI: they can only tackle one task and must be trained for that specific task. General AIs are able to teach themselves new tasks, just as we humans can.
The key to all of these forms of AI is their need for computing power. And the source of that power? Typically, the cloud. Let’s look at three examples that could only happen thanks to cloud computing.
Here at Functionize, we have long seen AI as essential for modern test automation. With AI, we can create self-healing tests. In turn, that has allowed us to banish test debt, a debilitating condition that leaves QA teams unable to deliver. It also powers Architect, our smart test recorder. This opens up test automation to everyone while still offering power users the ability to test APIs, create custom validations, and leverage complex test data management. Once created, a test can then be executed on any browser, even mobile, without needing to be modified. Those tests can be launched from anywhere in the world, greatly improving localization testing.
But all this is only possible because we embraced the power of the cloud. Over the years, we have recorded billions of data points relating to tests. Each step of each test you run captures millions of items, along with before and after screenshots. All this data has allowed us to create extremely capable AI models that apply deep learning, computer vision, and NLP. That simply would not be possible with local execution; it requires the scale that only the cloud provides. Being a cloud-first company also ensures we have a huge and growing volume of data on which to build and refine our models. Furthermore, the virtualization and emulation capabilities of the cloud are what allow us to test across so many browsers and geographic locations at such a large scale.
If you want to find out more about how we make use of the cloud, or if you are intrigued to see how this can help your digital transformation, book a demo today.