Since the phrase was coined in the 1950s, Artificial Intelligence (AI) has often been associated with dystopian sci-fi films (think HAL in 2001: A Space Odyssey or Skynet in the Terminator series). For years, AI remained largely in the realm of science fiction. However, in recent years, huge advances in computing power have combined with new programming techniques to make AI a reality. In turn, businesses have begun to exploit AI in ever-more exciting ways.
The Encyclopedia Britannica defines Artificial Intelligence as:
“the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”
The key thing that distinguishes AI from other forms of computer program is the ability to reason, learn and adapt. Early versions of AI were limited to solving symbolic representations of problems. For example, in 1996 IBM’s Deep Blue managed to beat Garry Kasparov in a game of chess. However, this was largely a “brute force” win: Deep Blue simply searched through millions of possible future moves to see which would give the best result.
More recently, the focus has shifted to solving (successfully) a number of genuinely difficult problems, including speech recognition, natural language processing, and automated reasoning.
In 2011, a combination of these techniques allowed IBM’s Watson to win at Jeopardy!, demonstrating that computers could learn to reason in a human-like fashion. Comparing Watson’s performance with that of Deep Blue just 15 years earlier shows how far AI has come.
AI relies on a number of programming paradigms, often used together to solve a particular problem. A few of the more important approaches are neural networks, genetic algorithms, and automated reasoning.
These techniques aren’t new. Neural networks were first proposed back in the 1940s, in the very infancy of general-purpose computing. Genetic algorithms date back to the 1950s, and by the 1980s were being exploited commercially. Automated reasoning started in the realm of theoretical computer science as a way to achieve automated theorem proving (which sounds dull but is important).
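To make the neural-network paradigm concrete, here is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function. This toy example illustrates the core idea from the 1940s work mentioned above, namely weights adjusted in response to errors; it is not how any production system works.

```python
import random

def step(x):
    """Activation function: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# Two input weights plus a bias weight, initialised randomly.
random.seed(42)
weights = [random.uniform(-1, 1) for _ in range(3)]

# Training data for logical AND: output is 1 only when both inputs are 1.
training_data = [
    ([0, 0], 0),
    ([0, 1], 0),
    ([1, 0], 0),
    ([1, 1], 1),
]

learning_rate = 0.1
for _ in range(100):                      # repeat over the data many times
    for inputs, target in training_data:
        x = inputs + [1]                  # append a constant bias input
        output = step(sum(w * xi for w, xi in zip(weights, x)))
        error = target - output
        # Nudge each weight in the direction that reduces the error.
        weights = [w + learning_rate * error * xi
                   for w, xi in zip(weights, x)]

# After training, the neuron reproduces the AND truth table.
for inputs, _ in training_data:
    x = inputs + [1]
    print(inputs, "->", step(sum(w * xi for w, xi in zip(weights, x))))
```

Modern deep networks are built from millions of such units, trained with more sophisticated update rules, but the principle of learning by adjusting weights is the same.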
AI has had a noticeable impact in several areas of the consumer technology market. Possibly the two most significant are personal voice assistants and image recognition. Voice assistants like Amazon’s Alexa, Apple’s Siri and Google’s Assistant use a number of AI concepts, including neural networks and machine learning, to transcribe your voice (speech recognition), parse the meaning (natural language processing) and generate a suitable response (reasoning).
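The three-stage structure of that pipeline can be sketched in a few lines. These stub functions are purely illustrative stand-ins for the large neural models that real assistants use at each stage; the function names and rules are invented for this sketch.

```python
def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-recognition model: in this toy,
    # the "audio" is already text.
    return audio.decode("utf-8")

def parse(text: str) -> dict:
    # Stand-in for natural language processing: extract a crude intent.
    if "weather" in text.lower():
        return {"intent": "get_weather"}
    return {"intent": "unknown"}

def respond(intent: dict) -> str:
    # Stand-in for the reasoning/response-generation stage.
    replies = {
        "get_weather": "It looks sunny today.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return replies[intent["intent"]]

print(respond(parse(transcribe(b"What's the weather like?"))))
# -> It looks sunny today.
```

The point is the architecture: each stage solves one hard problem and hands a cleaner representation to the next.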
Facebook and Google have both pioneered image recognition. Facebook uses image recognition to try to identify your friends in images that you post, combining its tagging system with learning from its mistakes. Google allows you to search for images that look similar to ones in its extensive database, using its automated click tracking to learn which results were more accurate and appropriate. Meanwhile, in 2017 Amazon went one better than the Echo and created the Echo Look. Described as a “hands-free camera and style assistant”, this device is powered by an AI image recognition and processing system that is able to analyze what you are wearing and make real-time suggestions for how to improve your wardrobe choices.
Arguably, the biggest impacts of AI are ones that few members of the public will be aware of. From advertising to share trading and fraud detection to automated testing, AI has been transforming the world of business. While it is often just a buzzword, added to projects to give them some pizzazz, AI is genuinely making a big difference to the bottom line. Take one case in point: Google relies on its data centers to power its entire business, and these data centers consume phenomenal amounts of electricity. In 2016, Google announced that its DeepMind AI had been able to reduce the energy used for cooling its data centers by up to 40% compared with the best that human experts had been able to achieve. Given that a single data center can consume 100MW of electricity, that represents a potential saving of millions of dollars, not to mention millions of tons of carbon.
And AI isn’t just limited to big business. Hundreds of startups are also jumping on the bandwagon and coming up with novel ways to use AI to solve real problems. Take Simplaex, a small company that has developed an AI solution to one of the hardest problems for online advertising, namely, how to effectively target users and achieve the holy grail of “app retargeting” (encouraging former users to return to an app after they have abandoned it). Or how about SenseTime, a Chinese startup that recently closed a $600m Series C funding round. Their AI-driven face recognition technology is used for applications as diverse as government surveillance and checkout-less supermarkets.
As with any technology, AI can be both a boon and a bane. One of the biggest challenges in AI is the heavy reliance on machine learning. A recent MIT Media Lab press release describes an AI named Norman (after Norman Bates in Psycho). This AI was trained purely on material from the nastier parts of the Internet, effectively making Norman a psychopath. When shown a set of inkblots, a “normal” AI classifies them as “a close up of a vase of flowers”, “a black and white photo of a small bird” and “a person holding an umbrella in the air”. Norman saw those same inkblots as “a man is shot dead”, “man gets pulled into dough machine” and “man is shot dead in front of his screaming wife.”
One of the key problems is that machine learning algorithms are extremely good at learning but are incapable of filtering the training material used to teach them. Consequently, they pick up the biases of their training data, developers, and trainers. In some cases, they can even be subverted. A widely publicized report showed that an AI system used to assess the risk posed by defendants in US courts was heavily biased against black defendants as a result of the historical records it was learning from. Effectively, this system was entrenching a decades-old bias.
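The mechanism is easy to demonstrate with a deliberately crude example. The data below is invented for illustration and has nothing to do with the actual system described above: a trivial “most common outcome per group” model trained on skewed historical decisions simply reproduces whatever disparity those decisions contained.

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (group, decision). Group "B" was
# flagged "high risk" far more often for otherwise identical cases.
history = (
    [("A", "low")] * 80 + [("A", "high")] * 20 +
    [("B", "low")] * 30 + [("B", "high")] * 70
)

# "Training": learn the most frequent decision for each group.
counts = defaultdict(Counter)
for group, decision in history:
    counts[group][decision] += 1
model = {g: c.most_common(1)[0][0] for g, c in counts.items()}

print(model)  # -> {'A': 'low', 'B': 'high'}: the historical bias is now baked in
```

Real systems are vastly more sophisticated, but the failure mode is the same: a model cannot distinguish a genuine pattern from a prejudiced one in its training data.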
On the flip side of the coin, AI can be used for incredible good. At the end of May this year, a multi-national team from the US, Germany, and France announced that they had created a convolutional neural network able to accurately diagnose 95% of skin cancers, compared with about 87% achieved by expert dermatologists. AI is also being used to improve the assessment of mammograms, allowing increased accuracy and improvements in early detection of breast cancer.
Nowadays every business plan is laced with terms relating to AI, like machine learning, intelligent applications, and computer vision. As a result, AI is in danger of becoming an over-used term. Coupled with that is the problem that many business leaders admit they and their employees lack the skills and understanding to properly leverage AI.
However, there is undoubtedly a big future for AI in both the consumer and business worlds. In a 2017 article, TechCrunch stated that “startups are at war over having the most valuable artificial intelligence and at the core of this war is having unique high-quality visual data.” Fighting this war is driving a new generation of startups to push the boundaries of AI ever further.
For 60 years after the term was coined, Artificial Intelligence was an obscure branch of theoretical computer science and philosophy and a mainstay of dystopian science fiction. But in the past decade, it has emerged to become one of the driving forces behind many of the greatest innovations in computing. Given that the rate of change is accelerating, it’s hard to predict exactly where AI will take us in the future. All that is certain is that it is definitely here to stay, and as such, should be embraced by businesses large and small. At Functionize, we’re equally excited about where AI is taking quality.