Avoiding unintended consequences

 

How AI and automation will change the way we live

Artificial Intelligence is a truly game-changing technology. In time it may prove as disruptive as steam power, mains electricity and pervasive computing. But do we really understand what we’re doing, or the impact AI will have on society?

AI in society

The law of unintended consequences

History is littered with the unintended consequences of our actions and innovations. Take the industrial revolution. The invention of steam power led directly to mechanization and the birth of the factory. One direct consequence of this was a mass migration from the countryside to the cities. In turn, that led to a general decline in public health, with frequent cholera epidemics due to overcrowding and lack of sanitation. Clearly, that was an unintended consequence. But without this degree of overcrowding, would John Snow have been able to identify the water pump at the center of the 1854 cholera outbreak in London? By proving that cholera is spread by contaminated water, Snow undoubtedly saved many lives.

From http://matrix.msu.edu/~johnsnow/images/online_companion/chapter_images/fig12-5.jpg

Map of the water pumps implicated in the 1854 cholera outbreak in London. If you are interested in this story, read The Ghost Map by Steven Johnson (ISBN 1594482691).

Artificial Intelligence in Society

Have we opened Pandora’s Box?

Artificial intelligence is undoubtedly one of the biggest game changers in recent history. But how will we deal with the impact of AI on society?

Artificial intelligence is revolutionizing business and is a truly disruptive technology. AI describes any system in which computers show some degree of intelligence, usually through pattern recognition coupled with decision making based on likely outcomes. Broadly, there are five forms of AI:

  • Machine Learning (ML) is where we teach computers to recognize patterns using large known datasets. This underpins many of our AI applications.
  • Deep Learning is a subset of ML. It uses many-layered artificial neural networks that allow a computer to learn from its own mistakes. This fast-growing area of AI research is what allows computers to win at strategy games like Go.
  • Natural Language Processing (NLP) is the ability of computers to parse and understand natural (i.e. human) language. It is one of the key enablers for virtual assistants such as Alexa, Siri and Google Assistant.
  • Machine Perception allows computers to understand complex data sources and extract their semantic meaning. This covers things like computer vision (e.g. teaching a computer to recognize a cat), video processing (e.g. teaching a computer to track a dancer in a video) and speech-to-text (where a computer transcribes your spoken commands).
  • Generative Adversarial Networks (GANs) are one of the newest applications of AI. They pit two systems against each other: one (the generator) creates data from random noise, while the other (the discriminator) tries to distinguish the generated data from real training data. This approach is the basis for so-called “deepfakes”, where a celebrity appears to say or do something they never did.
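The first item above, teaching a computer to recognize patterns from a labeled dataset, can be sketched in a few lines of plain Python. This is a toy nearest-centroid classifier with invented cat/dog measurements, not any real ML library; it shows the core idea of “training” on known examples and then classifying new ones.

```python
# Toy machine-learning sketch: a nearest-centroid classifier.
# "Training" = averaging the feature vectors for each label;
# "prediction" = picking the label whose centroid is closest.

def train(samples):
    """samples: list of (features, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    def dist(label):
        return sum((c - f) ** 2 for c, f in zip(centroids[label], features))
    return min(centroids, key=dist)

# A small "known dataset": (height_cm, weight_kg) labeled cat or dog.
training_data = [
    ((25, 4), "cat"), ((23, 5), "cat"), ((27, 4), "cat"),
    ((60, 30), "dog"), ((55, 25), "dog"), ((65, 35), "dog"),
]
model = train(training_data)
print(predict(model, (24, 5)))   # small, light animal -> "cat"
print(predict(model, (58, 28)))  # large, heavy animal -> "dog"
```

Real ML systems work with millions of examples and far richer features, but the principle is the same: the model only knows what its training data shows it.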

From https://xkcd.com/1838/

What will be the impact of AI on society?

This is a question that we are only just starting to ponder. One thing is certain: AI will have a real and lasting impact on society. And for every benefit AI brings, there will be a corresponding risk. Let’s look at a few ways AI will affect society.

Autonomous vehicles are potentially one of the biggest applications of AI. The likely benefits include reductions in pollution, improved road safety, and reduced traffic congestion. However, autonomous trucks and taxis will lead to huge job losses for professional drivers.

Voice Assistants are becoming ubiquitous, and for many of us, they reflect our main interaction with AI. Undoubtedly, being able to control computers and devices by voice is a positive thing. However, they are currently still incapable of showing human traits such as empathy, humor or care – consider how this may affect children growing up with Alexa in their bedrooms.

Legal Systems may be transformed by AI. Already we have police using AI to predict crime hotspots and patent attorneys using AI to improve patent searches. But AIs can suffer from a reinforcement feedback loop when trained on datasets that are inherently biased: if the data says an area has a lot of crime, you do more proactive policing there and, lo and behold, you find more crime.
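That feedback loop can be simulated in a toy model. All the numbers here are invented: both districts have exactly the same underlying crime rate, and the only difference is that district A starts with more patrols because of historically skewed data.

```python
import random

random.seed(42)  # make this toy run deterministic

# Both districts have the SAME underlying crime rate (invented figure).
TRUE_CRIME_RATE = 0.1  # chance that any one patrol records a crime

# District A starts with more patrols purely because of historical data.
patrols = {"A": 50, "B": 10}
recorded_crime = {"A": 0, "B": 0}

for week in range(52):
    for district, n_patrols in patrols.items():
        for _ in range(n_patrols):
            if random.random() < TRUE_CRIME_RATE:
                recorded_crime[district] += 1
    # Naive "predictive policing": send next week's extra patrol to
    # whichever district has recorded the most crime so far.
    hotspot = max(recorded_crime, key=recorded_crime.get)
    patrols[hotspot] += 1

# District A ends up with far more recorded crime and far more patrols,
# even though the true crime rates are identical.
print(recorded_crime)
print(patrols)
```

The system isn’t measuring crime; it’s measuring where it looked for crime, and then looking harder in the same places.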

Medicine is another area that could be transformed. In particular, the ability of AIs to process images and spot skin cancer, or to analyze new proteins and identify likely new cures, could prove revolutionary. However, such AIs are only as good as their training data, and often there simply isn’t enough of it.

What is the obvious problem?

More broadly, AI looks set to displace many low-skilled, low-paid, repetitive jobs such as checkout operators, cleaners, miners and construction workers. But AI is extremely unlikely to replace higher-paid, higher-skilled, unpredictable jobs such as nurses, teachers, designers or managers. The reason for this disparity is that it is much easier to teach an AI “left brain” tasks than “right brain” ones. In other words, it’s comparatively easy to teach an AI to drive, because driving largely follows a set of fixed and predictable rules. By contrast, it’s much harder to teach an AI to care for someone as a nurse does, because that requires empathy.

If you have Alexa (or another virtual assistant) you can test how hard it is to teach empathy to an AI. Just try something like “Alexa, I don’t feel well” or “Alexa, I feel sad”. The response will likely be as warm, comforting and empathetic as Sheldon being caring.

But what is the real problem?

The biggest issue is that computers are learning bias against certain groups and in favor of others. Machine learning automatically picks up any bias in its training data, and sadly most training datasets are inherently biased because they reflect societal preconceptions. Say you wanted to teach an AI to recognize doctors by having it watch clips of doctors appearing in films. The AI would come to believe that doctors are generally white, middle-aged, and always wear a white lab coat and stethoscope, because film and TV producers tend to stereotype certain roles, especially minor characters. Try this out by doing a Google Image Search for doctor.

Try searching for "doctor" on Google images!

Equally, you might try searching for “secretary”, and you will find most of the images are of young, good-looking women, several with distinct sexual undertones. As mentioned above, this bias can cause real issues when AI is used for something as fundamental as policing. The problem can get worse if the system uses reinforcement learning, learning from its interactions with people, since those interactions can deliberately feed it bad data. Relatedly, image recognition systems can be fooled by adversarial inputs that change just a few pixels.
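The stereotype-learning failure described above can be demonstrated with a deliberately skewed toy dataset. All the counts here are invented for illustration: a naive model trained on this data simply parrots the bias straight back.

```python
from collections import Counter

# Hypothetical, deliberately skewed "training data": (gender, role)
# pairs as they might appear in film clips. All counts are invented.
film_clips = (
    [("male", "doctor")] * 18 + [("female", "doctor")] * 2 +
    [("male", "nurse")] * 3 + [("female", "nurse")] * 17
)

# "Training": for each gender, remember the most frequently seen role.
most_common_role = {}
for gender in ("male", "female"):
    roles = Counter(role for g, role in film_clips if g == gender)
    most_common_role[gender] = roles.most_common(1)[0][0]

def guess_role(gender):
    # The model has no concept of fairness; it repeats the skew it saw.
    return most_common_role[gender]

print(guess_role("male"))    # learned stereotype: "doctor"
print(guess_role("female"))  # learned stereotype: "nurse"
```

Nothing in the code is prejudiced; the bias lives entirely in the data, which is exactly why biased datasets produce biased AIs.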

So what can we do to improve this? For one thing, we should try to redress the gender imbalance among AI researchers (around 90% of whom are male). Secondly, we should start planning for the future: in parallel with encouraging more women into computer science and AI research, we should concentrate more on teaching “soft skills” at school, since these are what will keep people employable in a world dominated by AI. Finally, we need to be more skeptical of AI and learn to recognize its shortcomings and potential biases.

Conclusions

In conclusion, AI is a game-changing technology and one that is here to stay. However, we need to be more wary about creating computers that reflect the worst in humanity in terms of racism, sexism and other forms of bias. We also need to prepare for the fundamental impact of AI on society. Society has been through such upheaval in the past, but the more prepared we are, the better we will be able to cope.

Addenda

You may well be wondering what all this has to do with autonomous testing? Well, like all responsible companies, we at Functionize are all too aware of the potential downsides of AI. This is why our products are focused on helping to improve your lives as testers. We have always viewed AI as a way to increase the efficiency and productivity of testers rather than replacing you altogether. And fortunately for us, it is nigh impossible for our products to display the sort of negative bias mentioned above! If reading this blog makes you stop and think about the impact of AI on society, then we will be happy.

Ready to Experience the Power of Functionize?

GET STARTED