From global warming to meteoric impact, from plague-induced extinction to nuclear armageddon, humanity’s collective imagination has conjured up a massive cast of what we call ‘existential threats’: forces so dangerous that they might bring an end to our entire species. Some of these concepts are more scientific than others — the madness of the Cold War’s mutually assured destruction, for example, is far more feasible on its surface than the titillating violence of a so-called ‘zombie apocalypse’. I mean, come on, do you mean to tell me those shambling, moaning, fungus-brained corpses could gain a foothold in the United States, which has more guns than people?
Squeezed into this assembly of global catastrophic risks is a relatively recent doomsday concept that seems uniquely suited to this time of technological change: artificial intelligence, commonly abbreviated to just AI.
Here’s the theory behind this one, for the one or two readers out there who may be unfamiliar: human beings are capable of only so much intelligence, thanks to the constraints of biology. By contrast, the machines we construct become more intelligent all the time. One day, we might construct a machine so intelligent that it can make another machine that’s even more intelligent. That machine would make a still more intelligent machine, and so on, and so on (this is called ‘foom’, in reference to sound effects from comic books). Before long, a machine would come into existence with intelligence so far beyond our own that we would be helpless to stop it. At that point, the machine would crush humankind beneath its implacable metal heel.
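The runaway loop just described can be cartooned in a few lines of Python. This is a purely illustrative sketch: ‘intelligence’ here is just a bare number, and the 10% improvement per generation is an arbitrary assumption of mine, not a figure from AI research.

```python
# Toy illustration of the 'foom' feedback loop: each machine designs a
# successor slightly smarter than itself, and the gains compound.
# 'Intelligence' is a bare number; the 10% gain per generation is an
# arbitrary assumption for illustration only.

def build_successor(intelligence: float) -> float:
    """Each machine designs a successor 10% smarter than itself."""
    return intelligence * 1.10

def foom(start: float = 1.0, generations: int = 50) -> float:
    level = start
    for _ in range(generations):
        level = build_successor(level)
    return level

print(round(foom(), 1))  # prints 117.4
```

The compounding is the whole scare: fifty generations of modest 10% improvements multiply the starting level more than a hundredfold.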
This concept has permeated popular culture in various media for decades, from 2001: A Space Odyssey (1968) to The Terminator (1984), from Star Trek (1966) to that splendid stand-alone movie The Matrix (1999), of which a reboot has been rumoured. Our science fiction warns us not to delve too far down the rabbit hole of robotic sentience, lest our ‘mechanical progeny’ grow too powerful and bring about our end. Scientific and technological powerhouses like Elon Musk, Bill Gates and even Stephen Hawking have warned about the risks of runaway AI. Compared to many of the existential threats we face, just how feasible is this concept of domination and annihilation by Alexa, Cortana, and Siri?
Open the pod bay doors
Well, it’s certainly more likely than a zombie apocalypse. But then, artificial intelligence is one of the most misunderstood fields of science out there. For one thing, you don’t need to worry about your computer gaining sentience and committing tax fraud (though you should have antivirus software to keep a rather less sentient agent from infecting your machine). Most of the machinery we use to run our modern world has intelligence so limited we wouldn’t even recognise it as such. The ability to instantly calculate the square root of 5,781,237 (just over 2,400, I googled it) is impressive, but shallow. These programs and machines are what, in human terms, we might call idiot savants.

Real AI (so-called ‘general intelligence’ — a mechanical mind that can reason, learn, and make decisions for itself) is so far away as to be almost a non-issue. It could happen in my lifetime, but I doubt it. Creating an artificially intelligent mind is more complicated than telling a moderately advanced program to replicate and improve itself ad infinitum. Besides, intelligence and motivation are separate things: even if a machine’s intellect far surpassed our own, there’s no reason to believe it would desire to enslave or annihilate us — that’s just something humans do.
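For what it’s worth, you don’t even need to google that square root; the ‘idiot savant’ can check it for you in two lines of Python:

```python
import math

n = 5_781_237
print(math.isqrt(n))           # prints 2404 -- 'just over 2,400' indeed
print(round(math.sqrt(n), 2))  # prints 2404.42
```

Instant, flawless arithmetic — and not a flicker of understanding behind it.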
I could go into detail, but the long and short of it is: we’ve probably got nothing to worry about. If you’re concerned, I highly recommend picking up Steven Pinker’s 2018 book, Enlightenment Now, and perusing the chapter on existential threats (AI is covered starting on page 296). I know I felt better about humanity’s chances for long-term survival after reading it.
All the same, there are reasons that AI inspires horror in the hearts of sci-fi writers and petrifies some of our greatest scientific minds. Some of those reasons are rooted in real-world events, brought on by real AI. Some are psychological phantoms, tricks of the way the human mind works. Others are more theoretical.
Real-life Terminators
Let’s start with drones. They aren’t just the annoying little copters employed by photographers to ruin your skyline. Since 9/11, the US has made extensive use of drones to kill terrorists, with some civilian casualties. These drones are, in essence, flying bombs: cheap, effective, and practically unstoppable; once the military finds a terrorist and decides they’d like him dead, it’s a certainty. They’re not intelligent, but they are killing machines, plain and simple, even if they’re controlled from launch to detonation by a human ‘pilot’.
Next, the self-driving car. We don’t yet have autonomous cars on our roads, and there’s a reason. Just one: a woman named Elaine Herzberg was killed by a self-driving car undergoing testing — an Uber test vehicle in Tempe, Arizona — in March of 2018. Testing was temporarily halted, as some assumed the technology had been proven unsafe. It resumed at the end of that year, and is still underway.
Though automation hasn’t directly resulted in harm or death to humans on any great scale, mechanical systems have done economic damage to people — specifically, to so-called ‘unskilled workers’. As our capability to create machines increases, we are increasingly able to automate processes that once had to be done by hand, and to do so for less money than the workers were paid. This is part of a process that has been underway since the industrial revolution, as successive innovations let small numbers of people do larger amounts of work, but it still has a devastating effect on a labour force. People who’ve spent their entire lives working factory jobs are suddenly standing in the unemployment line.
There have been few, if any, further notable incidents in which ‘intelligent’ machines have killed people. Nonetheless, humans seem to harbour a particular malice towards machines. Robots filling assorted roles across the world are beaten and dismembered by humans on a regular basis. An obvious cause might be the phenomenon discussed above: automation eliminating certain jobs and pushing humans out of the workforce. Social scientists suggest another possible cause of this malevolence is the ‘Frankenstein effect’: the idea that we grow to dislike something because it’s not quite like us. We look at the mechanical man standing guard in a mall, and we know it has our same basic assortment of limbs, but it has no moral system, no beating heart, and no wants or desires of its own. It becomes easy to project malevolence onto such a machine, or simply to dehumanise it and vandalise it as one would any other piece of property (don’t do that).
Speaking of this projection, we tend to anthropomorphise machines — that is, we ascribe traits to them that only exist in humans. We often anthropomorphise animals, too, but the more similar a machine is to us, the easier it is to anthropomorphise it — and machines are getting more similar to us all the time. It’s remarkable how, in just a few short years, speech recognition technology has advanced to the level that people are putting Alexas and other virtual assistants in their homes. Many are even polite to their virtual assistants.
But this anthropomorphisation has a darker side, as well: we ascribe emotions to machines that they’re simply not capable of. We imagine them as being just like us, only smarter. A lot of this confusion comes out of our natural evolutionary ‘programming’, pun intended: millennia of evolutionary development have equipped us with mental programs designed to understand the natural world, not the artificial one. Machines have been around almost as long as humans have, but machines that resemble human beings in form and function have existed for a vanishingly short period by comparison. Is it any wonder that our primitive brains are confused by them?
There are a number of other mental heuristics that alter our perception of how AI works and could work: a penchant for pessimism, for example. We tend to perceive people who criticise and spout cynicism as smarter than optimists, whom we dismiss as naive or even stupid. Why would we believe that a functioning artificial intelligence would be safe, and an incredible boon for human development, when it’s so much more satisfying to believe that any AI would be an existential threat to all humans, everywhere? It’s the same philosophy behind our current news media environment: if it bleeds, it leads. Said another way: if it could want to kill us all, it will definitely want to kill us all.
Despite having been around in different guises for several decades, AI can still be considered a new and developing field, and most people don’t understand it particularly well. Even the scientists who study it will admit that there’s a lot left to learn. Thanks to that uncertainty, it’s easy to come up with outlandish notions about how a true AI might behave.
In some ways, the most horrifying part about a theoretical AI is its lack of a moral framework. Isaac Asimov’s ‘Three Laws’ system was intended to address this concern: because machines are so impersonal, they must be specifically designed to conform to our understanding of morality or they will disregard it as irrelevant; or worse, they will fundamentally misunderstand our morality in such a way as to destroy us while intending to do good. Take the machines in The Matrix that decided to make humans happy by plugging them into an ideal world, while sucking their bodies dry of energy from birth to death. From a cold, calculating perspective, that idea makes a certain amount of sense, and even some human characters appreciate it. Another AI might be designed by a malevolent actor who for some reason intends some form of mass extinction, and it could intentionally ignore all conventional morality to end human beings forever.
In this way, a machine is terrifying just as a sociopath is terrifying: they have nothing holding them back from behaviour that is absolutely merciless and destructive. Of course, even a sociopath has a face and a mind that we can understand, while an AI might very well have neither — and that absence of a recognisable form makes it even more terrifying.
Ultimately, the fact that we know so little about AI makes it easy to project horrifying doomsday fantasies onto it. As humans, we’re uncomfortable with uncertainty. The fact that we don’t know just how a real AI would work causes us to theorise endlessly, and other mental heuristics push these theories into dark, disturbing corners of imagination.
Machines have also proven themselves to be vulnerable to manipulation in ways that human beings are not. On the most basic level, it just takes a little bit of water in the wrong place to short out a cellular phone. It gets worse when you consider computer viruses, which can infiltrate almost any electronic device and destroy or pervert its intended function. Cyberwarfare is a new and terrifying field of combat, if usually less lethal than the physical kind. A particularly nasty computer virus can steal all of a person’s financial resources, essentially destroying their life. As we grow more and more dependent on machines, what might happen if control of them is wrenched away from us, and they’re turned against their former masters? It certainly happens enough in our science fiction.
On the other hand, human beings have their own weak spots. We’re no paragons of moral virtue: for thousands of years, we’ve been killing, enslaving, oppressing, raping, and otherwise harming each other. Isn’t it a bit organic-centric to believe that intelligent beings, born of machines, would descend into the same brutal violence and callous oppression in which we specialise?
Maybe. But in the meantime, I’m cautiously optimistic about that Matrix reboot. It could be just as tantalisingly terrifying.