Even if you haven’t seen 2001: A Space Odyssey, you have likely heard that famous line uttered at one point or another. Released nearly 50 years ago, Stanley Kubrick’s masterpiece, and that scene in particular, remains the symbolic gold standard by which modern popular culture regards artificial intelligence (AI) with a mixture of fear and awe, along with the specter of our own demise that its advancement implies.
AI is the science and engineering of machines that act intelligently. Advancements in the field have expanded to include what would likely have seemed like science fiction 10 years ago, but they also include the more mundane applications we use in daily life that most of us never even consider: the spam filter in our email programs, or the technology a doctor uses to look up substitute medications when the ones we normally take are out of stock. These applications are largely invisible, but we’d nonetheless immediately miss them in their absence.
Even the eye-widening awe we experience reading the latest AI breakthrough involving the replacement of a patient’s lost leg with an artificial one – one that, just as with an original limb, is controlled by the owner’s thoughts – engenders only the most positive of predispositions to this rapidly advancing field. Indeed, the happiness and improvement to the quality of life of even one individual who benefits from this kind of breakthrough far exceeds our ability to measure.
But hold on. What about AI’s current capacity to replace things less urgent than our arms and legs? I’m referring to any number of human white collar jobs previously not considered at risk to the miracle of automation – all the way from attorneys to doctors and financial analysts to journalists. “Journalists? This is starting to get a little uncomfortable,” says yours truly to no one in particular. “This is moving a little fast, don’cha think!?”
“Faster than you can say ‘built-in obsolescence,’” my inner insecurity replies.
“Oh! Obsolete!” I say aloud, joggling the digital tile lights along the edge of my phone so as to get a triple word score and sink my Words with Friends opponent as I simultaneously skim the day’s headlines in search of ideas to support my story.
Suddenly, as if to squelch my attempt to deflect my own fear of professional demise with humor, I spot a headline sure to spike my stress levels: a controversial announcement that Italian neurosurgeon Sergio Canavero will have the surgical and technological capability to perform the world’s first head transplant by 2017.
Wait – WHAT?! OK, technically speaking, Canavero’s proposed procedure does not fall within the boundaries of the field of artificial intelligence. But the collective recoil most of us feel stems from the notion that while arms and legs and eyes and teeth are all replaceable, the ethical lines end at anything involving the brain, which is where artificial intelligence’s greatest advances, as well as its greatest perceived threats, begin.
Enter Ray Kurzweil, Google’s director of engineering and author of the 2012 book How to Create a Mind, who claims that between 2030 and 2040, humans will have the ability to link our brains to cloud computers. Why would we want to do that? Simple. Because in doing so, the cloud aspect, or non-biological part, of the brain would “ultimately become so intelligent and have such vast capacity, it’ll be able to model, simulate and understand fully the biological part,” Kurzweil explained at the Exponential Finance conference in New York on June 3. “We will be able to fully back up our brains.”
This transcendence is known within the field as the “singularity,” a term that dates back to 1958 but wasn’t popularized until 1993, when mathematician and science fiction author Vernor Vinge used it to describe the point at which artificial intelligence exceeds biological intelligence.
In popular culture, the singularity is a longstanding concept explored in works of science fiction, beginning most notably with Kubrick’s HAL 9000 in 2001: A Space Odyssey and continuing most recently with Ex Machina. The narrative of these works typically runs along the line that man and machine are separate entities locked in a struggle for self-preservation at the expense of the other: man builds machine, machine outsmarts man, man must destroy machine before machine destroys man. The story sometimes culminates in the triumph of man, other times in the triumph of machine. Both endings, however, have traditionally been constructed and received as cautionary tales.
Certainly, our collective cultural fear of this ultimate “other” can easily be overplayed by mainstream books and movies, and many actual scientists would likely say the real caution needed is against taking such scenarios too seriously.
“Easy for them to say,” I mutter while considering the possibility that Wall-E’s city girl cousin is two cubicles over and being interviewed as my possible replacement.
Pop culture aside, it’s worth noting that a few highly revered minds in science and technology, including Bill Gates and even Stephen Hawking, have expressed their own considerable hesitation, with Hawking going so far as to suggest that advanced AI could be humankind’s “worst mistake.” Hawking, one of the greatest living minds today (if not the greatest), is a theoretical physicist with no official academic expertise in the realm of artificial intelligence; so while his considered opinion on nearly any topic has inherent value, it does not necessarily carry the same weight as the opinions of experts in the field.
Moreover, if we accept Kurzweil’s vision, the inevitable struggle between man and machine depicted by mainstream sci-fi culture will never take place as popularly imagined, simply because the technology will not exist as a self-contained entity. Rather, it will ultimately, and benignly, become integrated into our own biological makeup, effectively transforming both the technology and humans into a hybrid life form, each part of which requires the other to maintain its existence.
In the meantime, there are risks right now associated with the technology that must be considered and addressed. Technology developed in Silicon Valley could just as easily be put to use in terrorist-controlled regions of the Middle East. Consider the ability to disrupt or even sink an economy with one stroke of a hacker’s binary code-bending fingers, or the possibility that a terrorist could knock out the power grid completely, leaving us not only without lights, but without ready access to food or water or any of the all-encompassing and all-too-often-invisible advantages of our highly automated daily lives. Indeed, the technology that modern, mainstream society demands, and some would argue foolishly surrenders itself to, is the very same technology that in the wrong hands could put us at the mercy of those who wish to destroy us – and all long before our culture advances to the point of technological singularity.
Those and various other equally dangerous scenarios are currently being studied and evaluated by the Centre for the Study of Existential Risk at the University of Cambridge. They are the kinds of examples that urgently concern those who are less enthusiastic about the cost-benefit ratio, as well as the overarching moral issues, within the rapidly advancing field of AI.
When confronted with these issues, the more enthusiastic and visionary experts like Kurzweil take an optimistic and decidedly antithetical perspective. In Kurzweil’s view, the very risks cited as reasons to restrain the technology’s advancement and widespread availability, so that it might be more carefully evaluated and its dangers avoided, are actually reasons for doing exactly the opposite. Citing the continued development of AI, alongside the evaluation and control of its risks, as the true moral imperative, Kurzweil brushed off overly hesitant naysayers at the Exponential Finance conference by stating what at first seems counterintuitive but could in fact be quite logical. As quoted in Singularity Hub: “The best way to keep [artificial intelligence] safe is in fact widely distributed, which is what we are seeing in the world today.”