On August 3, 2016, the journal Nature Nanotechnology revealed that scientists at IBM had reached a frightening milestone in the development of artificial intelligence: they have created the first nano-scale artificial neurons, microscopic devices that mimic the behavior of human brain cells. Most significantly, these artificial neurons can learn for themselves, without the need for any human supervision.

The announcement came in study findings reported by scientists at IBM Research. To emulate biological neuronal activity, the scientists built the neurons out of "phase-change" materials, the same type of substance used to make rewritable Blu-ray discs. Each device integrates incoming signals until the material crosses a threshold, fires, and resets (a toy software sketch of this "integrate and fire" behavior appears below). These materials have properties that will allow IBM to greatly scale down the size and space required to contain complex neural networks, i.e., brains. These artificial neural networks, like the real thing, will be analog, rather than digital, in nature.

Why shouldn't we recognize and celebrate this development for the technological marvel that it is: a glorious scientific triumph and a watershed event in human history? Why do I choose, instead, to cast this news in such alarmist terms? After all, popular literature has already shown us just how charming, benign, and beneficial such sentient, artificial beings could be. Do we really have anything to fear from them, even though they will be far smarter, far stronger, and far longer-lived than we are?

Star Trek: The Next Generation introduced us to Data, a brilliant, cognitively advanced "tin man" who longs to experience the full range of human sensations and emotions. Data exhibits a humble reverence for all things "human." He does, however, have an evil twin brother, Lore. (More about him in a moment.)

Andrew, the central android character in Bicentennial Man, a movie inspired by an Isaac Asimov novella of the same title, achieves self-awareness shortly after he emerges from his factory packaging. Andrew's unique mental state is either the result of a "manufacturing glitch" or of his untimely fall from a second-story window. Andrew falls in love with his human owner's daughter, and after she dies, he meets and pursues her granddaughter, who appears to be her physical twin. To win her, Andrew begins a decades-long self-improvement program designed to transform himself, step by step, into a living, breathing, mortal human analog. Andrew seeks to fulfill their relationship in every possible way. He also wants society to sanction and acknowledge their union.

In Spielberg's A.I., David, an android child whose existence appears far more plausible now, in light of IBM's recent breakthrough, longs to secure the love of the human mother imprinted on his programming at activation. David, who experiences the full range of human emotions, is the ultimate android "Pinocchio." He goes on a quest to find the "blue fairy" and convince her to make him into a "real boy," worthy of his mother's love.

Based on these fictional characters, androids appear to pose no real threat to us, except, perhaps, as source material for endless sci-fi literary speculation. So what's missing from this discussion? The concept of controls. All the fictional androids described here operate under tight behavioral constraints. David's factory-installed limitation is his perpetual youth: as an android "child" of 8 or 9, with age-appropriate perceptive abilities, David pursues childish goals and is too young to be taken seriously by human adults.
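Before going further into fiction's safeguards, it is worth making that "integrate and fire" idea concrete. The sketch below is a toy software analogue, not IBM's design: the class name and every parameter (ToyPhaseChangeNeuron, threshold, leak, jitter) are invented for illustration. The real device does its accumulation physically, by progressively crystallizing a speck of phase-change material whose firing threshold varies slightly from cycle to cycle.

```python
# A back-of-the-envelope software analogue of the "integrate and fire"
# behavior IBM's phase-change neuron performs in hardware. All numbers
# here are made up for illustration only.

import random

class ToyPhaseChangeNeuron:
    def __init__(self, threshold=1.0, leak=0.98, jitter=0.05):
        self.potential = 0.0      # stands in for the cell's crystalline fraction
        self.threshold = threshold
        self.leak = leak          # slow decay between input pulses
        self.jitter = jitter      # cycle-to-cycle threshold variability

    def step(self, input_pulse: float) -> bool:
        """Integrate one input pulse; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_pulse
        effective_threshold = self.threshold + random.gauss(0.0, self.jitter)
        if self.potential >= effective_threshold:
            self.potential = 0.0  # fire, then reset (re-amorphize the cell)
            return True
        return False

neuron = ToyPhaseChangeNeuron()
spikes = [t for t in range(100) if neuron.step(0.12)]
print("fired at steps:", spikes)  # roughly periodic, never identical across runs
```

That cycle-to-cycle variability is why the devices are described as stochastic and analog: two runs over the same input never produce exactly the same spike train, much as biological neurons never do.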
Essential programming precludes both Star Trek's Data and Asimov's Andrew from harming humans in most situations. Andrew even plays a canned product-demonstration film for his owners highlighting "The Three Laws of Robotics," his factory-installed basic programming constraints. Those laws state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
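Stated as code, the laws amount to a priority-ordered filter over a robot's candidate actions. The sketch below is purely illustrative, not anything from Asimov or IBM: the Action fields (harms_human, prevents_harm, obeys_order, preserves_self) and the choose function are hypothetical stand-ins for what are, in reality, unsolved perception and prediction problems.

```python
# A toy sketch of the Three Laws as a priority-ordered filter over
# candidate actions. Every predicate here is a hypothetical stand-in:
# deciding whether an action "harms a human" is itself an open problem.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law: forbidden outright
    prevents_harm: bool   # First Law, inaction clause: outranks all else
    obeys_order: bool     # Second Law
    preserves_self: bool  # Third Law

def choose(candidates: list[Action]) -> Optional[Action]:
    # First Law: discard anything that would injure a human being.
    lawful = [a for a in candidates if not a.harms_human]
    if not lawful:
        return None  # refuse to act at all
    # Priority order: preventing harm (First Law's inaction clause),
    # then obedience (Second Law), then self-preservation (Third Law).
    lawful.sort(key=lambda a: (a.prevents_harm, a.obeys_order, a.preserves_self),
                reverse=True)
    return lawful[0]

# Example: ordered to shut down while a bystander is in danger.
options = [
    Action("obey shutdown order", harms_human=False, prevents_harm=False,
           obeys_order=True, preserves_self=False),
    Action("pull bystander clear", harms_human=False, prevents_harm=True,
           obeys_order=False, preserves_self=False),
]
assert choose(options).name == "pull bystander clear"
```

Notice how much weight the scheme places on those four flags being computed honestly. Everything that follows is about what happens when a machine becomes smart enough to question, or quietly rewrite, the code that sets them.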
Asimov created these laws to make the inclusion of artificial intelligence in his fiction more palatable. After all, without such behavioral constraints, how could androids not pose a serious threat to their human creators? Moreover, what happens when an entity becomes self-aware, as human-modeled artificial intelligence may well do? Does the android continue to honor and respect its built-in programming constraints -- assuming such constraints have even been installed? What's to stop it from deactivating or overloading the hardware circuits that contain those behavioral instructions, or from having another android do it?

Lore, Data's evil twin, was deactivated by his creator after he exhibited antisocial behavioral traits. (Kudos to Star Trek's creators for exploring this darker possibility.) In his place, his creator fashioned Data, the same in every respect except one: Data lacked human emotions. In other words, Lore, the experimental disaster, was a more fully human thinking machine than his benign brother.

For human beings, who possess emotions and self-awareness, cultural taboos, strong class distinctions, and religious moral indoctrination all serve constraining functions similar to Asimov's Laws of Robotics. But human beings routinely rebel against such norms. We call these actions "crimes" or "immoral behavior." Revolutions, in fact, occur when a subjugated people realize that their oppressors are no better -- and therefore no more entitled to power -- than they are. Imagine what would happen if an army of entities we created grew to be orders of magnitude smarter than us. How would such beings, perceiving the world in human-modeled ways, come to regard the programmed constraints designed to keep them subservient to their now decidedly inferior creators? (For further insight, listen to technologist and philosopher Nick Bostrom as he explores "What Happens When Our Computers Get Smarter Than We Are?")

It's important to recognize that we may soon have the ability to create machines that not only learn by themselves but think for themselves. Recent work in artificial intelligence software has produced algorithm-driven programs that enable computers to perceive and learn in much the same way humans do. In a matter of weeks, such AI systems have developed perceptive powers (in medical slide identification, customer service, written and spoken language, and other applications) that rapidly come to dwarf our own abilities. Service jobs never before considered at risk of automation may soon disappear as thinking machines take them over. And where is the public-policy debate about this corporate-driven, imminent threat to most of mankind? It's all coming at us far faster than we think.

Major corporations such as Google, Amazon, LinkedIn, AT&T, Verizon, and more are already using machine learning to improve customer service, medical diagnoses, investment planning, real-time language translation, and similar functions, and to perform them faster and more efficiently than humans can. In an inspiring, and scary, 2014 TED Talk entitled "The Wonderful and Terrifying Implications of Computers That Can Learn," Jeremy Howard, a machine-learning practitioner and entrepreneur, spent equal time lauding the capabilities of this new technology and warning about its hugely disruptive potential for mankind.
"What we're doing here," he said in reference to an example of human-assisted computer learning, "is doing something that used to take a team of five or six people, seven years to complete and replacing it with something that takes fifteen minutes for a computer and one person acting alone." Computers, he said, are now capable of reading and writing, speaking and listening, looking at things and integrating knowledge -- all functions of extremely high level service work that represents more than 80 percent of the jobs available in the world's most highly developed economies. "This is going to be a kind of change that the world has never experienced before," he warned. "Computers right now can do the things that humans spend most of their time being paid to do. Now's the time to start thinking about how we're going to adjust our social structures and economic structures to be aware of this new reality." What IBM's breakthrough represents, in the short term, is a major step toward creating thinking machines capable of taking our best-paying service jobs (think doctors and lawyers, not sales clerks) and performing those services while looking and acting indistinguishable from the rest of us. Scared yet? You should be.
[Image: IBM's artificial nano-scale neuron]
"This is going to be a kind of change that the world has never experienced before. Computers right now can do the things that humans spend most of their time being paid to do."