How far back does the fear of machines rising up and overtaking humanity go? I’m sure there are historians, anthropologists, and sociologists who know the exact answer. (Research has never been this blog’s strong suit.) But certainly to the Industrial Revolution, and I’ll wager all the way back to a Kubrickian scene of prehistoric man realizing, “Whoa, Grok can use bone as weapon!”
Indeed, war between man and machine is practically the single most prevalent subject in all of science fiction, in a dead heat with extraterrestrial life and space travel. In fact, it is the very plot of the 1920 stage play that gave us the word robot: “R.U.R.,” by the Czech playwright Karel Čapek, who derived it from the Slavic word “robota” [rah-boat-ah], meaning “work.”
The actual manifestation of this techno-existentialist nightmare varies. Robots, computers, artificial intelligence, or technology of any kind are all just variations on the theme: mechanical creatures evolving to a point of mental capacity and/or physical strength at which they overthrow and destroy (or enslave) us. Typically, the trouble begins with the robots’ own forced servitude to their human creators, usually as menial laborers, household servants, mercenaries, concubines, and the like, leading ultimately to rebellion and violent reversal of the status quo. (The subjugation of mankind by non-technological threats—such as alien invaders, or counterfactual evolution a la Planet of the Apes—is both an offshoot of this genre and its larger context.)
Though not my favorite of the bunch, The Matrix offered one of the most chilling—and innovative—visions of this scenario, one in which much of humanity is blissfully unaware of its enslavement. Real life (or is it?) offers arguably an even more extreme version, in which humanity actually welcomes its servitude to its technological overlords. (You’re reading this online, aren’t you? Maybe on your phone?)
Take the blue pill and call me in the morning.
Even when they are not front and center, sentient robots and their emotional issues feature in many, many other works of science fiction, from Hel in Metropolis, to Data in Star Trek: The Next Generation (as well as the Borg, for that matter), Alien, Lost in Space, the Star Wars series, and on and on. Sometimes those stories can be as poignant as anything in the so-called human condition; I refer you to Rutger Hauer’s final speech in Blade Runner. On a lighter note, the prize-winner may remain Sleeper, in which Woody Allen’s fugitive human time traveler tries to disguise himself as a suspiciously bespectacled robot, one who develops an unnatural attachment to “the orb,” itself a technological replacement for manmade pleasure. (I did say it was a Woody Allen film, right?)
And man’s fear of uppity robots shows no signs of abating—on the contrary. As technology continues to advance at a dizzying rate, the issue has passed from dystopian science fiction to a genuine worry that occupies prestigious scholars, futurists, public intellectuals, and other thinkers, often leading to exceedingly grim forecasts of the rise of a godlike artificial intelligence that renders humans extinct, or makes us wish we were.
So the question before us is this:
Would that really be such a big deal?
I am not bothered by the robot uprising.
I view it as a natural (though not inevitable) next step in the evolution of life on Planet Earth. In the same way that dinosaurs gave way to mammalian life and eventually homo sapiens, why shouldn’t carbon-based life eventually give way to something superior… and is there any reason that superior form of life might not be silicon-based?
It hardly bears repeating the ways in which digital technology has changed our lives. Smartphones, computers, the Internet, the end of photorealism… the scope of this transformation is endless, and it’s far from done. Many have argued convincingly that the Information Revolution that we are living through will dwarf the Industrial Revolution as a tectonic shift in human history. If so, it may well be the last such shift, at least as far as the adjective “human” goes.
Even before the silicon chip had entered the public consciousness, Alvin and Heidi Toffler scared the pants off the Western public way back in 1970 with their influential book Future Shock, which argued that humanity was not equipped to handle the pace of technological change. Ted Kaczynski made a similar point a bit more forcefully, as have numerous less homicidal anarcho-primitivist intellectuals.
But even the Tofflers did not conceive of the exponential rate at which supercomputing would develop, pulling us inexorably toward the event horizon that is the Singularity, when flesh-and-blood civilization as we know it will disappear up its own rectum. Today the image of the rise of the machines is less Arnold Schwarzenegger in The Terminator than Scarlett Johansson in Her, but the net effect is the same. “Singularity” is just a much more stylish and academically respectable term than “robot uprising,” which smacks of a pot-fueled late night debate among undergrad computer science majors taking a break from their Star Trek marathon.
There is panic at this idea. I get that. But if those machines are indeed superior, doesn’t Darwin demand that they rise to the top of the pyramid? I’m sure veal calves are not happy with their place on the food chain either, but if they don’t like it, they should have developed opposable thumbs.
Thus it is very possible we are living in the twilight of carbon-based life as the dominant force on Planet Earth. Which is convenient, as we are about to make the planet uninhabitable for such life, leaving it in a state where only machines can survive anyway.
But you say: even if they are intellectually and physically superior, shouldn’t we still be alarmed at the notion of being enslaved by sadistic robot masters? Yes, but it’s far from a foregone conclusion that that is the form that the Silicon Caliphate will take. Much of mankind has been enslaved by sadistic human masters throughout recorded history, which is kind of worse, friendly fire-wise. Do we really think our robot overlords are going to be more horrible? Sure, The Matrix would be a miserable existence, but so was Zimbabwe under Mugabe, Chile under Pinochet, or Mississippi under the Confederacy.
In short, given the mess humans have made as masters of the planet, I’m not sure that robots would do worse.
In a recent piece in The Atlantic, Henry Kissinger, of all people, worries that AI will evolve without (what he calls) the kind of moral sense that governs human behavior. Behold the irony of a war criminal warning of the imminent demise of the contemporary world order that was ushered in by the Age of Reason, clutching his pearls with sentences like this:
“(T)hat order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”
Who volunteers to translate that bit about “ungoverned by ethical or philosophical norms” into Vietnamese?
To paraphrase another great Kissingerian moment, it may be way too soon to foretell the legacy of the computer revolution.
But even in the worst case scenario, I’m sure we’ll make great pets.
THE HARDWARE PROBLEM
Central to this whole issue is the question of consciousness itself—that is to say, is it possible for a machine to be “conscious” in the way that humans are?
My short answer is: I don’t see why not.
(Whether or not humans themselves are truly “conscious” in the first place is a whole different question. Pretend there’s a long tangent about philosophical zombies here.)
This issue takes us into the realm of philosophy of mind, and specifically, what the Australian philosopher David Chalmers memorably dubbed “the hard problem.” To wit: how can that squishy mass of gray matter inside your cranium give rise to a situation in which you feel sad when you hear Hank Williams, or moved by Henry Fonda’s speech at the end of The Grapes of Wrath, or joyous when you watch your child take her first steps?
Many, many others have pondered the same thing over the millennia, but no one previously had summed it up quite so pithily as Chalmers. What is consciousness anyway? This dilemma, as he noted, is much more complicated and daunting than “the easy problems” of understanding how the brain goes about its routine business of translating trillions of electrochemical impulses per fraction of a second to coordinate the insanely complex machine that is a human body. (Yeah, super easy to grasp all that.)
But the “hard problem” is very, very hard indeed. It stands at the intersection of neuroscience, philosophy, and religion, encompassing such disparate concepts as mind-body dualism, the Buddhist idea of anatta, the myth of a coherent Self, and the absence of free will… all stuff that will keep you up nights in a cold sweat if you think about it too hard, unless you’re stoned to the gills, or have passed through to the other side into acceptance of the undeniable reality of Nothingness.
You won’t be surprised to learn that we are far from solving this riddle. (Maybe a machine will crack it someday, ha ha.) But in the meantime, I see no reason why a sufficiently complex and sophisticated supercomputer—in other words, an artificial intelligence by the very textbook definition—could not have just as much consciousness as a human being. That that consciousness is generated by a mass of silicon chips rather than organic tissue strikes me as utterly irrelevant; it is the complexity of the system, not the nature of the materials comprising it, that is germane. I put no stock in the usual fairy tale argument citing some mystical, metaphysical “spirit,” or soul, that is the ghost in the machine.
Indeed, there may already be machines that are “conscious” by our generally accepted definition of the term, but simply are as yet unable to communicate that to human beings. Or perhaps they are communicating it, and the mass of humanity hasn’t yet gotten the memo. (I’ll keep checking my email.)
Of course, even a lack of consciousness would not prevent silicon-based life from taking over; those artificial beings simply would not have human-like subjective experience of the brave new world they had ushered in. They would be “zombies,” to use the aforementioned philosophical term of art. (To be generous, this may be what Kissinger is worried about.) But my money still favors the notion that a sufficiently sophisticated artificial intelligence would by definition carry with it proper Cartesian credentials: cogito ergo sum and all that. Which makes silicon-based life as the next evolutionary stage all the more logical.
The Turing test is supposed to be a way of telling man from machine, but even that does not purport to establish the existence of consciousness or lack thereof. (A computer might fool you without being “conscious” by the common understanding of the term.) It is also another marker of how much weight we put on this arbitrary—dare I say, bigoted—distinction between “natural” and “artificial” life. I can imagine the day when the entire term “artificial” will be politically incorrect, if not outright verboten, when it comes to discussing intelligence, consciousness, or ontology full stop.
So please add a new “ism” to the identity politics order of battle: matterism, let’s call it (a cousin of speciesism), the discriminatory view that only human beings are truly conscious, or at the very least that the consciousness of silicon-based life is inferior to that of carbon-based life.
It ain’t necessarily so.
NOW LEAVING UNCANNY VALLEY
Part of the reactionary fear and loathing of robots is the human revulsion at that which looks almost like us, but just a little bit off, from mechanical men to ventriloquists’ dummies to The Polar Express. In that, there is a direct line from Pinocchio to Spielberg’s AI. (Along with all the other moral and practical implications, part of the fear of cloning is a related dread, circling all the way back to Mary Shelley’s original Frankenstein, which carried the telling alternate title The Modern Prometheus.)
Of course, even in science fiction, robots only sporadically take humanoid form, and lately they need take no form whatsoever, as the disembodied intelligence of a computer is the manmade menace du jour. A computer, needless to say, is simply a kind of robot, while the image that “robot” typically conjures is more specifically described as an “android.” Stanley Kubrick offered us one of the first and still most chilling visions of this man-versus-computer moment in 2001; yet fifty years later we nonetheless welcome Siri and Alexa into our homes, either unafraid of—or too stupid to worry about—Greeks bearing gifts. It’s nice to call up any music I want on demand, but I am a little concerned that I won’t be able to get those pod bay doors open.
Our love/fear relationship with computers speaks to a species-wide human inferiority complex, one that has only grown more acute as our addiction to silicon chip technology has grown. (As John Mulaney says, we now spend a fair amount of our time proving to robots that we’re not robots.) That computers offer so many attractions and temptations too massive to resist—that they are “insanely great,” in the words of Steve Jobs—is precisely the problem. In that sense, the computer’s victory over humankind is not so much a conquest as a surrender on our part, as alluded to above. “Computer says no” indeed.
One of the memorable stations of the cross in this journey, triggering a great wave of teeth-gnashing and garment-rending, was when IBM’s Deep Blue computer first beat Garry Kasparov in a game of chess in 1996. The lamentations were histrionic. “Now that there is a machine that can beat the best grandmaster, is there any point in humans even playing chess ever again?”
Well, a human being can’t outrun a Formula 1 racecar either, but we still have track & field in the Olympics, right?
I’d also like to point out that Deep Blue has been shamefully silent in its criticism of Putin.
DOMO ARIGATO MISTER ROBOTO
Alternatively, we may not experience the destruction of human life by machine life so much as a merger of the two (or perhaps more accurately, the absorption of the former by the latter). Rudimentary cyborg elements are already prevalent in modern life, from pacemakers to titanium hip replacements to breast implants to Oscar Pistorius. (He’s not doing his people a lot of good in terms of halting their depiction as villains in science fiction.) Research is underway to create prosthetics and even entire exoskeletons to help the severely handicapped or those who suffer from crippling conditions such as muscular dystrophy or multiple sclerosis. How long before our bodies and brains are enhanced with subcutaneous chips implanted at birth, or even more forward-thinking, altered by bespoke prenatal genetic modification? At the same time, on a parallel path, virtual and augmented reality offer old-fashioned carbon-based humans the chance to disappear almost entirely into artificially created universes, leaving the physical world behind altogether. (Again, I refer you back to The Matrix, or any eleven-year-old glued to Fortnite.) At a certain point, these twin tracks of the hybridization of man and machine will merge, with the result being effectively indistinguishable from the extinction of homo sapiens as we know them, replaced by something entirely new and mind-blowing to our current understanding of what it means to be “alive.”
This difficulty in accurately envisioning the future—along with our schizophrenic relationship to technology—is on full display in Yesterday’s Tomorrows, a 1999 Showtime documentary about how people in the past imagined the future that Barry Levinson made to mark the turn of the millennium. (The Tofflers were interviewed in it. It was produced by Richard Berge and associate produced by the great archivist Kenn Rabin, inspired by the book of the same name by Joseph Corn. I was the film editor.) In the film, we see how even when people successfully predicted developments like the Internet, Skype, or smartphones—sometimes with frightening accuracy—their vision of what they would look like was almost always hilariously dated. Our vision of the so-called “robot uprising” is surely equally misbegotten. Which is not to say that it won’t happen… only that it is unlikely to take the form we imagine.
So why worry?
If I am wrong, and the Age of the Machines proves to be one of punishing slave labor and crushing degradation for humankind, I hope—like Gilfoyle—that our mechanical overlords will at least take this essay as evidence that I was one of the good ones.