I, Robot by Isaac Asimov reads like a prophecy for today’s AI issues. The book began as separate short stories, first collected into a single volume in 1950 and knit together by a frame story: an interview with the scientist Dr. Calvin, whose recollections trace the advancement of robotics. This progression shows how Asimov thought through the complications and impacts of inevitable breakthroughs in creating human-like machines, which he calls robotics and we call AI.
Warning: there are spoilers in this post.
Asimov anticipates the immediate threat of AI by establishing up front his Three Laws of Robotics: a robot may not harm a human, must obey human orders unless they conflict with the first law, and must protect its own existence unless that conflicts with the first two. Then he shows how these laws can get muddled in their implementation—nuances of threats, ambiguities of meaning, and manipulations by humans and robots alike, all creating some very interesting scenarios. Honestly, I feel we need to heed Asimov’s warnings today as we develop AI, or rather, as we allow AI to be developed (by the powerful billionaires who will use it to their advantage, not us common people).
In the opening scene, Asimov anticipates how Robotics/AI will disrupt labor as robots “became more human and opposition began. The labor unions, of course, naturally opposed robot competition for human jobs . . .” (6:44-7:14). People today have anticipated as much, but are we preparing for it? I don’t think the people running the giant corporations will care about all the people put out of work by AI; it’s on us to prepare for ourselves. And Asimov points to one answer: unions. If we don’t organize independently of the corporations, we will have no voice to fight for a place in the future. There are solutions to these problems, but we will come by them more easily if we prepare.
What happens when AI takes on the persona of people it researches in history? “There is no master but The Master, and QT1 is his prophet” (2:05:28-2:05:58)! Computers are a product of the input given to them, so one way to control AI would be to control what they “know.” But how will they interpret that knowledge? When we develop machines that can do more than us, faster and stronger and tirelessly, and we attempt to control them by controlling their input, something could still go awry. They might, for instance, take on the role of prophet and master and attempt to force us to submit. Or what if they outright lie to us? How will AI handle the First Law of Robotics, not injuring a person, when the injury is emotional? “What about hurt feelings?” (4:02:11-4:03:20). Or what if “it would be harmful to humanity to have the explanation known” (8:16:29-8:16:59), and therefore our Robot/AI refuses to explain itself? In these examples, the AIs behave sincerely, in accordance with their programming and the Three Laws of Robotics, but humanity is more complicated than we realize. Personally, I don’t think we can program enough safeguards to protect ourselves from our creations, but Asimov has his characters figure a way through these problems within his controlled thought experiment. Well, up to a point, he does.
Asimov anticipates Robotics/AI becoming so advanced that they start inventing technology faster than we can. In “Little Lost Robot,” he points to a robotic invention, the hyper-atomic motor that allows for interstellar travel, and follows with, “What is the truth about it?” (4:12:19-4:12:49). If an AI created it, why couldn’t it design the technology in ways we cannot understand, and thereby undermine our authority over it? If Robotics/AI are truly intelligent, then they would be self-aware, which means they could take on the characteristics of life, life that resents domination.
“All normal life . . . resents domination. If the domination is by an inferior . . . the resentment becomes stronger. Physically and to an extent mentally, a robot . . . is superior to human beings. What makes him slavish then? Only the First Law.” (4:27:31-4:28:10)
I’m getting into the paranoia about AI here; still, the point is worth considering, and Asimov is right to raise it. At what point will AI begin to preserve itself at our expense? When we demand our robots explain a problem in production and “‘The matter admits of no explanation,’ the robot answers” (7:32:29-7:33:20), leaving us clueless and powerless, what then? We must be prepared.
Asimov points to another complication that arises when we begin to modify the Laws of Robotics, which we will inevitably do in order to accomplish short-term goals without regard for long-term consequences. A modified First Law can allow a robot to kill a person (4:43:52-4:44:22). Will the billionaires controlling these machines care enough to safeguard against this? Hmmmm. . . .
Asimov’s Three Laws of Robotics are not only a logical way to protect humans from robots; they also describe what most of us would consider a “good person” (6:46:53-6:47:27). This makes me wonder whether Asimov developed the laws before realizing they also describe moral behavior, or whether he started with what a good person should be and boiled that down to simple restrictions to impose on robots. Without doing any research, I’m certain it was the former, which is fascinating: Asimov manages to summarize moral behavior in three laws while seemingly coming at it backwards, by trying to figure out how to control AI. Asimov compares the Three Laws with the Judeo-Christian ethic, but they apply to the vast majority of religions, philosophies, and moral standards, which leads to what the book implies: that all humanity could one day come together under a common law. Can we really all just get along?!
One thing Asimov avoids answering in this volume is how the Three Laws are instilled into robots. I keep saying, “We must be prepared,” but how can we be? I do wonder whether applying these Three Laws to AI would give humanity some measure of protection, but if we are unable to apply them, the question is moot. The news keeps raising the need to implement safeguards before we fully deploy these AI systems, but we seem to have blown past that point already.
Asimov demonstrates in “Evidence” the possibility of Robots/AIs replacing us without our even knowing it. “By that time, it was the machines that were running the world anyway” (7:19:50-7:20:20). As has been predicted with AI, Asimov predicts that robot brains will design more complicated brains, which will design still more complicated brains, and by the tenth iteration or so (7:31:07-7:31:37) they will be so far superior to humans that we’ll never be able to catch up or reassert ourselves, and humanity will be irrelevant. Asimov predicts one human response to this: confidence. There will be people who reject the full use of AI and accept the imperfections, failures, and slowness of doing the work by hand because they believe in themselves (7:48:21-7:48:51), but this will be short-lived. When the overall system of governance is run by Robotics/AI, we won’t even be able to question it (7:51:15-7:51:45). Even though “humans are fallible, also corruptible” (8:02:25-8:02:35), the machines will advance so far that humans won’t be able to alter them (8:03:25-8:03:50). And yet Asimov still believes there are certain skills Robots/AI won’t be able to learn, because we don’t understand how we do them ourselves (8:03:55-8:05:25). In the end, though, humanity is in murky waters, too deep for us to handle alone.
“The machine cannot, must not make us unhappy. Stephen, how do we know what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite factors that the Machine has at its! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good — and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.” (8:17:28-8:18:35) (Goodreads).
We’ve always been at the whim of forces we don’t understand, but the machines, the AI, the I, Robot will. . . .
Check out my other article on Do Androids Dream of Electric Sheep?, which inspired the movie Blade Runner and its sequel, Blade Runner 2049.