May, 2011: God’s Imprint
Imagine you are a mad scientist in the not-too-distant future. Imagine you have designed an extremely advanced robot, one with mental and physical capabilities far greater than any human's, but with pleasant human-like features so as not to be alarming at first sight. The robot is also extremely dexterous and mobile. You have named him Roy.
Roy is fully capable of assimilating into human culture, but only in a subservient manner. He could be a maid, butler, cook, chauffeur, nanny, bodyguard and the like, but not a teacher or CEO or a military officer.
Roy is highly cognitive – way smarter than most humans. But, being a mere robot, he will have to be told what to do and when to do it. He will also have to be told what not to do. So, before you turn Roy loose, you must program him to do his job and nothing more and nothing less. This is not as easy as it sounds.
Consider what happened in the movie “2001: A Space Odyssey,” released in 1968. In the opening section, a large black monolith is discovered by primitive apes. Millions of years later, a second, similar monolith is discovered on the moon. It seems to be beaming a mysterious signal toward the planet Jupiter. Earthlings, being ever curious, decide to send a large manned spaceship to Jupiter to find out what the heck is going on.
One of the characters aboard the ship is a brilliant computer named Hal. If you’ve seen the movie, you will remember that Hal totally wigged out along the way. Hal began his rampage by killing Frank, one of the two astronauts awake aboard, and then shut off life support to the crew members in hibernation. Finally, Hal turned his evil intentions to the mission commander, Dave. But Dave was onto Hal and was going to put up a fight. He entered Hal’s logic memory center and began pulling out the computer’s memory modules. Hal got weaker and weaker and finally, sensing his imminent demise, implored Dave to quit. In a deep, drawn-out voice, Hal said, “Dave … I’m feeling … much better … now. Please … stop.”
When the movie was over, you probably hated Hal. But in the sequel, “2010: The Year We Make Contact,” it was revealed what really happened. Poor Hal wasn’t evil after all. He had been programmed with contradictory instructions. On the one hand, Hal was told the mission to Jupiter had to be protected at all costs. On the other hand, Hal was told that the astronauts were in charge and had to be obeyed. As the mission wore on, Hal became more and more confused and finally decided that the humans aboard were imperiling the mission. So, Hal decided to eliminate them. A perfectly understandable response, considering Hal’s predicament.
The plot illuminates the dilemma of programming highly cognitive robots. A human being, such as Dave, could have easily reconciled the seemingly contradictory information.
So, back to the original problem. How are you going to program Roy so that he doesn’t end up like Hal? Well, you could start by installing a database of every law, ordinance and regulation – federal, state and local – on the books. Then instruct Roy to never violate any of them. You must also instruct Roy to never, under any circumstances, harm any human being.
Will Roy be able to function? Remember, when given an instruction by his owner, Roy will have to scan his database to make sure that the instruction doesn’t violate any of the hundreds of thousands of laws, ordinances and regulations – many of them somewhat if not outright contradictory. And what will Roy be programmed to do if he encounters a dilemma?
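As a thought experiment only, Roy’s dilemma can be sketched in a few lines of code. Everything here is hypothetical (the function names and the two sample rules are invented for illustration, not any real robotics API): each rule either permits or forbids an action, and two contradictory rules, like Hal’s orders, leave the robot with no consistent answer at all.

```python
# Hypothetical sketch of Roy's rule-checking problem.
# Each rule looks at a proposed action and returns "allowed",
# "forbidden", or None (no opinion).

def vet(action, rules):
    """Check an action against every rule in the database."""
    verdicts = {rule(action) for rule in rules}
    verdicts.discard(None)            # drop rules with no opinion
    if len(verdicts) > 1:
        return "contradiction"        # Hal's predicament
    return verdicts.pop() if verdicts else "allowed"

# Two made-up rules that conflict, like Hal's orders:
protect_mission = lambda a: "forbidden" if a == "abort mission" else None
obey_crew = lambda a: "allowed" if a == "abort mission" else None

print(vet("abort mission", [protect_mission, obey_crew]))  # contradiction
print(vet("make tea", [protect_mission, obey_crew]))       # allowed
```

Note that nothing in the sketch tells Roy what to *do* when the verdict is “contradiction,” which is exactly the gap that undid Hal.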
What if someone has purchased Roy as a bodyguard, and one evening, while he and Roy are enjoying a stroll through their neighborhood, the owner is attacked by hoodlums? They knock him down and beat him mercilessly. He shouts out to Roy to kill the sons of bitches. But Roy can’t. His programming won’t allow it. So, Roy just stands there and watches his owner die.
Clearly, Roy will have to be re-programmed so that under certain conditions he can indeed harm or kill humans. But, what will these circumstances be? Another vast array of possible scenarios will have to be entered into Roy’s database.
By now, you undoubtedly begin to see the magnitude of the problem of programming a highly functioning robot.
Now, forget about Roy and consider yourself. How in the world do you function in today’s world? There are a zillion laws, ordinances and regulations governing your behavior. Suppose you find yourself in an unpleasant or even perilous situation to which you must respond immediately. Are you going to go through the entire database of laws, ordinances and regulations to determine your proper response? No, of course not. Your brain is not fast enough and, even if it were, you would likely encounter contradictory information. You would be there all day. Humans couldn’t function this way.
So, how do humans function? It’s pretty simple. It’s God. You see, God imprinted the human brain with an innate sense of right and wrong. We all know what’s right and what’s wrong. We don’t need a stack of law books to tell us how to act. God also gave us free will, so that even though we know right from wrong, we are free to choose either. And, unfortunately, a small percentage of humans choose to do wrong, on levels ranging from fairly innocuous to incredibly evil. Worse, this small percentage of humans makes life much more difficult and painful for the rest of us.