Thursday, May 27, 2010

Different than the Sum of Our Programming

I've been spending a little time blogging about how mechanization and cybernetics have been affecting human beings. In Misfit, I suggest that two "robotic" DC Comics characters, Cyborg (from the Teen Titans) and the Doom Patrol's Robotman, although enabled as "superheroes" by their prosthetic bodies, are also alienated from the humanity they seek to serve. In my more recent post A Human Heart and Courage, I point to the fact that, regardless of how much of our bodies is made up of artificial parts, our humanity transcends our physicality and inhabits perhaps a more ethereal space. We are more than, or at least different from, the sum of our parts.

Edward Page Mitchell's short story The Ablest Man in the World was published in 1879 and recounts the tale of a man who has a computer implanted in his head, causing him to become a genius. If we can tie into every other body part and organ with cybernetics, will we one day really be able to augment the human brain with computer chips?

That's pretty far out, but consider that in some ways, people are already "programmed" with a set of instructions that determines our perceptions, responses, and behavior. These instructions vary from person to person, based on how we were raised, our experiences, and how that input was encoded and given "meaning" for each of us. I was about to say that, as we develop, we begin to think independently, but anyone who has been around a two-year-old realizes that people are very independent at the beginning, and that our programming provides the framework for complying (to one degree or another) with society's behavioral expectations and the expectations of our particular family, peer, and associative groups.

In science fiction, we often attempt to impose how people learn onto robotic and computer programming. Consider the Star Trek original series episode The Ultimate Computer. Based on a story by Laurence N. Wolfe, with a teleplay by D.C. Fontana, the episode relates how the Enterprise is reduced to a skeleton crew and has a revolutionary computer called the M5 installed in engineering for testing, as ordered by Starfleet. The M5 is supposed to be able to perform most of the functions that normally require a human(oid) crew aboard a starship. Kirk argues that the one thing it can't do is make value judgments. The M5 can't actually "think"...or can it?

Its creator, Dr. Richard Daystrom, the scientist who originally invented the computer systems currently used aboard Federation starships, says that he has developed a "whole new approach" that solves the "thinking" problem by imprinting his own memory engrams on the computer circuits, giving the M5 what amounts to a personality.

This is a very, very old story in science fiction with an old conclusion. It turns out that Daystrom has an unstable personality, and that mental instability was transferred to the M5. When the computer mistakes a series of war game exercises for a real attack and begins destroying starships and killing people, both Daystrom and the M5 are "unplugged" by Kirk, "proving" that people are ultimately superior to machines.

Comic book lore of the same era addresses the same issue of overdependence on machines at the cost of human liberty and autonomy. Magnus, Robot Fighter: 4000 AD is the story of a man, raised in a hidden base under the Antarctic by a benign and wise robot named 1A, to become humanity's guardian, both philosophically and physically, defending against both mankind's overdependence on robots and any physical robotic threats against it. Magnus becomes the ultimate man and role model, trained to mental and physical perfection...almost a Messiah-like figure who can beat up robots with his bare hands and who continually warns the people around him that they'll lose their uniqueness as people if they keep letting robots "take care" of them (think WALL-E). Jack Williamson's novel The Humanoids conveys the same essential message.

Are people superior to machines in our "programming"? Do we behave better or more "morally" than a robot would if we could program a robot to a human level of complexity? Isaac Asimov asked that question in "Evidence," one of the short stories in his I, Robot collection. At some future date, attorney Stephen Byerley is running for the office of Mayor of New York. His opponent, Francis Quinn, levels a rather odd accusation against him: Quinn claims that Byerley is a robot created in human form, with an outer shell of human flesh, much like a Terminator. Think about it.

Asimov's classic Three Laws of Robotics state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


If you tweak the wording of the laws just a little bit (and they have been tweaked, so you don't have to reinvent the wheel), the three laws describe the behavior of a reasonably moral human being. If people were "programmed" with the three laws, how would our behavior be different? In Asimov's story, Byerley manages to prove he's a human being by punching a heckler during a speech, something a robot would not be able to do, but through to the end of the story, it remains uncertain whether Byerley is really human or a cleverly disguised robot (Byerley could have arranged for the heckler to be another humanoid robot, enabling him to hit the heckler without breaking the First Law).
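
To make the idea of being "programmed" with the laws a bit more concrete, here's a minimal sketch (in Python, with all of the names hypothetical) of the three laws as an ordered set of rules, where a lower law only comes into play if no higher law has already settled the question:

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False       # would carrying this out injure a person?
        ordered_by_human: bool = False  # was this action requested by a person?
        risks_self: bool = False        # would it endanger the robot itself?

    def permissible(action: Action) -> bool:
        """Evaluate an action against the three laws, highest priority first."""
        # First Law: never harm a human (harm through inaction isn't modeled here).
        if action.harms_human:
            return False
        # Second Law: obey human orders, now that the First Law check has passed.
        if action.ordered_by_human:
            return True
        # Third Law: otherwise, avoid self-destruction.
        return not action.risks_self

    # Orders outrank self-preservation...
    print(permissible(Action(ordered_by_human=True, risks_self=True)))   # True
    # ...but the First Law outranks orders.
    print(permissible(Action(harms_human=True, ordered_by_human=True)))  # False

Spelled out that way, the priorities read like plain old human ethics: don't hurt people, honor reasonable requests, and look after yourself, in that order.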

I'm not suggesting that we program human beings as we would machines. That story has been told too, in another Star Trek original series episode, What Are Little Girls Made Of?, written by Robert Bloch; it's another "free-will humans are better than programmable humanoid androids" story. Yet what if we chose to structure our lives around something like "the three laws"?

I seem to be focusing a lot on the works of Gene Roddenberry, because one obvious example of a "machine who would be man" leading the way to more "human" behavior is Lt. Commander Data from the Star Trek: The Next Generation series. More than once in STTNG episodes, Data states that he was programmed with the three laws and that his basic functioning depends on them.

Data is actually based on a failed TV pilot written by Roddenberry called The Questor Tapes (1974). Robert Foxworth plays the android Questor, designed and built by a brilliant scientist named Vaslovik. Vaslovik has disappeared, but a team of scientists, including Vaslovik's protégé Jerry Robinson (played by Mike Farrell, best known as B.J. Hunnicutt on TV's M*A*S*H), attempt to finish Vaslovik's work by programming Questor.

The Vaslovik programming tapes were damaged when project head Geoffrey Darrow (John Vernon) attempted to have them analyzed, so when Questor is eventually activated (after everyone else has gone home for the night), he has all of his intellectual capacities but, like Data, no emotional awareness. Like Data, Questor also has a compelling drive to understand the people and world around him and a need to help human beings. Although it's never stated, Questor's actions suggest he is "three laws" compliant.

Questor, Vaslovik, and a long series of androids before them, as the audience discovers during the film, were placed on Earth by an alien race to help guide humanity into maturity and prepare us to join the interstellar community of intelligent races. Vaslovik had to take himself out of service before activating Questor to replace him, due to exposure to contaminants produced by modern technology. Questor is to be the last in the series, with a "lifespan" of 200 years. Since he is without the emotions he was intended to possess, Robinson joins Questor as his "emotional mentor" on his mission to covertly guide selected people to become teachers and other leaders, gently shepherding human society into a more peaceful existence.

In real life, we probably can't depend on some wise alien species coming to Earth and either overtly or secretly giving us a hand and helping us not to be jerks on a planetary scale. However, our science fiction and fantasy stories do possess the hint of an answer to all of the problems people seem to create.

While people can't be programmed the way machines can, we have the ability to learn from our experiences and to make a few right decisions. If we don't know what those decisions should be, we could point to Asimov's three laws as a foundation. Rather than saying that human beings are somehow better than our fictional robots, maybe we should let those robots be our guides, at least metaphorically. If we acted like the robotic guides in our fantasies, maybe reality and humanity would be a bit more livable.

The thing is, unlike a computer, or a person with plug-ins such as Neo in The Matrix, we can't depend on some outside force inserting a device into our heads and instantaneously giving us what we need to know, including the will to obey the instructions provided. We also can't depend on blindly, robot-like, obeying what others have taught us; rather, we must exceed our "programming". We have to think and decide for ourselves what to do and then have the courage to do it.

Epilogue: I got the last image from the typepad.com blog. Seemed a fitting description of how human beings act in real life.

