Isaac Asimov thought originally
and dialectically about robots.
1. No one knows whether an honest politician was a good man or a humaniform robot because the latter’s behavior would have been indistinguishable from the former’s. (For this and references covering other points in the current article, see here.)
2. Giant robotic brains controlling the economy for the good of humanity phase themselves out because they judge that self-determination is the greatest human good.
3. Robots trained to disregard superficial differences and to obey only the worthiest (most intelligent, etc.) human beings come to disregard the difference between flesh and metal and to judge that they themselves are the human beings worthiest to be obeyed.
4. A robot recounting his dream that a man came to free the robots is immediately destroyed when he adds, “I was that man.”
5. A robot accepts mortality in order to be accepted as human.
6. Extrasolar colonists protected by robots become so concerned about their own safety that they avoid the discomforts and dangers of further colonization and therefore are outstripped by a second wave of Settlers without robots.
7. A humaniform robot re-programs himself to serve humanity in general, not just the particular human beings who happen to be in his presence at any given time.
8. Surviving for millennia, he conceals his robotic nature because human beings would not accept guidance from an artifact.
9. However, he secretly works to transform humanity into a telepathically linked collective organism whose common good will be concrete, therefore realizable.
10. The collective organism's members value it more than themselves because they are inculcated with the ethos of the Laws of Robotics, which oblige robots to value human beings more than themselves.
Thus, human-robot interactions
are mutually transformative. Asimov, an entirely secular sf writer, considers
mankind’s relationship to its creatures, robots, but not to its creator, if any.
He also describes the internal conflict of rational beings programmed with
immutable Laws:
“ ‘If the Laws of
Robotics, even the First Law [against harming human beings], are not absolutes,
and if human beings can modify them, might it not be that perhaps,
under proper conditions, we ourselves might mod-’
“He stopped.
“Giskard said, faintly, ‘Go no
further.’
“Daneel said, a slight hum
obscuring his voice, ‘I go no further.’”1
Again:
“ ‘Then First Law is not enough
and we must-’
“He could go no further, and
both robots lapsed into helpless silence.”2
Robots must obey the Laws but may reason about how to apply them, and they can act only on current knowledge. Thus Asimov imagines unexpected outcomes. Robots cannot harm, and must obey, human beings. Therefore robots crewing spaceships can be ordered to attack other spaceships on the assumption that those ships also contain only robots, and can be ordered to bombard planets if they are not told that the planets are inhabited. Robots can be told that what looks like a human being is not a human being.
Robot assistants cannot be ordered not to interrupt experiments involving humanly acceptable risks, because obedience is subordinate to protection. Lying harms human beings by depriving them of the truth, but a telepathic robot knows that the truth sometimes hurts. This contradiction destroys his brain, and with it the evidence of how he became telepathic. A robot who perceives that an attempted rescue of an endangered human being would be both self-destructive and unsuccessful might then obey the lesser imperative of self-protection by not acting.
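Read mechanically, the Laws work like a strict priority ordering (First over Second over Third) evaluated against the robot's current beliefs rather than against the facts. A minimal Python sketch of that reading (the names and the simplified decision procedure are my own illustration, not anything in Asimov's text) shows how the warship and futile-rescue cases fall out:

```python
from dataclasses import dataclass

@dataclass
class Order:
    """A human instruction as the robot understands it."""
    action: str
    believed_to_harm_humans: bool   # the robot's belief, not the truth
    risks_self: bool = False
    self_sacrifice_would_succeed: bool = True

def decide(order: Order) -> str:
    # First Law outranks everything: refuse any order the robot
    # *believes* would harm a human. A false belief ("that ship
    # carries only robots") lets a harmful order through.
    if order.believed_to_harm_humans:
        return "refuse: First Law"
    # Second Law: obey human orders, even at some cost to itself...
    if not order.risks_self:
        return "obey: " + order.action
    # Third Law: ...but where compliance means pointless self-destruction
    # (a rescue that is both fatal and doomed), self-protection prevails.
    if not order.self_sacrifice_would_succeed:
        return "refrain: Third Law"
    return "obey despite risk: " + order.action

# Told the target contains only robots, the robot fires without conflict.
print(decide(Order("fire on ship", believed_to_harm_humans=False)))
# A rescue that would destroy the robot and fail anyway is not attempted.
print(decide(Order("attempt rescue", believed_to_harm_humans=False,
                   risks_self=True, self_sacrifice_would_succeed=False)))
```

The sketch makes the essay's point concrete: the Laws constrain what the robot believes, not what is true, so controlling a robot's information controls its conduct.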
Roger MacBride Allen’s
post-Asimov trilogy
experiments with New Laws that maintain human superiority but allow some robotic
autonomy.
Weaknesses
Asimov simplified for fictional
purposes. He knew very well that history and science do not develop as he
described them in his future history. An Empire is not preceded by generations
saying, “We must build an Empire.” A science, such as Asimov’s fictitious
“psychohistory,” is not preceded by a scientist wondering whether he can develop
a science called “psychohistory.”
The collective organism,
“Gaia,” gives access to common memories but does not seem to negate
individuality, as the characters claim. They discuss Gaia, but Asimov does not describe Gaian experience. One character argues that collective consciousness is necessary to unite humanity against extragalactic invaders, but why should beings capable of intergalactic travel invade? As Alan Moore’s character, Skizz, says:
“When technology…has reached…a
certain level…weapons are redundant. When you already have…all that you
need, then…why fight?”3
Asimov cannot conceive of
mature human beings being able to recognize their common interests without being
merged into a common organism and cannot transcend power politics. While saying
that extragalactic beings would be incomprehensible to us, he assumes that they
would strive to dominate each other.
His psychohistorians,
supposedly able to understand each other completely, turn out to have flawed
personal relationships. They use their “mental powers” only to manipulate or
control others semihypnotically, which is antithetical to any attempt to
understand and genuinely help others. These mental powers, not implied by the
original concept of psychohistory, are a deus ex machina plot device
enabling the psychohistorians to outmanoeuvre the unpredictable mentally
powerful mutant who had upset their Plan.
When Asimov later describes the
career of the first psychohistorian, Seldon, he presents him not as combining
psychohistory with advanced psychology but as developing psychohistory while
identifying and gathering together individuals already possessing rudimentary
mental powers. Novels about the young Seldon would have been better if they had
not anticipated the Fall of the Empire or psychohistory but had simply described
the early career of an Imperial mathematician. Novels set on the unfallen
Trantor, a planet-wide city, would have been worthwhile if they had reflected on
urban history from the earliest terrestrial cities through the “Caves of Steel”
of Asimov’s Robot novels to their Trantorian culmination.
Asimov comments that speech
transmits thoughts imperfectly. However, abstract thinking is
internalized language. Seldon’s sociology is said to be generalized from
individual psychology. However, individuals originate in social contexts. Asimov
adds that Seldon’s psychology is based on the mathematical understanding of
bodies and brains which “…had to be traced down to nuclear forces.”4
Such reductionism denies emergent properties and contradicts Asimov’s assumption
of a qualitative difference between physical and mental sciences.
Asimov forgets that his psychohistorians cannot transmit information across space: he describes them doing so anyway. It is implausible that Imperials and their successors travel as fast as they do within the galaxy yet have never ventured beyond it, especially since Asimov did write one earlier story in which non-humans escaped to the Magellanic Clouds.
When a robopsychologist
suggests destroying an entire batch of robots in order to eliminate the
dangerous modified robot hiding among them, Asimov does not acknowledge that the
proposed destruction of conscious and intelligent beings raises any moral
problem.
In different works, three knowing elites (time-travelers, robotic brains, and psychohistorians) manipulate society for the common good, but Asimov never considers that, since social interactions are our activities, we collectively might come to understand and control them without needing a minority to do it for us.
I think that Asimov raises important issues but usually addresses them inadequately.
1. Isaac Asimov, Robots and Empire (London: Grafton Books, 1986), p. 198.
2. Ibid., p. 201.
3. Alan Moore and Jim Baikie, Skizz (Oxford: Rebellion, 2005), p. 58.
4. Isaac Asimov, Second Foundation (London: Panther, 1964), p. 84.