QUOTES
"By the time I was in my late teens and already a hardened science fiction reader, I had read many robot stories and found that they fell into two classes.
"In the first class there was Robot-as-Menace. I don't have to explain that overmuch. Such stories were a mixture of 'clank-clank' and aarghh' and 'There are some things man was not meant to know.' After a while they palled dreadfully and I couldn't stand them.
"In the second class (a much smaller one) there was Robot-as-Pathos. In such stories the robots were lovable and were usually put upon by cruel human beings. These charmed me. In late 1938 two such stories hit the stands that particularly impressed me. One was a short story by Eando Binder entitled 'I, Robot' about a saintly robot named Adam Link; another was a story by Lester del Rey, entitled 'Helen O'Loy' that touched me with its portrayal of a robot that was everything a loyal wife should be.
"When, therefore, on June 10, 1939 (yes, I do keep meticulous records), I sat down to write my first robot story, there was no question that I fully intended to write a Robot-as-Pathos story. I wrote 'Robbie,' about a robot nurse and a little girl and love and a prejudiced mother and a weak father and a broken heart and a tearful reunion. (It originally appeared under the title - one I hated - of 'Strange Playfellow.')
"But something odd happened as I wrote this first story. I managed to get a dim vision of a robot as neither Menace nor Pathos. I began to think of robots as industrial products built by matter-of-fact engineers. They were built with safety features so they weren't menaces, and they were fashioned for certain jobs so that no pathos was necessarily involved.
"As I continued to write robot stories, this notion of carefully engineered industrial robots permeated my stories more and more until the whole character of robot stories in serious printed science fiction changed - not only that of my own stories, but of just about everybody's.
"That made me feel good and for many years, decades even, I went about freely admitting that I was 'the father of the modern robot story.'"
- Isaac Asimov, Introduction to "The Complete Robot"
"In 1950 he (Alan Turing) devised the Turing test for machine intelligence. Essentially this would involve a human being talking, probably via a keyboard, to a machine and to another human. If, after a reasonable period of time, he could not tell the difference, then we would be obliged to credit the machine with intelligence. This is a morally significant assumption. It means we do not need evidence of the machine's 'inner mind'; we can work on the basis of only what it does. Normally I credit other people with minds because I believe I have one. But this is a statement of faith. The only truly scientific test is behaviour, and so a machine that passes Turing's test must be regarded as intelligent and, presumably therefore, a moral agent. There is no evidence of inferiority."
"A recent episode of Star Trek covered this ground rather elegantly by staging a trial to establish whether the android Data had rights. Significantly, the baddie was the man who said he hadn't, and the goodie - Captain Picard - was the one who said he had. There is an odd reversal of values going on here, in which the devotedly liberal Picard wants to extend morality to machines. Once he would have been Aldous Huxley or G. K. Chesterton, attacking the mindlessness of technocratic culture. Now he is a starship captain who sees that androids have feelings too."
-Bryan Appleyard
MAIN TEXTS
Asimov's Rules Of Robotics (on-site link)
"Supertoys..." Aldiss (Short story basis for the film "A.I.") PDF
"Neuromancer" Gibson (Link to Amazon.com)
"Blade Runner" script pdf (local video/DVD store)
RELATED MATERIAL
"2001: A Space Odyssey" script pdf (film or book version)
"Do Androids Dream Of Electric Sheep?" Dick
"I, Robot" Asimov
"The Bicentennial Man" Asimov
DISCUSSION/ESSAY TOPICS (Please post comments on the related thread on the discussion board)
1. Asimov views robots as benevolent, but does not see their existence as beneficial to man. How does he resolve this paradox?
2. To what extent do robots like C-3PO or Data reflect human desires to "tame" intelligence?
3. Collect and list examples of "the machines take over/run out of control" stories. What do they have in common, and how do they differ?
4. HAL 9000, in the film version of "2001: A Space Odyssey", acts irrationally and eventually commits murder. Do you think this homicidal streak was a flaw in his programming from the start of the mission, or part of the monolith's influence? What arguments can you present to support your opinion?