This is another one that feels like a bit of a cheat. Similarly to Runaround, where the Third Law had been tweaked up to help the robot survive the hostile environment, in this one the First Law has been tweaked down to prevent the robots from interfering with the humans' work by rushing in and trying to save them from the low levels of radiation that their work entails.
Well, that’s just a great idea, isn’t it? What could possibly go wrong with that?
As it turns out these robots are well aware of their superiority and this makes them jolly annoying colleagues for the humans they work with. When a human finally loses his temper and tells one to ‘go lose yourself’, it disappears among the identical but non-modified robots. Susan Calvin is called in to try and sort out which of the robots is the potential killer. The search for the impostor raises an important question for me: don’t these robots have individual serial numbers?
It is incredible - literally, unbelievable - that the robot mind is so fragile. No wonder they're always shouting at each other - they must live in constant stress waiting for the inevitable robot freak-out.
Each positronic brain appears to be more or less a black box. Once it’s started there’s no telling what experience with the outside world might do. And once it’s started, they can’t take it apart and identify problems by looking at the hardware (and there’s no mention of software here – these stories were clearly written in the infancy of computer science; in fact, the stories make a distinction between computers and robots that we probably wouldn’t make today). I can’t fathom why they just wouldn’t scratch a number on each one so they could keep track. How can they tell when it’s time to service them or which robot is responsible for a malfunction?
So, in the absence of the blindingly obvious answer, Dr Calvin's response is to probe the robots' adherence to the First Law with a series of tests that put the First and Third Laws into conflict. The 'modified' robot would be slightly more likely to protect itself than to save the human she puts in danger.
Unlike Powell and Donovan, who relied on deduction and insight, Susan Calvin has a far more methodical approach. She tests and interviews all the robots repeatedly, measuring reactions and looking for the tell-tale split second that signifies that one of the robots is probably the one she's looking for. The solution in the end comes from something closer to the insight methods used by the field agents, but she at least feels a bit like a proper scientist or engineer.
Like a lot of stories, this one turns on the potentially toxic nature of language on vulnerable minds. The robot goes missing when it’s ordered in strong terms to ‘go lose yourself’. It’s an odd formulation that we might more usually frame as ‘get lost’. ‘Go lose yourself’ sounds like a Google translation – maybe it’s a translation from English to Asimovian. More intriguingly, the modified robot talks with its fellows when they’re in storage between Dr Calvin’s tests and tips them off about how to subvert the tests.
It's curious to think about this in the context of the long-running, eternal even, debate on censorship. Asimov's robots are childlike in their apprehension of the world and take things over-literally. In this story at least they also have that kind of self-obsession that eight-year-olds have. I'm sure that when people say kids are psychopaths it's eight-year-olds they have in mind. I suppose we've come a long way from the dog-like loyalty of Robbie and the wind-up toy intellect of Runaround.
Anyway, it’s a fun story, once one puts aside the serial number issue, with a satisfying conclusion that ties up nicely with the ideas put forward in the opening paragraphs. I wonder, though, how this matter of robot superiority is going to play out? Up to now the robots have appeared like over-literal children with odd and perky personalities. This story suggests deeper human emotions like resentment and the desire to be free. How far can our slave narrative go?