I, Robot -- Installment One

Sep 08, 2010 01:32

When discussing I, Robot, lilisonna said:

We’ll start with the fundamentals: the Three Laws of Robotics. Do they make sense? If you were building an autonomous robot, do you think that you’d incorporate any of the three laws? All of them?

Just to remind those playing from home (because I had to look up the exact wording myself), the Three Laws are:
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I wonder whether adding a fourth law preserving human free will would keep the robots from taking over in the end. It would have to be a strong one to overcome the ‘allowing a human being to come to harm’ clause, and I’m not sure how it could be phrased to allow human mistakes and risks but still prevent murders and the like.
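A side note on the structure: the ‘except where such orders would conflict’ clauses make the Laws a strict priority ordering, where each law only gets a say once the laws above it are satisfied. Here’s a toy sketch of that ordering in Python (entirely my own construction, not anything from the book, with made-up Action fields just for illustration); a fourth law would simply be one more entry at the bottom of the priority list:

    # Toy sketch, not Asimov's design: the Laws as a priority-ordered
    # list of predicates. Each candidate action gets a tuple of booleans,
    # and max() over those tuples is exactly lexicographic preference:
    # satisfy the First Law first, then the Second, then the Third.
    from typing import Callable, NamedTuple

    class Action(NamedTuple):
        name: str
        harms_human: bool     # injures, or allows harm through inaction
        obeys_orders: bool    # follows the current human order
        preserves_self: bool  # keeps the robot in one piece

    LAWS: list[Callable[[Action], bool]] = [
        lambda a: not a.harms_human,  # First Law
        lambda a: a.obeys_orders,     # Second Law
        lambda a: a.preserves_self,   # Third Law
        # A hypothetical fourth law would be appended here.
    ]

    def choose(actions: list[Action]) -> Action:
        # True > False, so a lower law gets sacrificed whenever that is
        # the only way to satisfy a higher one.
        return max(actions, key=lambda a: tuple(law(a) for law in LAWS))

    options = [
        Action("obey the 'stand still' order", harms_human=True,
               obeys_orders=True, preserves_self=True),
        Action("push the human clear", harms_human=False,
               obeys_orders=False, preserves_self=False),
    ]
    print(choose(options).name)  # -> "push the human clear"

Of course, all the real work hides inside the predicates themselves (what counts as ‘harming a human’?), which is exactly where the stories keep finding their loopholes.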

Robocop has it easier with “Protect the Innocent” and “Uphold the Law,” but those seem even twistier: the robot has to know which laws apply with what amount of force and, even worse, make an immediate assessment of who the innocent party is.

Still, I think /some/ laws would be important. In “Little Lost Robot,” Asimov expounds at some length on the potential drawbacks of weakening the First Law.

I’m inclined to say that we’d never build completely autonomous robots (and possibly that the Three Laws, in fact, mean that Asimov’s robots aren’t autonomous either), because that’s just not how robots have come to be used. In general, our current society uses robots to perform very specific and repeatable tasks. The military uses robots a lot (and would love to use them more). And while for right now most of the “harming humans” bits are still controlled by humans, there are always calls for papers on Thinking Attack Robots. DARPA would love for someone to build them an unmanned attack vehicle that could accurately decide to fire upon a designated target.

However, there’s still an underlying fascination with Asimov’s laws. Maybe it’s that he got there first, or maybe the laws resonate more generally, but even in military robotics circles, there’s some attention paid to the Three Laws of Robotics.

As society is now, I agree that we wouldn’t be comfortable with completely autonomous robots. Even the idea of self-driving cars gives me the heebie-jeebies. If someone viruses my computer, it’s annoying but not immediately life-threatening. (For all that Weird Al would tell you otherwise in “Virus Alert.”)

I was very amused by the following tongue-in-cheek military laws by David Langford:
    A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
    A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
    A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
Late now, must fall over.

book club
