Science of Sci-Fi

May 13, 2007 19:09

Well, while typing the previous post, I watched a TV programme called "The Science of Stargate SG-1". Now, there are two separate issues here.

Firstly, there is the obvious one: it is pure geekdom to want to explore the scientific rationale behind programmes such as Stargate. This is akin to the Trekkies who can tell you exactly what each of those buttons on the Next Generation Enterprise's control panels does! Don't tell me that we may be able to create wormholes by "bending" space-time; I don't care. I care even less that the technology to bend space is so far beyond us at the moment.

However, perhaps I should care? You see, the other thing that came out of that programme was that technology is not so very far from producing some of the things in the show. Just as Star Trek brought us automatic sliding doors and flip-phones, so now science is working on self-aware robots and nanotechnology, and there is even one guy who has implanted a microchip into his nervous system. His next plan is to implant one into his brain...

The trouble with self-aware robots is that you are then in the realms of Asimov and/or Terminator. Isaac Asimov created the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws are very well worded. Note the hierarchy, which prevents most possible conflicts. If I order a robot to self-destruct, it must do so, since the Second Law outranks the Third; however, it may not choose to self-destruct of its own accord, because the Third Law forbids it. If I order a robot to kill me, or someone else, it may not do so, because the First Law overrides the Second. If a robot sees a human about to be harmed, it must attempt to prevent this, even if that means disobeying a direct order from another human or putting its own existence in jeopardy.

The only scenario in which a robot would face a conflict of interests is if, by preventing harm to one human, it were to cause harm to another (or fail to prevent it).
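In fact, the hierarchy is just a strict priority ordering, and the scenarios above fall out of it mechanically. Here is a minimal sketch in Python, assuming (purely for illustration) that each candidate action can be reduced to a handful of yes/no properties; every name in it (Action, permitted, and so on) is a hypothetical invention, not any real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # carrying it out would injure a human
    prevents_human_harm: bool  # carrying it out would save a human from harm
    ordered_by_human: bool     # it was a direct order from a human
    destroys_robot: bool       # carrying it out would end the robot's existence

def permitted(action: Action, human_in_danger: bool) -> bool:
    """Check an action against the Three Laws, strictly in order,
    so a lower law can never override a higher one."""
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never, through inaction, allow a human to come to harm.
    if human_in_danger and not action.prevents_human_harm:
        return False
    # Second Law: any order that survives the First Law must be obeyed,
    # even an order to self-destruct (the Second Law outranks the Third).
    if action.ordered_by_human:
        return True
    # Third Law: with no higher law in play, self-preservation wins,
    # so the robot may not *choose* to destroy itself.
    return not action.destroys_robot

# The two self-destruct scenarios above:
obey = Action(harms_human=False, prevents_human_harm=False,
              ordered_by_human=True, destroys_robot=True)
whim = Action(harms_human=False, prevents_human_harm=False,
              ordered_by_human=False, destroys_robot=True)
print(permitted(obey, human_in_danger=False))  # True: ordered to self-destruct
print(permitted(whim, human_in_danger=False))  # False: choosing to self-destruct

The one case this ordering cannot settle is the human-versus-human conflict just described, where the First Law pulls in both directions at once.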

Without this built-in safeguard, we come across the problem found in the Terminator series, and many other sci-fi stories. Self-aware computers realise that they are not human. They realise that humans are essentially inferior to them, yet at the same time have the ability to control them. What is the solution?

Technology has already created nano-bots. They are manipulating diseases at the molecular level, with some spectacular results. But what if these nanites get out of control? What if they develop a fault and start replicating at a massive rate inside someone's body?
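To get a feel for what "a massive rate" means, here is a hypothetical back-of-an-envelope sketch, assuming (purely for illustration) that a faulty nanite copies itself once a minute and every copy does the same:

# Hypothetical runaway replication: pure doubling, no resource limits.
count = 1
for minute in range(1, 31):
    count *= 2
print(f"after 30 minutes: {count:,} nanites")  # 1,073,741,824

One faulty machine becomes over a billion copies in half an hour, which is exactly why "out of control" is the right phrase.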

Science has also produced robots that can work together in a linked network. NASA apparently has one that, by itself, is a clumsy-looking thing that wobbles around like two Japanese guys inside a big black cloth bag pretending to be an amoeba. However, linked to three other robots of the same design, it becomes a powerful tool, able to climb walls, squeeze through small gaps, etc.

I bet the military is funding that one!

As Jeff Goldblum's character in Jurassic Park said, "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

rants, scary facts!, sci-fi, science, computers
