For once, here is me being vaguely optimistic about a new science!
Like most people, I am simultaneously excited and disturbed by
transhumanism. To-day I had a very interesting conversation with my friend Billy about the possibility of downloading a human mind into a computer program. We decided that the traditional argument, that the computer would be delusional in thinking itself the original human, is illogical. If the program is an exact copy of the human mind at the moment of duplication (or at least starts out that way), it should remember that you were contemplating copying your mind into a computer; and from that premise, the conclusion that it itself is the computer in question is not a great logical leap, especially since the program should be aware of its own inability to do anything but think.
And the great thing is, if you're cool with the idea of copying yourself into a computer, the program should be fine with being a program. This eliminates the
Harlan Ellison problem, the AI that hates its own existence. Because, really, why should an AI be anything but an AI? K9, for example, is a computer and never aspires to be anything else, but he is nevertheless made of badass.
Billy and I agree that this hypothetical technology should only ever be used for research purposes. Aside from the obvious applications to chaos theory (measuring how two initially identical minds diverge), it would be an interesting exercise in the philosophy of mind and the philosophy of religion. If the AI is obviously inhuman, that would tend to point to the existence of some sort of soul that cannot be communicated to a computer program. If, on the other hand, the AI acts reasonably human (within the context of being a self-satisfied AI), there are two ways to interpret it. The first interpretation is that souls are irrelevant. The second, touched on by C.S. Lewis in The Abolition of Man, is that souls are very relevant, but a far broader category than previously thought.
There is also the idea proposed by David Chalmers that it is possible to have a robotic soul that is functionally identical to, but ontologically distinct from, the sorts of souls that organic beings have. Whether or not this is relevant to anything but the possibility of immortality is for him and Dennett to argue about.
In general, I think this is definitely an 'approach-with-caution' science. Assuming that blindly following the urge to mesh with our technology will lead to some sort of techno-utopia is stupid, but so is rejecting out of hand any application of technology to philosophical problems.
Thoughts?