1. Hmm. In some ways, I’m not feeling too bad at the moment, although my sinuses (well, one of them) are still throbbing a bit after that recent URTI. What is much more troublesome is the energy problem just now: I seem to be running on about half my normal available energy most of the time, and, most days recently, it just plummets mid-afternoon.
I don’t know if you have ever seen the BBC series Moonbase 3 (which, come to think of it, was made in 1973). It is set on the European Moonbase (bases One and Two are American and Russian, obviously) in 2003. If you see it now, it is quite peculiar (apart from the production values, which were actually pretty standard for 1973) for two reasons: one is the continuing presence of the Cold ( ... )
Lol. I have read several discussions relating to this very issue (most are probably on chronicles-network.com; I can't be more precise as I have a terrible memory). But I quite agree with you (even though my preference is for fantasy). Sci-fi is more than space exploration and stun guns. It's like these people who say 'but it can't be fantasy; there's no swords or sorcery'. To me sci-fi is about how technology is used and dealing with advanced societies ... actually, if I continue with this I'd really show myself up ( ... )
That’s an understandable viewpoint; the snag is, how do you demonstrate sentience? If a machine passes the Turing test, is it sentient or just well-enough programmed to deceive a human? How do you demonstrate people are sentient (as opposed to just assuming they are because they’re human)?
Tricky territory.
I can't say much right off the top of my head as I would probably end up talking out of my a*se, but I would say that you could demonstrate that an AI lifeform is a sentient being if it can behave beyond its programming. Of course, if its programme is to act beyond its programme, then sentience would be even harder to prove. (Isn't that what the Turing test asks? Could you remind me?) Or you could get a bit more metaphysical(?) by saying that an AI lifeform demonstrates sentience when it is capable of making moral decisions (a la Dorfl in Pratchett's Feet of Clay). The ability to make a moral choice is, I think, what separates us from all other lifeforms. This ability is what makes us (apparently) 'superior'. It is the responsibility factor of being a moral being that explains why so many people turn their backs on morality. ... But then, if morality is a choice, then even if you programmed an AI to make moral choices between right and wrong, a sentient AI may choose not to follow your morality (a la God and Adam ( ... )
How do you recognise a moral decision? It is possible to make choices which might be interpreted as moral out of self-interest or even pure logic, depending on the circumstances. If I see you do something, say help someone you don't know in a way that is some inconvenience to you, perhaps I have seen you do something rooted in a moral sense; or I may have seen you do something because you think it will get you something (a job, a hot boyfriend, whatever). From the outside, there is no way of telling what is going on in your head. Some people will tend to ascribe moral or admirable motives to others' actions, while others will always tend to assume the basest motives; but there is no way of proving either group more right than the other.