Various random thoughts (or long ramblings, take your pick)

Mar 30, 2006 00:28

1. Hmm. In some ways, I’m not feeling too bad at the moment, although my sinuses (well, one of them) are still throbbing a bit after that recent URTI. What is much more troublesome is the energy problem just now: I seem to be running on about half my normal available energy most of the time, and, most days recently, it just plummets mid-afternoon. ( Read more... )

solaris, technology, pepys, stanislaw lem, the register, lrrb, green wing, id cards, tuttle city manager, abuse of power, john brunner, health, sf, the fury of the geeks


Comments (6)

wibbble March 29 2006, 21:45:46 UTC
Some of the best science fiction, IMO, doesn't date technologically because the stories aren't about mass storage media, or whatever. Things like that are just props ( ... )


tanngrisnir March 29 2006, 23:56:24 UTC
I’d say just about all good (never mind the best) SF isn’t about the technology. On the other hand, if you are looking at a supposedly very technologically advanced society and a bit of equipment fails because of a valve going, or people are agonising over getting time on a computer, or the best storage people have is tapes or microfilm, it brings you up short. I suppose what interests me is the assumptions we make about the possible. In the 60s and even the 70s, you would look long and hard for anyone who had the slightest idea that computers would ever be other than vast, room-occupying machines, and access to them a preciously guarded commodity.

I don’t know if you have ever seen the BBC series Moonbase 3 (which, come to think of it, was made in 1973). It is set on the European Moonbase (bases One and Two are American and Russian, obviously) in 2003. If you see it now, it is quite peculiar (apart from the production values, which were actually pretty standard for 1973) for two reasons: one is the continuing presence of the Cold ( ... )



hermi_nomi March 30 2006, 06:59:00 UTC
...if I hear one more fuckwit film producer or academic type say of some piece of SF (literary or cinematic) that it isn’t really SF because it isn’t about spaceships and laser pistols or doesn’t involve expensive special effects, I’m inclined to start finding out names and addresses...

Lol. I have read several discussions relating to this very issue (most are probably on chronicles-network.com; I can't be more precise as I have a terrible memory). But I quite agree with you (even though my preference is for fantasy). Sci-fi is more than space exploration and stun guns. It's like these people who say 'but it can't be fantasy - there's no swords or sorcery'. To me sci-fi is about how technology is used and dealing with advanced societies ... actually if I continue with this I'd really show myself up ( ... )


tanngrisnir March 31 2006, 00:43:48 UTC
AI is AI, unless it can be proven to be as sentient as humans

That’s an understandable viewpoint; the snag is, how do you demonstrate sentience? If a machine passes the Turing test, is it sentient or just well-enough programmed to deceive a human? How do you demonstrate people are sentient (as opposed to just assuming they are because they’re human)?

Tricky territory.


Proving sentience hermi_nomi March 31 2006, 05:53:04 UTC
Philosophy debate :-))
I can't say much right off the top of my head as I would probably end up talking out of my a*se, but I would say that you could demonstrate that an AI lifeform is a sentient being if it can behave beyond its programming. Of course, if its programme is to act beyond its programme, then sentience would be even harder to prove. (Isn't that what the Turing Test asks? Could you remind me?) Or you could get a bit more metaphysical(?) by saying that an AI lifeform demonstrates sentience when it is capable of making moral decisions (a la Dorfl in Pratchett's Feet of Clay). The ability to make a moral choice is, I think, what separates us from all other lifeforms. This ability is what makes us (apparently) 'superior'. It is the responsibility factor of being a moral being that explains why so many people turn their backs on morality. ... But then, if morality is a choice, then even if you programmed an AI to make moral choices between right and wrong, a sentient AI may choose not to follow your morality (a la God and Adam ( ... )


Re: Proving sentience tanngrisnir March 31 2006, 09:11:27 UTC
The Turing test: can a human distinguish a machine from another human in conversation? It is usually set up as a text-only conversation, so that you are testing the conversation itself rather than the quality of anyone's voice synthesis.
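
If it helps to picture the set-up, here is a rough sketch in Python of the blind text arrangement (my own illustration, not anything Turing prescribed; the two reply functions are just placeholders):

    import random

    def machine_reply(prompt):
        # Placeholder: a real entrant would generate its answer here.
        return "That's an interesting question."

    def human_reply(prompt):
        # Placeholder: relay the prompt to the hidden human and return their typed answer.
        return input("(hidden human) " + prompt + " > ")

    def run_session(rounds=5):
        # The judge only ever sees text labelled A and B, never who is behind each label.
        parties = {"A": machine_reply, "B": human_reply}
        if random.random() < 0.5:
            parties = {"A": human_reply, "B": machine_reply}
        for _ in range(rounds):
            prompt = input("Judge, ask both parties something > ")
            for label, respond in parties.items():
                print(label + ":", respond(prompt))
        guess = input("So which one is the machine, A or B? > ").strip().upper()
        print("Right!" if parties.get(guess) is machine_reply else "Fooled - that was the human.")

If the judge can do no better than chance over enough sessions, the machine has 'passed'; whether that demonstrates sentience or just good programming is exactly the question above.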

How do you recognise a moral decision? It is possible to make choices which might be interpreted as moral out of self-interest or even pure logic, depending on the circumstances. If I see you do something, say help someone you don't know at some inconvenience to yourself, perhaps I have seen you do something rooted in a moral sense; or I may have seen you do something because you think it will get you something (a job, a hot boyfriend, whatever). From the outside, there is no way of telling what is going on in your head. Some people will tend to ascribe moral or admirable motives to others' actions, while others will always tend to assume the basest motives; but there is no way of proving either group more right than the other.


