Dec Talking Meme Dec 27 - Artificial Intelligence

Dec 27, 2013 17:47

blindmapmaker asked: Robots/AI or any more or less mechanical/digital life-form. Is SF better off with using them a lot or better without them? (The Culture vs. Vorkosigan for example).

I love this question!

I like AI, yet at the same time I'm often dissatisfied with the way it is treated in science-fiction. Usually, artificial intelligences (be they robots, spaceships, digital lifeforms, cyborgs, holograms or whatever) in science-fiction fit into one of these categories, all of which could just as easily apply to aliens:

1) Monsters - AI is often portrayed as unemotional and therefore ruthless/evil. Quite often they're also inimical to biological life. Examples would be the machines in Terminator or Matrix, the Borg in Star Trek (the Borg are cyborgs, but their collective consciousness is only achieved via their machine parts, so they qualify at least partly as AI) or the Cylons in the original Battlestar Galactica. If they're just plain evil, this is usually boring, but sometimes it works because artificial intelligence can be very alien and therefore uncanny, and its motivations aren't necessarily the same as those of human villains - a good example would be "Dark Star" in which the sentient bomb explodes not because it is evil but because it believes that this is the only way it can prove its existence.

2) People - the majority of all AI characters probably falls into this category. From a storytelling perspective, these artificial lifeforms are really just people with unusual personalities or abilities. Examples would be Data or the holographic Doctor from Star Trek, ORAC from Blake's 7, JARVIS in the Iron Man films, K9 in Doctor Who, the human cylons in the new Battlestar Galactica, Cameron in Terminator: The Sarah Connor Chronicles etc. These characters all have personalities that could just as easily belong to a human or a biological alien, although in some cases, there are some uncertainties - are the Voyager's EMH or ORAC sentient or just very good simulations (both of them would pass the Turing test easily, but I don't think that really proves anything)? Can Data or Cameron experience feelings the way humans do? What makes these characters interesting is on the one hand their struggle to become fully formed individuals but on the other hand also their potential to grow beyond human limits. They can access and store more information than a human could in an entire lifetime, and they can process it much more quickly. Sometimes, physical embodiment isn't as important to them as it is to biological lifeforms, or they are capable of changing their bodies almost indefinitely. AI is almost invariably not mortal. Detractors of these characters usually complain that they're too powerful or versatile (and it is true that once you've got a Data or an ORAC, you hardly need humans anymore - which is one of the main points of the Culture series, in which humans would be completely superfluous except the AI characters gladly indulge them and consider them worthwhile companions) but I love it when these possibilities are fully explored (such as in Voyager's "Renaissance Man" or "Tinker Tenor Doctor Spy"). What annoys me a great deal is when AI characters believe that their ONLY purpose in life is to become more human - why would they, when that is only one of many options?

3) Gods - once you take the potential to its logical conclusion, AI can and does become godlike (usually not an infallible God as in monotheism but at least divinely powerful as in polytheism). In some cases, their power is limited to a certain realm, such as in Shadowrun or Neuromancer, where the AIs exist in the Matrix and can influence the outside world only indirectly. In other cases it isn't. Many sci-fi fans (and also some roleplayers I've met) complain that once you get godlike characters, a universe becomes boring and everyone who isn't godlike could just as well quit. I think that's silly - but then I love mythology and fantasy. There's something very appealing (in a wish-fulfillment fantasy kind of way) about the idea of benevolent or mysterious super beings. I do, however, prefer stories in which these godlike beings do not interfere with lesser beings to the point where they cease to have free will.

As for the second part of the question: the Vorkosigan novels and the Culture novels really are quite similar in many ways, aren't they? Banks and Bujold are both excellent when it comes to gender equality and exploring the possibilities future technology offers in that area, and both of them love to play with the contrast between very liberal, highly advanced societies (Beta Colony and the Culture) and sort of primitive, feudal/classist/capitalist societies. Cordelia's Honor, the first Vorkosigan novel, is what Use of Weapons would be if Use of Weapons were about Diziet Sma falling in love with Zakalwe and deciding to spend the rest of her life with him on some barbaric non-Culture planet - of course Zakalwe is a thousand times more fucked-up than Aral Vorkosigan, and the Culture is way cooler than Beta Colony, so it's not surprising that Diziet doesn't do this... and also I think it'd be a massive abuse of power if Diziet entered into a relationship with Zakalwe, on top of all the other shit the Culture does to him: and indeed, it's the one thing she doesn't do. Anyway, back to AI!

I really think science-fiction without artificial intelligences needs to explain WHY there is no AI, because it seems to me that AI is one of the most likely technological developments once you've got computation technology. I don't think the Vorkosigan series ever explains why there are no artificial lifeforms. Probably Bujold, who doesn't do sentient aliens either, decided that she didn't need aliens or AI for category 1 and category 2 (you can have weird and evil humans just as easily, and indeed some of the human civilizations in the Vorkosigan universe would definitely be aliens if this were Star Trek - for example the Cetagandans) and that she didn't want category 3 beings in her universe. That's the outside explanation, but there's no good in-universe explanation (unless I've forgotten it!). Some of the early Vorkosigan novels are more fun than the Culture novels, because Banks's human protagonists (especially in "Consider Phlebas"!) are much less engaging than Bujold's characters, but otoh the Vorkosigan novels get successively less good (Captain Vorpatril's Alliance was AWFUL) whereas Banks actually improves or stays the same in terms of quality.

As I said, I like both category 2 and 3 AI characters, so I have no problem with the Culture series. If you're not familiar with the Culture series, it's basically the Star Trek universe if the Federation managed to survive for another few millennia, formed a galactic civilisation of humanoid races and also developed their AI until it reached a very benevolent category 3 (which might not happen in the Star Trek universe, since they have a habit of treating their AI very badly unless someone does a "Measure of a Man"). The Culture is a socialist-liberal post-scarcity utopia. Basically, everything went perfectly for this civilisation and they're as happy and powerful as can be, and it's mostly due to the fact that the Minds, their godlike AIs, organize most things. There are also category 2 AIs, the drones, but I find them rather boring because they're usually just quirky people with a (non-anthropomorphic) robot body.

What makes the Culture novels interesting is a) it's a functioning and well-described utopia (world building makes up 60 % or more of Banks's narration) that doesn't shy away from exploring potential weak points and b) the Culture obviously comes into conflict with other civilisations, usually because the Culture believes in the Prime Directive about as much as your average Starfleet Captain, i.e. when it suits them, which it rarely does and c) it's never quite clear to what extent the Minds and the humans really are on equal footing. Outsiders usually believe that the Minds rule the Culture and the humans are little more than indulged pets, and the series implies that this is to some extent true. Which isn't that different from the situation in, say, Greek or Hindu or Norse Mythology, so obviously I enjoy it a great deal.

Fascinatingly, in some of the novels (Excession, Look to Windward and Surface Detail, which are among my favorites), it is implied that one of the very few remaining taboos in the Culture, which is otherwise extremely liberal when it comes to personal life choices (as in: they're a society of Captain Jack Harknesses), is Minds and humans getting too close to each other. The worst insult one can direct at a Mind is "Meatfucker" (humans also use this as a swear word, but it doesn't carry the same weight as it does when Minds use it) - it's not a coincidence that it sounds like motherfucker, because this is the Culture version of the incest taboo (although presumably they do have an actual incest taboo as well). In Look to Windward, the whole plot hinges on the fact that Minds do not EVER use their powers to invade human thoughts, even though they are for all intents and purposes telepathic and do use these powers to communicate with each other - but not with humans. In Excession, there's a ship, the Grey Area, which has been re-named "Meatfucker" by its fellow Minds and is ostracised because it uses its telepathy to uncover war crimes committed by less-developed civilisations - and it also kills the perpetrators. In Surface Detail, finally, the ship Falling Outside Normal Constraints is called meatfucker by its scandalized peers because it uses these powers to turn human bodies into its avatars (and it's more or less explicitly stated that it has had sex with humans while using such a "meat" avatar). No reason is ever given in the series as to why this taboo exists - presumably the reason the Minds would give if asked is that "meatfucking" constitutes an abuse of power, but if I remember "Totem and Taboo" correctly, Freud says that one of the reasons taboos have an almost magical power is that they usually concern our oldest and strongest desires.
It would make A LOT of sense in the Culture series if the Minds had a deep-seated desire for an (impossible) union with humans, because that would explain why they stick around at all.

As a side thought: there are a few AI-related plots that Banks never did in any of his novels and that I would have LOVED to read. We never got a story in which the Culture interferes with a lower-level civilisation that enslaves its AI (that'd also be a cool plot for a Culture/Star Trek crossover) or a lower-level civilisation in which artificial lifeforms are at war with their biological creators (crossover with BSG!) or a story in which an outside AI encounters the Culture's Minds and asks: why the hell are you so fascinated by these meatbags?! Plus there's the question of how it all ends - most of the higher-level civilisations in the Culture universe eventually "sublime" (that is, they link all their minds together and become gods, sort of like the Q Continuum), but it appears that only biological lifeforms do this. So what happens when the human side of the Culture sublimes?

And because I'm watching Blake's 7 at the moment and spoilers tell me that ORAC survives the show until the end: he'd be a prime candidate for joining the Culture, because he's basically a Culture Mind already (they're mostly benevolent, but that doesn't mean they're nice) and he deserves to be treated a little better. Or he could just go and meet the holographic Doctor and they could bitch about being switched off and on whenever the humans please...