Okay, seriously, enough politics.
I loves me some Babylon 5, and I'm enjoying Mass Effect and its sequel, and my feelings on Dr. Who are, uh, a matter of public record... but it's very satisfying, once in a while, to read some really hard-science science fiction.
Which brings me to Peter Watts, and particularly his novel Blindsight (which he offers free online here). Watts is better known for his Rifters trilogy (and, as amberdulen noted the other day, "The Things"), but I've been less satisfied with what I've read of that. Blindsight, on the other hand... this is my interests in novel form.
The premise ("Weird crew in the future makes first contact!") sounds generic enough, but what makes the story interesting throughout is its focus on some fundamental questions of artificial intelligence, consciousness, and philosophy - and the book assumes either a familiarity with the subject already or a willingness to Google. The characters encounter a functional Chinese Room, which makes perfect conversation without any sense of meaning - sort of a chatbot taken to its ultimate level, and one of the persistent metaphors of AI.
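It's worth seeing how little machinery a Chinese Room actually needs. Here's a minimal sketch in Python - a pure lookup table of canned replies, with every phrase invented for illustration - that holds up its end of a (very short) conversation without anything inside it meaning anything:

```python
# A toy Chinese Room: fluent replies by pure symbol-matching, zero understanding.
# The rulebook below is invented for illustration; a "real" one would be vast.
RULEBOOK = {
    "hello": "Hello! How are you today?",
    "how are you": "Doing well, thanks for asking.",
    "what is consciousness": "A fine question. What do you think it is?",
}

def chinese_room(message: str) -> str:
    """Look the input up in the rulebook; deflect if there's no matching rule."""
    key = message.lower().strip("?!. ")
    return RULEBOOK.get(key, "Tell me more about that.")

print(chinese_room("Hello?"))  # a perfectly sociable answer, no mind required
```

Scale the rulebook up far enough and you get Watts's shipboard conversationalist; the principle doesn't change, only the size of the table.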
Elsewhere, the captain needs to compare a set of multidimensional data points. It turns out that comparing measurements in five dimensions at once is something we're really bad at: there's just no way to plot that on a graph so that we can make sense of it. But what we can do, and what our brains are optimized to do really well, is identify faces and tell how one differs from another. So the captain employs what is, again, a real-world technique of data visualization (Chernoff faces): he displays the data as an array of faces, with the eyebrows spaced to represent one variable, the openness of the eyes mapping another, the width of the face a third, and so on. This is just a throwaway scene, no more than a page long - but it's the sort of thing that an augmented-intelligence future might actually try to do.
(The faces are all screaming, which is not typical. On the other hand, the captain is a vampire, so this makes a certain amount of sense.)
(In context, anyway. It's a very weird crew.)
The twist is that the aliens are after us because of consciousness: particularly, because they don't have any. That wouldn't be so bad, but it turns out consciousness is parasitic: a bad evolutionary deal, frittering away resources on emotions and introspection that could be devoted to other purposes.
The aliens have been devoting it to other purposes for a long time, and they're very, very smart - with the result being some of the weirdest Starfish Aliens I've seen. For all that, though, they can't make sense of all our qualia-driven messages - how can they, when these things serve exactly no purpose? - so clearly, these are an attack, an attempt to drain their own resources. The destruction of humanity will be self-defense.
What makes this interesting is that Watts does a very good job of addressing some of the major arguments on consciousness. One of Watts's favorite targets is the idea that life on earth has evolved along optimal paths; much of his work includes the suggestion that all we've found is a point of local optimization, good enough to dominate our local neighborhood (i.e., earth) but drastically suboptimal overall. That's one major argument that consciousness has to be good for something - it wouldn't have evolved, if it wasn't! - and Watts demolishes it mercilessly.
The problem is that Watts does acknowledge a difference between his aliens and humanity. These aren't "zombies" in the David Chalmers sense of the word, who perform all the same tasks as a conscious mind but without the consciousness. They're fundamentally different, working off something like an internal Chinese Room: a perfect pattern-matcher that identifies and responds to every situation with the ideal counter selected by its nigh-infinite decision trees. No effort wasted on understanding or pondering - just very, very advanced stimulus-response.
And that's precisely the issue, because Watts would have us believe that this is the ideal form of intelligence, beating all comers: perfect pattern-matching. Indeed, it seems to have humanity beat hands-down; for every situation, the aliens have an immediate, instinctive counter. They are indisputably more fit.
Now. But what about before?
See, if we accept evolution for the moment, nobody comes in armed with a perfect pattern-matcher. Nobody evolves already possessed of an exhaustive database of "IF/THEN" options that covers everything up to and including first contact with alien races in interplanetary starships. That has to be learned - earned - built, and the only reliable way to build a pattern-matcher is to let it fail until it starts to recognize the common elements in its few successes.
Note that after fifty years of deliberately trying to build pattern-matchers for even some very simple patterns, we're still really bad at it. Guided development isn't producing very good pattern-matchers even for, say, playing Go. How are they going to do at interplanetary travel?
Want to start a fire? Trial and error until something happens to work. Want to realize fire is useful? Exhaustive trial-and-error interactions with fire until you hit on a common theme. Realizing what counts as "useful" is even worse, because you don't get the luxury of starting with something like pain and pleasure - you don't get those, because those are qualia, and you aren't conscious. No, you've got something like an internal bean-counter, and every action you take either adds or subtracts beans - and by chance, some iteration of you will tend to do things that add beans and avoid those that subtract them. A million iterations later, and you've got a handful of basic actions that happen to add the right number of beans. A few thousand more, and you breed out those who add the wrong number of beans for interacting with fire. Repeat as necessary for every single innovation your race ever has.
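The bean-counter loop is, at bottom, selection on a fitness score, and it's small enough to sketch. Here's a toy version in Python (stimuli, actions, and bean values all invented for illustration): a population of random stimulus-response tables, scored in beans, with the low scorers bred out each generation:

```python
import random

STIMULI = ["see_fire", "cold_night"]
ACTIONS = ["touch_it", "keep_distance", "warm_up_nearby"]

# The internal bean-counter: beans gained or lost per (stimulus, action) pair.
BEANS = {
    ("see_fire", "touch_it"): -10, ("see_fire", "keep_distance"): 1,
    ("see_fire", "warm_up_nearby"): 3,
    ("cold_night", "touch_it"): -10, ("cold_night", "keep_distance"): -2,
    ("cold_night", "warm_up_nearby"): 5,
}

def score(table):
    """Total beans earned by one stimulus-response table."""
    return sum(BEANS[(s, table[s])] for s in STIMULI)

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start from random tables: nobody arrives with the right answers.
    pop = [{s: rng.choice(ACTIONS) for s in STIMULI} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]   # the wrong bean-counts die off
        children = []
        for parent in survivors:
            child = dict(parent)           # offspring copy the table...
            child[rng.choice(STIMULI)] = rng.choice(ACTIONS)  # ...with one tweak
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

best = evolve()
print(best)  # the fire-respecting table wins, never once "understanding" fire
```

Note what the loop costs: generations of dead iterations per rule, and every genuinely new situation starts the bill over from scratch.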
What's notable here is that, at any point prior to the all-powerful Ultimate Pattern Matcher, consciousness beats your approach all hollow, because the conscious guy gets to innovate sensibly, without having to match a pattern. He doesn't have to lick the fire to see if that's a bad idea, and lose a generation breeding that behavior out. Further, if he can find any gaps in your decision tree, he can exploit them - deliberately trip the wrong rule to his own benefit. (Even the present-day aliens have had this happen accidentally, thus provoking the plot.) He can do this because he can comprehend the structure of your organization, if not its scope; you, the unconscious pattern-matcher, can extrapolate his behavior but never the underlying process.
Consciousness wins, because consciousness is good - but it's a credit to Watts's writing that he makes the alternative seem so tempting.