Vericon 2016 GoH Speech

Jan 16, 2017 09:51


Last year I was GoH at Vericon. It was a fabulous time! And while I was there, I gave a GoH speech. It’s the only one I’ve given so far, because usually organizers say something like “you can give a speech, or someone can interview you” and I go “INTERVIEW yes please interview me.”

But for Vericon I gave a speech. I wrote it out very carefully, and printed it out and then marked it all up on the plane, and then I didn’t actually read it, that felt weird, I just kind of talked using the printed speech as an outline. So the text below isn’t exactly what I said that day. But it’s close. And I’m rearranging and reorganizing my office and filing tons of things that need to be filed, and this is one of them, and I’d been meaning to post it, so. Here you go.

There’s a thing that happens when someone criticizes a story or a movie - or a game - in public. You can almost set your watch by it. Somebody is going to turn up to tell you that it’s just a story. You’re overthinking things, it’s just entertainment!

So, maybe I’ve got a stake in saying this - well, definitely I’ve got a stake in saying this, because at this point in my life I make my living telling stories. But stories are important. Stories are how we make sense out of the incredibly noisy and complicated world around us, and how we make sense of what the people around us do. How we make sense of ourselves. I’m convinced that narrative is a basic mode of human thought, and all the stories we hear and read become templates that we can use to understand our lives.

This can be a very positive thing. We can tell ourselves a story about what might happen, if certain other things have already happened, a story that lets us see patterns and predict how those patterns might play out in the future. It lets us anticipate and prepare for things that otherwise might take us by surprise. It helps us create new things. That’s an amazing tool to have. And narratives often hook right into our emotions, emotions that are, I am convinced, a crucial part of our decision-making process. Some decisions are actually very difficult to make without emotions, and in a crisis you don’t want to waste valuable time doing a careful, logical comparison. You want to act fast. Emotions - and the stories that elicit them - are a way to help you do that. So maybe you’ve never met an angry bear. But maybe you know a lot of stories about the tragic and frightening things that happen when people meet angry bears and so when you do meet one you know without having to stop and think it through that you’re in danger. Those stories might not even all be about bears, specifically, but there’s enough similarity between angry bears and angry lions and angry targs that when you actually meet that bear you can make a super quick assessment of the situation.

But there’s a negative side, too. We mostly don’t think of ourselves as reacting to the world based on stories. It mostly just feels like we’re seeing things as they really are. But those narratives aren’t just organizing what we’re seeing, they’re shaping and filtering it, so that what we experience fits the narrative frame we’re using. Sometimes this isn’t a big deal, but sometimes the choice of narrative frame can make the difference between life and death.

So, let’s talk robots and artificial intelligence. There’s been a fair amount of comment recently on the potential dangers of AI. Elon Musk thinks maybe a superintelligent spam filter might set out to kill all humans because that’s the most efficient way to eliminate spam. Stephen Hawking thinks there’s a real danger superintelligent AIs might out-evolve us, which might lead to our extinction. They’re both hoary old science fiction tropes, and Dr Hawking, you’re fabulous at physics, but, dude, that’s not even how evolution works.

Plus, we’re maybe kind of jumping the gun a little. AI is way more impressive than it was even a few years ago, and it’s getting better all the time, but - our new World Go champion notwithstanding - we’re a ways from even basic intelligence, let alone the super kind. It’s cool that OK Google can tell me the weather - usually; there was that time I’d been in Oslo for nearly a week and asked Google what the temperature was outside, and it told me how warm it was in St Louis - and maybe list some restaurants, or sometimes pull up a relevant Wikipedia article. But it can’t do much more than that. It certainly can’t think for itself. Google’s not alone in that - I don’t see Siri and Cortana teaming up to wipe humanity from the Earth any time soon. And if Alexa tries taking over we can just turn on our radios and let NPR tell her what to do.

Or maybe I’m wrong. Maybe Skynet has a dastardly plan to enslave humanity by beating us at Go and/or saying “I’m sorry, I can’t help you with that” over and over. But I think it’s going to be quite a while before we see self-aware AIs smart enough to plot the end of humanity. And honestly I wouldn’t bet on that being something super smart AIs would want.

But we tend to think that if they’re smart, they’ll think like we do. So our ideas about the dangers of AI are inescapably ideas about the dangers of other people. And when it comes to AI, it’s a particular sort of people we’re using as a model.

The very first robot story - the first-ever use of the word “robot,” in fact - is a robot uprising story. But when Karel Čapek wrote RUR he wasn’t worried about artificial intelligence. The robots of his story aren’t mechanical, they’re made of some sort of synthetic biological material. And the word “robot,” which Čapek famously coined, comes from a Czech word for “slave.” It’s a story about the revolt of people made on an assembly line (the first actual assembly line had debuted just ten years earlier). It’s a story about the rebellion of people who were built to be the cheapest, most efficient workers possible, workers you didn’t have to pay, or feed anything in particular, or take any notice or care of. In other words, slaves.

And Čapek’s story hit a nerve. It didn’t just give us the word “robot,” it’s the ultimate model for nearly all the robot uprising stories since. So that model - robots as slaves, with all the assumed dangers attendant on enslaving people who outnumber you - is the model we’re using when we think about super smart machines. This has not been lost on any number of science fiction writers, who have used robot and AI stories to comment explicitly on oppression and racism. But just personally - well, I won’t go into my problems with the whole “slaves in my allegory are machines and the masters are human beings” bit, though that’s kind of icky when you think about it. But on top of that, I think it’s a dangerous model to use as a basis for actual, serious real-world predictions about artificial intelligence.

It’s demonstrably a dangerous model to use for interactions with humans. If you’re white, you probably know what it’s like to go into a majority non-white neighborhood. Possibly you shorthand that as a “bad” neighborhood, or a “dangerous” one. You probably feel intimidated, even threatened. The hostility of the people living there is assumed.

So what’s dangerous about that, besides the danger a white person is in? Well, it’s not the white person who’s in danger. This is not an abstract question for me, I know very precisely what happens when a young white woman on her own is stranded in the middle of the night in a poor, majority black neighborhood: The people there help her. They offer to change her flat tire (I didn’t have a spare), and when she walks half a mile down the road to an all-night gas station they let her into the locked cashier’s cubicle so she can call Triple A (cell phones weren’t really a thing at the time). And while she waits for the tow truck, the people who stop for gas ask, “Miss, are you all right? Do you need some help?” And she says no, and they say “All right, I just wanted to be sure you were okay.” And they buy their gas and go.

And when the tow truck driver comes he says something like “Wow, this is a really dangerous neighborhood, and it’s the middle of the night! You must have been scared. You’re lucky nothing happened to you.” And she says, “Actually, everyone I ran into was pretty helpful.”

Because people generally are. Why should anyone expect differently? And I will admit to you, while I was sitting there outside that gas station, I expected differently, and I was surprised at what actually happened. When really I shouldn’t have been. So why was I?

Because of the narrative. The one about oppressed people who’ll rise up if given half a chance and do us the way we’ve done them.

So, one result of this narrative is that white people who find themselves in majority non-white neighborhoods are often needlessly afraid of the people around them.

Another result? When, say, a young Black woman knocks on a door in the middle of the night looking for help, the White homeowner will assume she’s a threat and shoot her to death. When Black men confront police - or, let’s be entirely frank, sometimes when the police are faced with Black children - the police assume they’re threats and shoot them. This narrative is not harmless. People have died because of it. Lots of people.

And it’s this same model so many people are using to seriously predict the effect of AI on our future. It doesn’t even work right now, for dealing with other people. But that fact is invisible to a lot of people, because we don’t think much about the narratives we use to make sense out of the world. And narratives, they’re sticky. In her Hugo-winning essay “We Have Always Fought,” Kameron Hurley gives the made-up example of the things everyone knows about llamas - they’re scaly, right? And carnivorous? Everyone knows that, it’s common knowledge. And sure, sometimes you run into a llama that’s fuzzy and eats grass - maybe the only llamas you’ve ever met have a distaste for meat and a distinct lack of scales - but those are exceptions! They have to be, because everyone knows what llamas are like.

That’s how unexamined narratives work. Exception after exception won’t change our assumption that the narrative is true. Nothing will change that except our recognizing it as a narrative - not reality itself, but a frame that forces reality into a pattern we’re familiar with.

So all the high-profile alarm about the dangers of AI isn’t a problem just because the predictions people are seriously making might be inaccurate. It’s that when I hear people make these predictions, it’s not only really obvious that they’re using that narrative for a framework, but, more importantly, that they’re unaware of it, and so almost certainly unaware of the way it’s applied to actual, existing human beings. This doesn’t make Elon Musk, or Bill Gates, or Stephen Hawking, or anyone else racists, or terrible people - we are all, to some extent, unable to escape the narratives that surround us, and that we frame our lives with. But it does mean that the narrative gets reinforced, by people who command lots of respect and large audiences.

And I’m not at all saying that writers who use that narrative, or readers who enjoy reading it, are bad people or necessarily doing anything wrong. I’m not into telling writers what they should write, or readers what they should read - I’m actively opposed to that, in fact. Writers should write the stories they want to write, the way they want to write them. Readers should read the stories they want to read. But I do think it’s important for writers and readers both to be aware of the narratives they’re drawing on, and how those narratives might distort our view of the world, and influence our attitudes and choices. Like technology itself, narrative can be applied in ways that are beneficial, or in ways that are not.

And as with technology itself, knowing what the potential effects are is crucial to avoiding negative outcomes. And it’s the thing you don’t see, that you don’t realize exists, that will trip you up. Knowing the narrative is there, you can work with it or work around it, maybe consciously choose a better one for whatever situation you’re in. If you don’t recognize or acknowledge its existence, you will have no choice in the matter. An angry bear right out in the open, one you know is angry, is far less dangerous than the angry bear behind some underbrush in woods that everyone knows are entirely safe and bear-free. Your hike, wherever it takes you, will be the better for your having an actual, accurate idea of where the bears are.

Now, I do think it’s important to consider the possible effects of creating actual AIs. Very few new technologies have been unambiguously good, or implemented as well as they might be, and it’s wise to think ahead and avoid what dangers we can. But let’s take some time to separate the things we’re assuming are true because they’re part of a familiar story that feels realistic to us from the things that are actually true. And let’s maybe consider how the things we fear about AI are literally fears about other people, and the way that addressing those fears directly might actually move us toward some solutions to real problems, and allow us to see real dangers ahead much more clearly.

Thank you.

So, then I took questions, and a person in the audience who was, in fact, an expert with relevant degrees pointed out that there are already problems with AIs that have nothing to do with the Robot Uprising and everything to do with the fact that the data all these neural nets are taking in is not completely neutral and objective, but comes loaded with a host of prejudices and assumptions. We assume that if a computer gives us a result it’s perfectly objective and without any kind of flaw, but even if AI logic is completely objective (not an assumption I think we should make, but let’s make it for the sake of argument), its conclusions won’t be objective if the data it’s working with isn’t. This can range from seemingly small effects - Netflix steering certain people toward certain things, making parts of its inventory effectively invisible to certain groups of viewers - to situations where people don’t even get to see job listings because they don’t fit a certain set of demographic characteristics, or where completely law-abiding citizens end up on lists of people likely to commit violent crime. Because of course the algorithm is using historical data, and we’re going to ignore the way that black citizens have historically been - and still are - disproportionately arrested and convicted for particular crimes, crimes that are also committed by plenty of white citizens who don’t get the same reaction from the justice system.

In these cases, the supposed perfect objectivity of the AI is just reinforcing existing cultural assumptions. But that’s a big ‘just’ and it’s one that has very real, life-altering and life-threatening consequences for quite a few people. So, you know, when you’re worrying about the danger of super-intelligent AIs, maybe add that to your list.
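
As a purely illustrative sketch of that last point: the groups, rates, and tiny “risk model” below are all invented for the example and don’t describe any real system or dataset. The idea is just this - if one group is policed more heavily than another, a perfectly “objective” learner trained on the resulting arrest records will score that group as riskier, even when the two groups behave identically.

```python
# Toy illustration only: two invented groups, "A" and "B", commit a
# hypothetical offense at exactly the same underlying rate, but group B
# is policed twice as heavily, so its offenses are twice as likely to
# show up in the "historical arrest" records the model learns from.

import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                 # identical for both groups
DETECTION_RATE = {"A": 0.2, "B": 0.4}    # group B is over-policed

def make_history(n_per_group=100_000):
    """Simulate biased historical records as (group, arrested) pairs."""
    records = []
    for group in ("A", "B"):
        for _ in range(n_per_group):
            offended = random.random() < TRUE_OFFENSE_RATE
            arrested = offended and random.random() < DETECTION_RATE[group]
            records.append((group, arrested))
    return records

def train_risk_model(records):
    """'Objective' learner: each group's arrest frequency becomes its risk score."""
    totals, arrests = {}, {}
    for group, arrested in records:
        totals[group] = totals.get(group, 0) + 1
        arrests[group] = arrests.get(group, 0) + int(arrested)
    return {group: arrests[group] / totals[group] for group in totals}

print(train_risk_model(make_history()))
# Prints something like {'A': 0.010, 'B': 0.020}: group B looks "twice as
# risky," even though the true offense rates were identical by construction.
```

The learner does nothing “wrong” here - it faithfully summarizes the data it was given - which is exactly the problem the questioner was pointing at.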
