As I write these words, most of my clothes are in the dryer. I waited a couple of days too many, so I had to dig pretty deep in my dresser to find jeans to wear while washing the ones I actually like. I'm wearing a pair of ridiculous skinny hipster-jeans right now, and it occurs to me that pairs of pants are really a lot like aspirations. It's important to have a few, but sometimes they're inconvenient. Sometimes they look really good, but they're not a comfortable fit, no matter how hard you try to get used to them. That's what happened with these hipster jeans; they were a symbol of someone I thought I wanted to be, but -- as usual -- I was wrong. So they spend most of their time in the bottom of the dresser, pulled out only when there's a dire need. Despite this, they still have one thing in common with my other pants, the ones I like: no matter how well they fit or how seldom they're worn, they always wear out eventually.
A few years ago, I discovered a technique for modifying my own personality that works surprisingly well: whenever it's relevant, I mentally recite a song lyric, like a mantra, that captures the state of mind I want to possess. My freshman year at Vassar, the lyric was "I am a festival", which to my mind captured the brash self-confidence of The National's "All the Wine" that I aspired to as a remedy for low self-esteem.

Lately, I've been trying on a new pair of pants and aspiring to be more rational, which has led me to plant most of Against Me!'s "Stop!" in my head.
For the past couple of years, I've had a pretty clear idea of what I want to do with my life: program video games. I'm approximately on that path now, and it's every bit as attractive as it was when I was twelve. Or when I was twenty. I like programming computers and making games; this has not changed. The jeans still fit, but they're starting to look a little frayed around the edges. They might just be accumulating the character and personality that shiny new things often lack. But maybe they're wearing out; maybe I'll walk out the door one morning and people will start laughing at the pink hearts on my underwear. I'm really not sure. Whatever the truth is, I could probably mend them, though mending would be easier in the first case; either way, I haven't figured out whether it's worth the effort.
The thing is, I wouldn't have noticed those frayed edges if I didn't have a couple of shiny new alternatives calling to me. Well, no, that's not strictly true. One of them is really an old pair of jeans that I used to really like but couldn't quite fit into; after a while, they were just too uncomfortable and I couldn't keep wearing them. But ever since I started losing weight, well, they've been awfully tempting. I've been having as many neat ideas as ever, and now, now I can write. I can sit down to a blank page and focus, and then I look up three hours later and the words are there; the right words, even, at least as good as the ones I would have written before. Those pants were always my favorite and they're hardly worn out at all. Now that they fit, it's quite tempting to put them back on and see how much life they have left in them and whether they're still in fashion.
And then there's the slate-grey, serious-business, slightly-intimidating slacks that I've had my eye on. I shit my pants every time I think about existential risk, so I think it's time to dispense with this extended metaphor.
There are a handful of natural disasters that have always stood a small chance of entirely annihilating the human race: meteor strikes, supervolcanoes, and other such rarities. But in the past century, we've begun to discover pieces of technology that we could use to do the job ourselves.
Very smart people have speculated about the possibility that we will destroy ourselves via any one of:
- Global thermonuclear war.
- Genetically engineered diseases.
- Destruction of the planet's ecosystem (via, e.g., global warming).
- Nanotechnology, of both self-replicating (grey goo) and more mundane varieties.
- Unfriendly Artificial General Intelligence, a category that includes every possible AI that ends up indifferent to human values, and most of those that get them even slightly wrong. And given that AIs can self-modify, this is far, far from a trivial problem.
- Many, many more.
Those are just the ones we've thought of so far. There are likely others we have yet to imagine, and they may destroy us before we are aware they threaten us. The problem with every technology on the list above is that each would be an enormous positive if we got it right, and each is incredibly easy to get slightly wrong. And once it has been made, it is too late. By the time a brilliant, selfless scientist who spent his life trying to better mankind notices that he flipped a sign in one of his equations, he may already be in the process of being disassembled and having his component atoms turned into paperclips.
I am scared. No, scratch that. I am terrified. I'm not having nightmares, because this is too serious for nightmares: nightmares go away when you wake up, nightmares are just a runaway imagination acting out. My fear doesn't generate nightmares because I am afraid of something so awful that no human imagination can capture it. I've come to terms with the possibility that I will someday return to the emptiness from which I sprang. It is a terrible fate and I will fight to avoid it, but there are some means that end cannot justify.
Existential threats are not like that. There is nothing, nothing I would not do if it could prevent the extinction of human intelligence; preventing that is worth any price. If I could live a hundred centuries, every waking minute of the rest of my life would be an incredible bargain if I could use them to reduce the odds of a single existential disaster by a tenth of a percent.
I have not always believed this. I cannot truly imagine what the end of six billion lives would be like, and there was a time when this was enough to turn my brain off. Then I learned to shut up and multiply, and the moral obligation that the numbers implied was so heavy that my brain turned off again rather than face it.
My brain did not stay off forever. And as I was rebooting, I found the piece I was missing: an old discussion in which someone had asked Eliezer Yudkowsky what to do about feeling depressed whenever they thought about existential risk, just as I had been feeling. And Yudkowsky replied, "Yeah, shut up and save the world."
And what can I say to that? Who am I to defy the awful numbers? It's become painfully obvious to me that if I can do anything at all to help, I should devote the rest of my life to working to minimize existential risk; given my skills and inclinations, this would probably involve work on Friendly AI, though I'd be very interested in hearing about alternative approaches. This is the third option. But I started writing this post with the intention of asking for advice, and I still need it; things are not as clear-cut as they sound. I'm not sure that I would be helping. If I decide to shut up and save the world, I might make a mistake and make things worse.
I can't ignore that chance. This is too important for self-doubt, too important for self-confidence. If I cannot expect a positive return on my efforts, I would be a monster to ignore the odds and try anyway. But if, on balance, I could make some contribution, however insignificant, I would be a monster not to.
So that's the advice I'm asking for: do you trust me to take that risk? Be honest. This is bigger than my ego. If you don't believe that there are existential risks worth worrying about, try to pretend that you do and answer anyway. And then I'd be interested to hear why, but that's a separate issue.
There are some people who will read all this and think that I am being melodramatic, that I am making a mountain out of a molehill, that this is not the incredibly urgent matter of life and death that I seem to think it is. I will not sugarcoat this: they are wrong. They are relying on their intuition, but this is too big for intuition. We cannot properly imagine what is involved because there has never been an existential event. The fact of our existence precludes it.
If you think that I am exaggerating, then you have not yet realized that your brain can't multiply by eight, much less 6,000,000,000. Until you do, please stay quiet and let the grown-ups talk.
Anyway, my laundry's done. I'm going to have to put on some pants pretty soon, and it's really too bad that wearing two pairs at once is ridiculous.