Meme Party III: Morality and Ethics

Oct 17, 2004 14:42

"An it harm none, do what you will." --Pagan philosophy

"A robot may not harm a human being, or through inaction, allow a human being to come to harm." --Isaac Asimov, The Three Laws of Robotics

Ideas colliding in my head again. Time for another meme party.

The shadowy creatures known as Auditors figure largely in Terry Pratchett's Discworld series. They are the ones who keep the Universe running, who ensure that a dropped custard pie will always go "splat" on the ground. In other words, they enforce the Universe's laws--although the Discworld doesn't have laws so much as guidelines.

Unfortunately, this tendency of the Discworld Universe to be inconsistent gives the Auditors great problems. They'd like their Universe to be like ours(?), one in which everything obeys laws, in which everything is neat and tidy and predictable. More than anything else, they hate Life, a force that makes all their equations go batty because it gives rise to Free Will. They attempt to wipe it out whenever they get a chance.

Auditors have no personalities, and thus they don't understand anything that requires a personality to understand. This includes messy human concepts like Love, Art, or Beauty. An Auditor once disassembled a fine painting, separating out the molecules of paint into piles organized by pigment, searching (in vain) for particles of Beauty.

To Pratchett, it would appear, the Auditors are the very embodiment of Evil.

Residing on the Discworld, within the cozy, rambling confines (if you can use that word) of Unseen University, we find Ponder Stibbons, UU's newest professor. He is a research wizard, someone who seeks to understand the principles of magic. "Seeks" is putting it mildly: it drives him nuts, being unable to understand things. He and his long-haired associates at the High Energy Magic building are always talking about splitting the thaum (the basic particle of magic), behavior that the other wizards see as appallingly disrespectful.

The Discworld's wizards seem to be the closest thing it has to scientists, or perhaps geeks in general. While I relate strongly to Sam Vimes of the City Watch, I also bear a lot of similarity to Ponder Stibbons.

What I find interesting is how Stibbons and the Auditors take a similar approach to understanding: They disassemble, looking for the basic elements. This, of course, is what scientists do in our Universe, and it has yielded all kinds of success. The computer I'm using to write this is the product of this same reductionist approach to analysis.

A week or so ago, I had an engrossing conversation on morality and ethics with zaratyst. She bemoaned the fact that I, and other geeks in her life, tend to discuss edge cases--extreme examples, such as genocide--in order to better clarify underlying moral principles. She seemed to be saying that we were behaving like the Auditors, attempting to make the Universe follow black-and-white rules, when morality is never like that: it is all shades of gray.

I'm not just a geek. I'm a computer programmer. I was programming computers when I was six.

Programming computers is the process of conveying an exact understanding of some subject to a machine that knows nothin' about nothin'. To do this, you first have to achieve that precise and complete understanding of the subject in question, down to the fine level of bits and bytes. This is a process of analysis, reductionist by its very nature. I used to marvel that we could have a field called "computer science". If we built the damn things, shouldn't we know everything there is to know about them? Astonishingly, the answer is "no": we don't yet know how to explain concepts to a machine well enough for it to do the work we ask of it. And so, we continue to analyze, breaking concepts apart into smaller and smaller concepts, while simultaneously looking for patterns that unite them. We struggle to understand how we think.
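
Here's a toy illustration of what I mean (my own, in Python, invented for this post): even a concept as humble as "count the words in a sentence" forces exact decisions no human consciously makes.

    import re

    def count_words(text):
        # A machine needs an exact rule for "word". Is "don't" one word
        # or two? Is "e-mail"? Does "42" count? Every choice below is a
        # decision the programmer makes; the machine makes none.
        words = re.findall(r"[A-Za-z]+(?:['-][A-Za-z]+)*", text)
        return len(words)

    print(count_words("Don't panic -- it's only e-mail."))  # 5, by these rules

Change the rule, change the count. The machine doesn't care which rule you pick; it just demands that you pick one.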

This is what I was born to do, and I think that's because my thinking style lends itself to this sort of analysis--like Ponder Stibbons. It may be no accident that Stibbons is the primary inventor of the Discworld's only known computer, Hex.

My conversation with Zara, which drove her to distraction, was symptomatic of that. She'll tell me something, and I'll immediately attempt to dissect it, to pull it apart, to search for the exceptions and the flaws and the edge cases and the underlying themes. "Yes, but what if..."

Example: There's that word, "harm". When I hear a word like that, I want a precise definition, and Zara will undoubtedly tell you there isn't one. There's a vague definition, and people have to decide on an individual basis what harm is and what it isn't. Asimov's robots were programmed to be unable to commit harm, or to allow harm to be committed if they could prevent it. Yet nowhere do I remember Asimov defining "harm" precisely. I doubt he could have done so.

What is harmful? Murdering someone in cold blood is clearly harmful. Handing someone a chicken leg is clearly not harmful. And yet... even these cases are not 100% clear. What if the person you're murdering was about to blow up a building? What if the chicken leg is poisoned? (And does it matter whether you know it's poisoned?) These sorts of questions can keep me going indefinitely. For almost any case you can invent, I can think up a Devil's Advocate case to put forth.
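
In fact, if I tried to write a definition of "harm" down as code, the patching would never end. A sketch in Python (the act names and context flags are invented, of course; no such function could ever be finished):

    def is_harmful(kind, **context):
        # A first draft of an exact rule for "harm"... which immediately
        # needs patching, and each patch needs a patch of its own.
        if kind == "murder":
            if context.get("victim_about_to_blow_up_building"):
                return "unclear"    # exception #1: preemptive defense?
            return True
        if kind == "hand_someone_a_chicken_leg":
            if context.get("leg_is_poisoned"):
                # exception #2: does it matter whether you knew?
                return True if context.get("you_knew") else "unclear"
            return False
        return "unclear"    # every other act: more context required

    print(is_harmful("murder"))                                         # True
    print(is_harmful("murder", victim_about_to_blow_up_building=True))  # unclear
    print(is_harmful("hand_someone_a_chicken_leg"))                     # False

Every "unclear" is a place where a human has to step in with context.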

Suppose you were called upon to rank various acts in order from most harmful to least. You might get something like this:

  • Destroying the Universe
  • Wiping out all life on Earth
  • Lying to start a war
  • Torturing someone for fun
  • Killing someone so you can take his jacket
  • Stealing an election
  • Releasing insufficiently tested genetically modified organisms
  • Burning down a forest for fun
  • Stealing money from a poor family
  • Injecting someone with an addictive drug
  • Blackmailing someone
  • Torturing someone to get him to tell you where the bomb was hidden
  • Selling someone an addictive drug
  • Cutting taxes on the rich, in a time of deficit
  • Knocking down a forest so you can put up houses
  • Stealing money from a rich organization
  • Cutting taxes on all citizens equally, in a time of deficit
  • Using a pesticide on your crops
  • Lying to avoid paying taxes
  • Telling someone how to make an addictive drug
  • Driving your gas-guzzler to work
  • Driving your electric car to work, charged up using electricity from a coal-burning power plant
  • Distributing food to those who don't really need it, so they become dependent
  • Trying to convert someone to your own religion or moral code
  • Advising someone to vote a certain way
  • Discussing your own religion or moral code with someone who disagrees
  • Lying to spare someone's feelings
  • Eating a meal containing meat
  • Distributing food to those who are hungry
  • Writing a letter on paper (woe to the trees)
  • Eating a vegetarian meal
  • Eating a vegan meal
  • Advising someone to register to vote
  • Thinking
  • Doing nothing

If you presented this same list to many people and asked them all to sort it, I doubt you'd get the same answer twice. In fact, if you asked me to sort it again, I doubt I'd sort it the same way. There is so much context involved, and so many conflicting moral principles, that the decisions become nearly impossible to make.
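
One way to see why: any ranking amounts to scoring each act against a set of weighted moral principles, and the weights are the context. A little Python sketch (two invented acts, two invented dimensions, numbers pulled out of the air) shows how shifting the weights flips the order:

    # Two invented acts scored (0-10, numbers made up) on two invented
    # moral dimensions. The weights stand in for "context".
    acts = {
        "burning down a forest for fun": {"suffering": 3, "environment": 9},
        "blackmailing someone":          {"suffering": 7, "environment": 0},
    }

    def rank(weights):
        # Rank acts from most to least harmful under a given weighting.
        def score(dims):
            return sum(weights[d] * v for d, v in dims.items())
        return sorted(acts, key=lambda a: -score(acts[a]))

    print(rank({"suffering": 1.0, "environment": 1.0}))  # forest first (12 vs 7)
    print(rank({"suffering": 1.0, "environment": 0.1}))  # blackmail first (7 vs 3.9)

Nudge the weights, which is to say the context, and the ranking quietly reverses.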

And yet we must. We make decisions like this all the time.

One thing Zara mentioned that I found interesting is that social workers don't get to choose their own moral code. They must adopt a shared one, the code decided upon by the organization as a whole, and they must follow certain steps to decide what the right course of action is. For example, if someone appears to be in mortal danger, that must be the first consideration in deciding what to do. Essentially, it's a moral algorithm, but it's far too vague to be executed by a computer. A human must carry it out, because every decision must be informed by a huge amount of context.
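
Sketched as code (my own extrapolation from Zara's description; the predicate names are hypothetical), the ladder itself is trivial. It's the predicates that no machine can evaluate:

    # Hypothetical predicates, sketched from Zara's description. A trained
    # human evaluates these; a machine can only be handed the answers.
    def in_mortal_danger(situation):        return situation.get("mortal_danger", False)
    def at_risk_of_serious_harm(situation): return situation.get("serious_risk", False)
    def needs_support(situation):           return situation.get("needs_support", False)

    def decide(situation):
        # The algorithm itself is a trivial priority ladder.
        if in_mortal_danger(situation):         # first consideration, always
            return "intervene immediately"
        if at_risk_of_serious_harm(situation):
            return "escalate"
        if needs_support(situation):
            return "arrange services"
        return "monitor"

    # The hard part isn't the ladder; it's filling in the situation honestly.
    print(decide({"mortal_danger": True}))  # intervene immediately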

In the political world, elections decide who will be our new leaders. These leaders vote for or against laws according to their own conscience, their own moral code. When we vote, we are voting for a certain morality, a certain definition of terms like "harm", "good", and "evil". Is it more important to preserve the lives of Americans, or to be honest in our dealings with other countries? We don't have an agreed-upon moral code. We disagree, and elections are where these disagreements come to a head.

The world is full of different viewpoints. These viewpoints are at war. If you think of a viewpoint as a memeplex, then it is "survival of the fittest moral code". If I want my own viewpoint to succeed, I need to do what I can to promote it. To do that, I have to understand it, top to bottom, as well as why other viewpoints are inferior. Frankly, my own views are still being formed; I hold certain opinions strongly, but am still searching for The Way. I doubt I can alter my analytical approach, because that's how I'm wired up. I doubt I can stop thinking about it, because like Ponder Stibbons, I'm driven. Even if I could, I wouldn't want to. It's simply too important a duty to ignore.

I think this is OK, as long as I don't end up like the Auditors.

I don't know anyone who has expressed both the promise and the danger of the place where science touches morality better than Jacob Bronowski. A few bits of his wisdom:

"There is no absolute knowledge. And those who claim it, whether they are scientists or dogmatists, open the door to tragedy. All information is imperfect. We have to treat it with humility. That is the human condition; and that is what quantum physics says. I mean that literally."

"No science is immune to the infection of politics and the corruption of power."

"We have to close the distance between the push-button order and the human act. We have to touch people."

psychology, science, ethics, culture, meme party, programming, geek, politics
