A Science Fiction Look at Ethics: Brainstorming Ideas

Mar 25, 2014 10:03

Recycle Alert: This originally appeared in my Science Fiction Adventure Zine about a year and a half ago.

This is a set of questions about science fiction and ethics that popped into my head. Most of them aren’t original to me, but I think they’re worth discussing, and I hope I’ve added a few twists.

1) If you could go back in time and save an animal species, with no repercussions except the survival of that species, how far would you go to do it? Let’s say you could go back and kill one person, and that death would be the tipping point that lets passenger pigeons survive. In reality that would have a host of cascading impacts, and it’s quite likely that most of us would never be born, but let’s say you could do it with no further changes. One person dies. One species lives. Would you kill the person? What if it took killing half a dozen people? A hundred? What if it were a more appealing or scientifically valuable species--say Tasmanian wolves, or Madagascar’s recently extinct, ape- and monkey-like lemurs? Would that change your thinking? What if the species would be economically valuable once it was resurrected?

2) What if we’re looking into the future instead of the past? What if the choice were one existing species versus one human or group of humans? How many human lives would the continued existence of the giant panda be worth? What about chimpanzees, gorillas, orangs, or bonobos? How about the continued existence of a peculiar type of field mouse, or of a lizard or an insect? What if you knew that if one person lived, one of those species was doomed? What if the choice was between a thousand people dying and wild chimpanzees going extinct? What if the choice was between those people and one subspecies of chimps rather than the whole species? What if the key person or people had no direct connection to the species--they didn’t hunt them, didn’t build a dam that destroyed their habitat, had no idea that anything they did was harmful? What if the issue was an entire unique habitat, like New Guinea’s unique rain forest marsupials, or Madagascar’s lemurs, found nowhere else on Earth?

3) Let’s go yet another level. Humans have been responsible for the extinction of quite a few species, including some sapient ones if you count Neanderthals and Denisovans as separate species. Let’s say technology advances enough that we can rebuild some of those species. Start with mammoths or Tasmanian wolves; we have considerable DNA from both. What ethical obligation, if any, do we have to rebuild them if it becomes possible? Now let’s move up a level. We have a reasonably complete Neanderthal genome. We could probably rebuild them. What are the ethics of doing that? There are a variety of models for how intelligent Neanderthals were. What if their intelligence overlapped ours, but their average was clearly and obviously lower? How would we cope with a population we recreated that couldn’t compete in our society? What if we remove the part about recreating them? What if Neanderthals had survived naturally somewhere? What obligations would we have then?

4) Let’s go back to the species extinction issue and kick that question up a level. Let’s say we have a practical way to terraform Mars. Then we discover primitive life there--maybe the equivalent of Earth bacteria, hidden deep within the planet. They’ll die if we go forward with the terraforming. Do we go ahead? Are bacteria enough for us to declare the planet off-limits?

What if the Mars bacteria died out millions of years ago, but their remains contain enough DNA that we could revive them and reseed them in pockets under current Mars conditions? What if they are technically alive but dormant, unable to become active and survive in current or foreseeable Mars conditions? What if they can’t become active in current Mars conditions but could millions of years in the future? What obligations, if any, would we have in those circumstances?

5) If we reach the stars, the permutations of those choices get even more interesting. What if we find a world we could terraform, but it has an existing ecology that includes the local equivalent of Earth insects? No sapient life. No prospect of sapient life. Terraformable planets are scarce; maybe this is the only one we can reach. Earth life is vulnerable to catastrophe, and we have the power to create a backup in case something destroys life on Earth, but we have to destroy the primitive existing ecology to do it. Is that okay? What if we raise the stakes? The planet has animals equivalent to mammals, but none of them are more intelligent than an opossum. Raise the bar: the smartest animal is now equivalent to a dog. Higher still: equivalent to a chimp. Where does terraforming stop being okay?

Let’s complicate that at the other end. Instead of some vague future threat to Earth, suppose something is happening that directly and immediately threatens Earth life. How big does the threat have to be to justify destroying the other ecology? A one percent chance of disaster? Fifty percent? Does it matter how advanced the animals in the other ecology are?

6) Take that thought even further. What if the planet we want to terraform had a sapient species at one time, but it was wiped out by a supernova or some other planet-wide catastrophe, along with the rest of the planetary ecology? The planet is wide open for terraforming. Sounds straightforward enough, but what if enough DNA, or the local equivalent, survives that we could rebuild the ecology if we wanted to? Take that a step further. Maybe the locals saw the catastrophe coming and built vaults to preserve genetic material and reseed the planet. Their mechanism for reseeding failed, and we know how it failed. We could fix it if we wanted to. Who has the right to the planet? The now-extinct owners, who would never revive without our help? Us? What if we don’t find the vaults until after we’ve terraformed? Maybe generations of humans have built their homes on the new planet. Millions of Earth animals now live there, including species endangered or even extinct back on Earth. Then we find the vaults. Do we have any obligation to the previous owners?

Let’s add another complication. What if the vaults didn’t fail, but are set to open and recreate the ecology of the planet’s previous owners? They just haven’t gotten around to it yet. Maybe the builders left a safety margin between the time they calculated the planet would be habitable again and the time they would repopulate it. We arrive during that margin and terraform. The planet is now uninhabitable for the previous owners. We find the vaults. What do we do? What are our moral obligations?

7) Another variation: what if there is no sapient species on the planet, but sapients from off-planet created vaults to preserve and reseed it? Again, we arrive at an uninhabited planet, terraform, and discover the existence of a previous ecology after we’ve colonized. There are dozens of permutations to this, and as many moral issues.

8) Let’s turn that situation around. A nearby supernova wipes Earth clean of surface organisms, but it doesn’t destroy hyper-secure seed banks and storage for animal genetic material. Enough DNA survives to recreate Earth’s ecology--and us. Would visiting aliens have any obligation to do so, or would it be ethically okay for them to use the planet as they desired?

9) Let’s say we have a Star Trek-style Prime Directive. How far down the intellectual scale should it go? Is it okay to mine something valuable on a world with chimp-level animals? What about something higher up the intellectual scale? Where do we draw the line?

10) Going back to the terraformable planet idea: let’s say we get to a terraformable planet and life has been wiped out there, but the previous owners uploaded their essence into advanced computers. Is it okay to terraform their planet out from under them? What if they’re all dead but some of their machines still function? How intelligent would those machines have to be before we needed to consider their rights when colonizing the planet?

11) Shifting to machine intelligence: how intelligent does a machine have to be before its actions are its own responsibility, not that of the people who built it? If a machine can modify its programming and exhibits something approaching artificial intelligence, with a wide degree of autonomy, when does the machine become liable for injuring someone, rather than its builder or owner? Who is responsible if a manufacturer sets up a machine with parameters that someone then manipulates, accidentally or on purpose, so that it kills someone? More autonomous machines give more opportunities for that kind of manipulation.

12) Asimov’s Laws of Robotics give a framework for ethical intelligent machine/human relationships, but as the progression of the stories shows, there are loopholes. More importantly, how could we get from where we are now to those Laws being universally implemented? Is there a realistic path to every machine above a certain level of autonomy being programmed with that set of rules? How would you keep someone from building machines that function outside the rules, or from modifying existing machines so they no longer follow them? Which is more likely, the Laws of Robotics or Skynet?
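For the programmers out there, here’s one way to see why the loopholes are baked in. This is a minimal toy sketch, mine rather than anything from Asimov’s stories: it treats the Three Laws as a priority-ordered check on a proposed action, with the hypothetical predicates harms_human, disobeys_order, and endangers_self standing in for judgments the stories show to be anything but simple.

    # A toy sketch (illustrative only, not from Asimov) of the Three Laws as a
    # priority-ordered check on a proposed action. The three predicates are
    # hypothetical stand-ins; defining them precisely is the loophole-prone part.
    def permitted(action, harms_human, disobeys_order, endangers_self):
        """Return True if 'action' passes the Three Laws, checked in priority order."""
        if harms_human(action):
            return False  # First Law check: the action must not harm a human
        if disobeys_order(action):
            return False  # Second Law check: the action must not disobey a human order
        if endangers_self(action):
            return False  # Third Law check: the action must not endanger the robot
        return True

    # Example: a robot ordered to unplug a life-support machine.
    print(permitted(
        "unplug life support",
        harms_human=lambda a: "life support" in a,  # crude, underspecified notion of "harm"
        disobeys_order=lambda a: False,             # the order itself came from a human
        endangers_self=lambda a: False,
    ))
    # Prints False: the First Law outranks the order. A sloppier harms_human()
    # would have let the action through--the loophole problem in miniature.

The hard part isn’t the priority ordering, which is trivial; it’s that every interesting case in the stories turns on what counts as "harm" or a valid "order," and no simple predicate captures that.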

science fiction, ethics
