Note: I wrote this in an e-mail a while ago, near the beginning of my rants on these things. So the context may be a little weird. I wanted to put it up here though since I'm posting everything.
--
Let's start by talking about the framework I want to use, since I'd like to reference it in basically everything I discuss. This framework comes from a report done by Lieutenant Colonel Artur Kuptel and Andrew Williams at MCDC NATO-ACT, available here:
http://innovationhub-act.org/sites/default/files/u4/Policy%20Guidance%20-%20Autonomy%20in%20Defence%20Systems%20MCDC%202013-2014.pdf The report is an excellent discussion of the issues surrounding the use of autonomous weapons systems, but I will focus on only a few small subsections. First, it does a great job of defining autonomy, as opposed to automation.
> Autonomous functioning refers to the ability of a system, platform, or software, to complete a task without human intervention, using behaviours resulting from the interaction of computer programming with the external environment.
I supplement this definition with a discussion from Paul Scharre in a paper from the Center for a New American Security (CNAS), which can be found here:
http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf On page 9, Scharre defines three operational modes for autonomous systems, depending on the relationship of a human decision-maker to the system's actions.
These are:
Semi-autonomous operation, or "human in the loop." An example could be a sewing machine, which manages the mechanics of each individual stitch but stitches only when specifically instructed to by the operator.
Supervised autonomous operation, or "human on the loop." An example might be a toaster, which carries on its operation with the possibility of human intervention, under the general assumption that humans will step in only in unexpected circumstances. And,
Fully autonomous operation, or "human out of the loop." For example, a thermostat, which regulates temperature all day based on its programming and sensor readings, without any human intervention.
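To make the distinction concrete, here is a minimal sketch of the three modes using a hypothetical thermostat controller. This is my own illustration, not anything from Scharre's paper; the mode names map onto his terms, but the `ask_operator` and `operator_interrupted` hooks are invented for the example.

```python
from enum import Enum

class Mode(Enum):
    SEMI_AUTONOMOUS = "human in the loop"       # acts only when told to
    SUPERVISED = "human on the loop"            # acts, but a human may interrupt
    FULLY_AUTONOMOUS = "human out of the loop"  # acts with no human involvement

def control_step(mode, current_temp, target_temp,
                 ask_operator=lambda: False,
                 operator_interrupted=lambda: False):
    """Decide whether to run the heater for one control cycle."""
    wants_heat = current_temp < target_temp
    if mode is Mode.SEMI_AUTONOMOUS:
        # Human in the loop: every individual action needs explicit approval.
        return wants_heat and ask_operator()
    if mode is Mode.SUPERVISED:
        # Human on the loop: act by default, unless a human has stepped in.
        return wants_heat and not operator_interrupted()
    # Human out of the loop: act purely on programming and sensor input.
    return wants_heat

# Fully autonomous operation consults no human input at all.
print(control_step(Mode.FULLY_AUTONOMOUS, current_temp=17.0, target_temp=20.0))  # True
```

The only thing that changes between the modes is where the human sits relative to each decision, not how sophisticated the underlying programming is.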
These different modes of operation give rise to different sets of concerns and offer different advantages. Semi-autonomous systems make sense for standard supervised learning problems. They are unlikely to have immediate, disastrous consequences, but they can still perpetuate systematic biases if the nature of their decision-making is not understood. Supervised autonomous systems are useful in a broader variety of situations, such as self-driving cars, where a semi-autonomous system would save little human labor. However, in this case a malfunction may unfold too quickly for a human to step in. For a more mundane example, remember how easy it is to burn your toast. Finally, fully autonomous operation is the most dangerous, as any issues that arise may not be addressed by humans until the problems have run their course. For example, a thermostat that heats inappropriately during the summer could cause a fire while the residents are at work and unable to intervene.
Note that while dangers present themselves at a given level of autonomy, it may not make sense to reduce that level, because doing so significantly reduces the value the automation provides. For example, having to set your thermostat every time you want a temperature change is inefficient compared with one that automatically warms the house slightly in the morning to help you wake up.
To understand how to make these tradeoffs when implementing autonomous systems, we need a way to evaluate them. For this, we'll return to the NATO report, which outlines four ethical dimensions of autonomous systems that we will use to frame the discussion of any system we come across. These dimensions are malfunction, misuse, unintended consequences, and benefits.
Malfunction covers many classic instances of problems with autonomy. A colorful example is Robot Santa in Futurama, who declares everyone in the world naughty and attacks. A more realistic example is the misclassification of a financial profile, leading to a loan being denied to a qualified applicant. These issues are easy to conceptualize but difficult to predict. They may also be easy to remedy after the fact; while it may not be possible to prevent an unpredicted accident, once failures are surfaced, most machine learning systems can be trained on the relevant examples and learn from their failings.
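As a rough illustration of that last point, here is a sketch of the "surface the failures, then retrain on them" loop, using scikit-learn on synthetic data. This is my own toy example, not anything from the report, and real remediation is rarely this tidy, but the mechanism is the same: collect the cases the deployed model got wrong and fold them back into training.

```python
# Sketch: retraining a classifier on its surfaced failures (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Surfacing" failures: cases the deployed model got wrong, e.g. qualified
# applicants whose loans were wrongly denied.
failures = model.predict(X_new) != y_new
print("failures surfaced:", failures.sum())

# Fold the failure cases back into the training set and retrain.
X_retrain = np.vstack([X_train, X_new[failures]])
y_retrain = np.concatenate([y_train, y_new[failures]])
model = LogisticRegression(max_iter=1000).fit(X_retrain, y_retrain)

print("accuracy on the previously failed cases:",
      round(model.score(X_new[failures], y_new[failures]), 2))
```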
Misuse primarily refers to adversarial environments. This is of course very important for military applications (the report's original focus) but applies to many other situations. For example, a self-driving car taken over by hackers could be used to kidnap its owner.
Unintended consequences are a unique aspect of autonomous systems. Human endeavors come with natural sanity checks and guidelines, because humans continuously make decisions about the external environment and can understand and report on their reasoning. An algorithm has no such checks: if a loan algorithm puts weight on certain features of an application, applicants may change their behavior in response, perhaps taking out unwise credit lines before applying just to move the algorithm's underlying numbers. Similarly, perhaps you've heard that self-driving cars are great if you're trying to merge, because it isn't dangerous to cut them off.
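Here is a toy numeric example of that gaming effect. Everything in it is invented for illustration: the features, weights, and approval threshold do not come from any real scoring model. If the score puts positive weight on the number of open credit lines, an applicant can cross the approval threshold simply by opening more lines, even though doing so arguably makes them a worse credit risk.

```python
# Toy loan-scoring model; the features, weights, and threshold are invented.
WEIGHTS = {"income_thousands": 0.5, "open_credit_lines": 2.0, "late_payments": -4.0}
APPROVAL_THRESHOLD = 40.0

def loan_score(applicant):
    return sum(WEIGHTS[k] * v for k, v in applicant.items())

applicant = {"income_thousands": 60, "open_credit_lines": 2, "late_payments": 0}
print(loan_score(applicant) >= APPROVAL_THRESHOLD)  # score 34.0 -> False: denied

# The applicant opens several unwise credit lines purely to move the score.
applicant["open_credit_lines"] = 6
print(loan_score(applicant) >= APPROVAL_THRESHOLD)  # score 42.0 -> True: approved
```

Nothing about the applicant's actual ability to repay has improved; only the inputs the model sees have shifted, which is exactly the kind of unintended consequence this dimension is meant to capture.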
Finally, benefits are an important ethical consideration with autonomous systems. All this discussion of possible problems could lead one to believe that autonomous systems are never worthwhile, but any negative consequences of a self-driving car need to be weighed against the benefits of freeing up hours every day for billions of commuters and the possibility of reducing fatal traffic accidents (over 30,000 deaths per year in the US alone). Decisions need to value positive consequences as well as negative ones: while it might seem horrific for self-driving cars to result in ten thousand deaths per year, if switching to a self-driving fleet brought the toll down to that level, it would save roughly twenty thousand lives a year.
Of course, who makes these decisions is also an important consideration, as is how liability is assigned when issues do arise. While it would be a boon for humanity if we could reduce accidents to one third of their current level, it is unlikely that Google would want to commit to $80 billion in liability to cover the cost of every accident in the nation.