Thus, to create a robot capable of the type of consent required for moral responsibility in risk‐taking, we must create a Kantian‐autonomous robot. But even if that were possible, creating such a robot cannot yet be justified from an ‘acceptable risk’ ethical perspective. Relatedly, to make the deployment of robots morally acceptable, we must at all costs avoid the possibility of rampancy, i.e., an AI overwriting its own programming, at least as regards the most fundamental aspects of its goals, such as the LOW and ROE. Such a robot would have the potential
to leave behind its imposed slave morality and become autonomous in the Kantian sense: the
programmer of its own self and its own goals, or the maker of its own destiny. Not only would such rampant robots pose incredible risks to humans, but they would also be undesirable from a military ethics and responsibility perspective: they would shift moral
responsibility from the commanding officer to the robot itself. But the refusal (and current inability)
to create a Kantian‐autonomous robot solves Sparrow’s dilemma: moral responsibility for the robot’s actions remains with the commanding officer who deploys it. So for the foreseeable future, we solve both the problem of risk and the problem of responsibility by requiring a slave morality.