Autonomous Weapons’ Ethical Decisions: “I’m Sorry, Dave. I’m Afraid I Can’t Do That.”

AUTHOR
Don Gotterbarn

ABSTRACT

Approaches to ethical analysis can be divided in a number of ways: by which ethical theory is adopted (utilitarianism, Kantianism, Aristotelianism) and by which reasoning methodology is used (algorithmic or heuristic). The modes of reasoning are orthogonal to the ethical theories: in addressing an ethical problem, the analyst can take either a heuristic or an algorithmic approach to, say, a Kantian analysis. The resulting ethical judgments are held with varying degrees of certitude depending on the complexity of the situation. An algorithmic approach can be automated in software. Examining the ways in which ethical judgments are being automated will shed some light on ethical analysis in general.
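
This orthogonality can be made concrete with a small sketch. The example below is a hypothetical toy, not drawn from the paper: it applies the same utilitarian theory in two modes, an algorithmic mode that exhaustively computes total utility over all options, and a heuristic mode that satisfices with a rule of thumb. The option names and utility numbers are invented for illustration.

```python
# Toy illustration (hypothetical): one ethical theory (utilitarianism),
# two modes of reasoning applied to the same options.

def algorithmic_utilitarian(options):
    """Algorithmic mode: exhaustively score every option and return the
    one with the highest total utility. Deterministic and automatable."""
    return max(options, key=lambda o: sum(o["utilities"]))

def heuristic_utilitarian(options):
    """Heuristic mode: satisfice -- accept the first option that harms
    no affected party (all utilities non-negative), else defer judgment."""
    for o in options:
        if all(u >= 0 for u in o["utilities"]):
            return o
    return None  # no clearly acceptable option; the judgment is deferred

options = [
    {"name": "act A", "utilities": [5, 3, -4]},  # highest total, but someone is harmed
    {"name": "act B", "utilities": [2, 1, 0]},   # modest total, no one harmed
]

print(algorithmic_utilitarian(options)["name"])  # "act A"
print(heuristic_utilitarian(options)["name"])    # "act B"
```

The same two modes could just as easily be applied to a Kantian test, which is the orthogonality the abstract points to.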

Computing has been used to guide, control, and monitor unmanned devices, which have been deployed in a variety of sectors: searching dangerous mines, removing nuclear waste, and so on. The military has recently advocated the use of unmanned aerial vehicles (UAVs), in part because of their increased endurance and because they remove humans from immediate threat. Currently these devices still require several human operators. However, there is an effort to design systems so that the current many-to-one ratio of operators to vehicles can be inverted, and eventually all operators replaced by fully autonomous UAVs.

UAVs were originally primarily reconnaissance devices, but some are now lethal weapons. They have been modified to deliver death in precisely targeted attacks and used in ‘decapitation attacks’, targeting the heads of militant groups. They do this effectively and reduce casualties on the side using them. The decision to take a human life normally requires ethical attention. In many cases the ‘pilots’ fly these lethal missions in Iraq and Afghanistan from control rooms in Nevada, USA. There are at least two relevant ethical problems with this type of remote control. The distances involved provide a ‘moral buffer’ for the ‘pilot’, which reduces accountability (the felt need for moral analysis) and may shield the pilot from the psychological damage of killing others. An increase in automation has an inverse effect on accountability.

The distance and the speed of decision making also lead to ‘automation bias’. Several types of bias infect our decision making. Sometimes people exhibit confirmation bias: they seek out information which confirms their prior opinion and ignore information which refutes it. Sometimes we modify information which contradicts our ideas so that it can be assimilated into our preexisting ideas. When presented with a computer solution which is accepted as correct, people may exhibit automation bias and disregard, or not look for, contradictory information. Automation bias produces errors of omission, where users fail to notice important issues, and errors of commission, where they follow a computerized directive without further analysis.
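
The omission/commission distinction can be made concrete with a purely illustrative simulation; the error rates, threat frequency, and function names below are all hypothetical. A biased operator simply adopts the decision aid’s output, so the aid’s silences become errors of omission and its false directives become errors of commission.

```python
import random

# Hypothetical toy simulation of automation bias. The decision aid is
# imperfect; a biased operator adopts its output without cross-checking.
random.seed(1)

def aid_recommends(threat_present):
    """Imperfect decision aid: roughly 10% miss rate, 10% false-alarm rate."""
    if threat_present:
        return random.random() > 0.10   # usually flags real threats
    return random.random() < 0.10       # occasionally flags harmless cases

omissions = commissions = 0
for _ in range(10_000):
    threat = random.random() < 0.5
    flagged = aid_recommends(threat)
    # The biased operator acts on the aid's output alone.
    if threat and not flagged:
        omissions += 1     # error of omission: aid stayed silent, threat missed
    if not threat and flagged:
        commissions += 1   # error of commission: directive followed, no threat

print(f"errors of omission:   {omissions}")
print(f"errors of commission: {commissions}")
```

A vigilant operator who cross-checked independent information would catch some of both error types; automation bias is precisely the failure to perform that check.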

There is an effort to develop ethical decision-making models which can be fully automated in fully autonomous ethical UAVs. The UAV (robot) can choose to use force, and control the lethality of that force, based on rule-based systems (Arkin 2009) or value sensitive design (Cummings 2004).
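
Arkin’s proposal centers on constraint-based control of lethal behavior (an ‘ethical governor’). His architecture is far richer than can be shown here, but a minimal rule-based check conveys the idea; the sketch below is a hypothetical illustration, and its specific constraints and field names are invented, not Arkin’s.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """Hypothetical description of a proposed use of force."""
    target_is_confirmed_combatant: bool
    expected_collateral_harm: float      # estimated, 0.0 .. 1.0
    protected_site_in_blast_radius: bool
    military_necessity: float            # estimated, 0.0 .. 1.0

def governor_permits(e: Engagement) -> bool:
    """Rule-based governor: force is withheld unless every constraint
    holds. The default is always 'do not engage'."""
    constraints = [
        e.target_is_confirmed_combatant,
        not e.protected_site_in_blast_radius,
        # crude proportionality test: necessity must outweigh harm
        e.military_necessity > e.expected_collateral_harm,
    ]
    return all(constraints)

e = Engagement(target_is_confirmed_combatant=True,
               expected_collateral_harm=0.6,
               protected_site_in_blast_radius=False,
               military_necessity=0.3)
print(governor_permits(e))  # False: fails the proportionality constraint
```

The significant design choice is the conservative default: when any rule fails, the system withholds force rather than engaging.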

The decision-making process related to UAVs is similar to the way we make ethical judgments. The management control of UAVs can be designed with several levels of autonomy, and a taxonomy of autonomous decision making is analogous to the way we make ethical decisions. The level of autonomy varies depending on external information and constraints and on the mode of reasoning used to process that information. An analysis of the strengths and weaknesses of the levels of autonomy appropriate to UAV management decisions sheds light on the nature of ethical decision making in general and on how computers should and should not be involved in those decisions.
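
One concrete version of such a taxonomy appears in the cited Parasuraman, Sheridan, and Wickens (2000) paper, which uses a ten-point scale of levels of automation. The sketch below paraphrases that scale; the human_can_intervene helper is a hypothetical illustration of how a designer might gate decisions by level, not something from the paper.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Ten-point scale of levels of automation, paraphrased from
    Parasuraman, Sheridan, and Wickens (2000)."""
    NO_ASSISTANCE         = 1   # human takes all decisions and actions
    OFFERS_ALTERNATIVES   = 2   # computer offers a full set of alternatives
    NARROWS_ALTERNATIVES  = 3   # narrows the selection down to a few
    SUGGESTS_ONE          = 4   # suggests a single alternative
    EXECUTES_ON_APPROVAL  = 5   # executes the suggestion if the human approves
    VETO_WINDOW           = 6   # human has limited time to veto before execution
    EXECUTES_THEN_INFORMS = 7   # executes automatically, then informs the human
    INFORMS_IF_ASKED      = 8   # informs the human only if asked
    INFORMS_IF_IT_DECIDES = 9   # informs the human only if it decides to
    FULLY_AUTONOMOUS      = 10  # decides everything, ignoring the human

def human_can_intervene(level: AutomationLevel) -> bool:
    """Hypothetical policy check: at level 6 and below, a human approval
    or veto still stands between the computer's choice and its execution."""
    return level <= AutomationLevel.VETO_WINDOW

print(human_can_intervene(AutomationLevel.EXECUTES_ON_APPROVAL))  # True
print(human_can_intervene(AutomationLevel.FULLY_AUTONOMOUS))      # False
```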

The analysis of UAV decision support systems uses a taxonomy of levels of autonomy in decision making, an analysis of types of decision bias, and a taxonomy of moral accountability. Using these models in the analysis of approaches to automated UAV decisions, it is argued that a single-mode approach to ethical decisions, whether purely heuristic or purely algorithmic, is limited and likely to lead to poor decisions. An adequate approach to ethical decision making requires both. Further, the use of an automated algorithmic approach (implemented in software) to track and reduce the complexity of a problem needs to address automation bias and ensure the presence of ethical accountability.
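
One hypothetical way to render that conclusion as a design pattern (a sketch, not the paper’s own architecture): let an algorithmic filter reduce the problem, but route any authorization of force through a human heuristic judgment, and log both steps so accountability survives the automation. All names, fields, and thresholds below are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("accountability")

def algorithmic_screen(candidate):
    """Algorithmic step (automatable): applies hard rules and attaches a
    confidence score. It never authorizes force on its own."""
    if candidate["protected_site_nearby"]:
        return "reject", 1.0
    return "refer", candidate["id_confidence"]

def decide(candidate, human_judgment):
    """Hybrid decision: the machine may only reject or refer; authorizing
    force always requires the human's heuristic judgment, and every step
    is logged to preserve accountability."""
    verdict, confidence = algorithmic_screen(candidate)
    log.info("machine: %s (confidence %.2f)", verdict, confidence)
    if verdict == "reject":
        return False
    decision = human_judgment(candidate, confidence)
    log.info("human: %s", "authorize" if decision else "withhold")
    return decision

# A deliberately skeptical human policy: withhold unless identification
# confidence is high. (Both the threshold and the fields are hypothetical.)
cautious = lambda c, conf: conf >= 0.95

print(decide({"protected_site_nearby": False, "id_confidence": 0.80}, cautious))
```

The point of the log is the accountability requirement: an after-the-fact record shows which step, machine or human, made each contribution to the decision.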

REFERENCES

-Arkin, R.C., “Ethical Robots in Warfare,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.

-Collins, W. and Miller, K., “Paramedic Ethics for Computer Professionals,” Journal of Systems and Software, 1992.

-Cummings, M.L., “Automation Bias in Intelligent Time Critical Decision Support Systems,” AIAA 1st Intelligent Systems Technical Conference, September 2004.

-Gotterbarn, D., “Informatics and Professional Responsibility,” in Computer Ethics and Professional Responsibility, eds. Bynum, T. and Rogerson, S.

-Nissenbaum, H., “Computing and Accountability,” Communications of the ACM, Volume 37, Issue 1, January 1994.

-Parasuraman, R., Sheridan, T.B., and Wickens, C.D., “A Model for Types and Levels of Human Interaction with Automation,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 30, No. 3, pp. 286-297, May 2000.

-Ruff, H.A., Narayanan, S., and Draper, M.H., “Human Interaction with Levels of Automation and Decision-Aid Fidelity in the Supervisory Control of Multiple Simulated Unmanned Air Vehicles,” Presence, Vol. 11, pp. 335-351, 2002.

-Sharkey, N., “Death Strikes from the Sky: The Calculus of Proportionality,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.

-Sparrow, R., “Predators or Plowshares? Arms Control of Robotic Weapons,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.