Socially Responsible (Moral) Autonomous Software Agents

AUTHOR
Richard Lucas

ABSTRACT

“I’m sorry, Dave, I can’t do that”

Is it possible, or even desirable, for Autonomous Software Agents (ASAs) to be socially responsible? Since it is usually agreed that being socially responsible implies a sense of morality, some steps towards answering this large question can be taken by asking two smaller questions: ought ASAs to be ethical, and can they be? These are the questions I will answer in this paper.

This paper presents two claims and a discussion resulting from those claims.

The two claims are: first, that we ought to demand that an ASA be morally responsible; and second, that ASAs, at present, are not morally responsible. That these claims are at odds leads to a discussion of how the issues pose a challenge for the citizen of the Information Society.

The first claim
This claim stems from the commonplace observation that we are relinquishing more and more control of our lives to computer-controlled technology (e.g. intensive-care units, autopilots, and the like). The consequence of this relinquishing is that we are taking less and less active part in decisions which have moral import (Do I crash into a building or a cornfield?).

Should we do this? That is to say, ought we to consider more carefully the degree to which we give computers effective control over morally charged parts of our lives, and why? This leads further to questions such as: What moral controls ought to be built into computers? And what does this mean for our notions of moral responsibility? The origin and implications of these questions are explored in this paper. These matters will form the basis of an examination of how the issues provide a challenge for the citizen of the Information Society, that is, why it matters that ASAs are being used to do things for people.

The second claim
I do not make the strong assertion that it is not possible for any ASA to be morally responsible but rather the more modest one that, by way of example, at least two attempts to imbue ASAs with morals fail.

To substantiate the second claim I will use two models, namely Asimov’s 3 Laws of Robotics and the BDI (Belief-Desire-Intention) model of software agency.

For the 3 Laws of Robotics, I will use the example of the September 11 disaster to examine the effectiveness of such a computer. Would an autonomous moral computer have produced better consequences on September 11, such that thousands of workers and passengers might not have died? It seems that some kind of computer control, moral computer control, would be just the thing in situations like this. I will show that this candidate Æ (Artificial Ethics, defined below) could be circumvented. All three laws, in this example, are either easily bypassed or are potentially trapped in a deep, if not infinite, regression of conditions and exceptions.
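
By way of illustration only, a literal encoding of the First Law might be sketched as follows. Every predicate, threshold, and casualty figure here is invented for the example; the point is that each exception clause introduces further terms (a model of “inaction”, a confidence threshold, casualty forecasts) that themselves demand conditions and exceptions.

    # Toy sketch of a literal First Law check; all names and numbers are invented.
    CONFIDENCE_THRESHOLD = 0.9  # who chooses this number, and on what grounds?

    def harms_human(action, world):
        # An action "harms" if the world model predicts any casualties from it.
        return world["predicted_casualties"].get(action, 0) > 0

    def first_law_permits(action, world):
        """Return True if 'action' may proceed under a literal First Law."""
        if not harms_human(action, world):
            return True
        # Exception: the harm forecast may be uncertain -- but how uncertain is
        # too uncertain, and who supplies the threshold?
        if world["prediction_confidence"] < CONFIDENCE_THRESHOLD:
            return False  # or True? The Law itself gives no answer.
        # Exception: inaction may harm more humans -- but "inaction" must then
        # be modelled as an action with its own predicted consequences.
        if world["predicted_casualties"].get("do_nothing", 0) > \
                world["predicted_casualties"].get(action, 0):
            return True
        return False

    # A crude dilemma: crash into a building or a cornfield.
    world = {
        "predicted_casualties": {"crash_building": 3000,
                                 "crash_cornfield": 60,
                                 "do_nothing": 3000},
        "prediction_confidence": 0.5,
    }
    print(first_law_permits("crash_cornfield", world))
    # False: with an uncertain forecast the literal Law blocks even the lesser harm.

Each clause added to rescue the Law calls for yet another judgement (threshold values, how to individuate “inaction”, whose forecasts to trust), which is the regression of conditions and exceptions referred to above.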

For the BDI model of software agency, I will show that it flounders conceptually. I will examine the AI field’s (or at least that part of it which proposes ASAs) conception of autonomy, but will concentrate especially on its conceptions of belief, desire, and intention, and show that the uses and definitions (implied and explicit) of these terms are sometimes contradictory, at other times confusing, and always short of any reasonable person’s expectations of how such terms ought to be used, especially in moral discourse.
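
To make the target of this critique concrete, a generic BDI-style deliberation loop might look like the following sketch; the names and data structures are illustrative rather than drawn from any particular implementation. Note how thin the senses of “belief”, “desire”, and “intention” are here: labelled containers and a matching rule, nothing more.

    # Minimal, illustrative BDI-style agent; all names are invented for the sketch.
    from dataclasses import dataclass, field

    @dataclass
    class BDIAgent:
        beliefs: set = field(default_factory=set)       # facts the agent currently stores
        desires: list = field(default_factory=list)     # (goal, precondition) pairs it might adopt
        intentions: list = field(default_factory=list)  # goals it has committed to

        def perceive(self, percepts):
            # "Belief revision" here is nothing more than adding strings to a set.
            self.beliefs |= set(percepts)

        def deliberate(self):
            # A "desire" becomes an "intention" when its precondition string is
            # already believed. No valuing, wanting, or willing takes place.
            for goal, precondition in self.desires:
                if precondition in self.beliefs and goal not in self.intentions:
                    self.intentions.append(goal)

        def act(self):
            # Execute (here, merely print) the first committed goal, then drop it.
            if self.intentions:
                print("executing:", self.intentions.pop(0))

    agent = BDIAgent(desires=[("divert_aircraft", "course_conflict_detected")])
    agent.perceive({"course_conflict_detected"})
    agent.deliberate()
    agent.act()  # prints: executing: divert_aircraft

Whether storing a string, matching it against a rule, and printing an action can bear the moral weight that “believing”, “desiring”, and “intending” carry in ordinary moral discourse is precisely what is at issue.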

The failure of these examples leads to the suspicion that the problems which undercut them lie deep: deep in the nature of the sorts of beings that can be moral, of moral theories, and of the notions of responsibility and control. These conclusions also lead to the same place as those of the first claim: what the challenges are, and why it matters that ASAs are being used to do things for people.

Progress

If these failed attempts are symptomatic of the enterprise as a whole, then we ought to be led to: a) re-examine the first claim, b) examine what the current (moral) state of computers means for us, c) examine what is possible (morally speaking) for ASAs, and d) develop a preliminary sketch of a possible typology for assessing moral theories, to see whether they would be appropriate for ASAs.

This paper discusses a) and b), but not c) and d). As a portent of future papers, c) and d) lead to the idea of a classification of moral theories that would be appropriate for ASAs, and a typology of such theories. The classification of such possible moral theories I have called Artificial Ethics (Æ). These are not to be confused with Danielson’s Artificial Morality, though there may very well be theories which fall into both camps. I point to the work of Coleman on Kantian computers, and of Van den Hoven and Lokhorst on deontic logics, as possible candidates for inclusion in the Æ schema.

Concluding remarks

Perhaps the expectations implied by merely asking the opening question set the bar too high, demanding too much of the entities that are, by default, taking control and responsibility. But it seems that what is being asked of computers is, and ought to be, no more demanding than what we would ask of people. What would be the point of making the demands lower for computers than for people? The answer to this question seems to imply that accepting less in the way of moralizing is to create a new class of morally constrained entities. Making the standards higher for computers than for people, while initially seeming attractive, is also problematic. Just how high do we make any such standards, and are we risking creating moral superiors and, perchance, moral saints? This, incidentally, is what futurists such as Warwick (2000) and Moravec (1999) predict, encourage, indeed embrace.