Computers as Surrogate Agents


Deborah G. Johnson and Thomas M. Powers


Computer ethicists have been intrigued by the idea that computers might develop to a point at which they have intelligence of a kind that would justify treating them as moral agents, both in the sense that we would be reluctant to turn them off and in the sense that we would consider them responsible for good and evil deeds. Claims about this possibility generally lead to debates about the nature of intelligence (artificial and human) and agency. A “moral Turing test” has even been proposed to identify which computer systems might have moral intelligence or agency.

Generally, the debate over whether computers could ever be moral agents has presumed a concept of moral agency in which an individual acts from a first-person point of view. According to this account, an individual’s actions are directed at fulfilling personal desires and interests, and morality is a constraint on how those interests can be pursued, in light of the interests of others. However, there is a special, distinct kind of human moral agency known as ‘role morality’ in which individuals act as agents of others. In this case, the agent acts within the constraints of a role and as the representative of others; role or “surrogate” agents pursue the desires and interests of their clients, not themselves. This is not to say that individuals acting as agents have no personal interest in their actions. Rather, the individual’s personal interest is in fulfilling the role, and this involves acting on behalf of a client. We will refer to this form of moral agency as ‘surrogate agency.’ Surrogate agents act in fairly well-specified social roles, such as stockbroker, lawyer, and manager of performers and entertainers.

In this paper we will argue that surrogate agency is a good model for thinking about the morality of computer agents. We will identify and delineate the ways in which computer agents are like and unlike human surrogate agents and what these similarities and differences mean for the morality of computer agents.

When a surrogate agent is hired by a client, the agent is authorized to engage in a range of activities directed at achieving a positive outcome for the client. Similarly, computer agents are put into operation to engage in activities aimed at an outcome desired by the user, that is, the person who deployed the computer program. Not only are human surrogate agents and computer agents both directed toward the interests of their clients/users; both are also given information by their clients/users and expected to behave within certain constraints. For human surrogate agents, there generally are social and legal understandings such that when the behavior of a surrogate agent falls below a certain standard of diligence or authority, the client can sue the agent, and the agent can be found liable for his or her behavior. This suggests that standards of diligence and authority should be developed for computer agents, perhaps before they are put into operation.

Several important issues arise when we look at surrogate agency and try to transfer what we learn there to computer agents. Of particular importance is the role of information in the proper functioning of a surrogate agent. An agent can properly act on behalf of a person only if the agent has accurate information relevant to the performance of the agent’s duties. For human surrogate agents, the responsibility to gather and update information often lies with those agents. For computer agents, the adequacy of information seems to be a function of the program and the person whose interests are to be served. Of course, the privacy and security of this information, in digitized form, are well-known concerns for computer ethics. A new aspect of information privacy and security is raised by computer surrogate agency: can computer programs “know” when it is appropriate to give up information (perhaps to governments or marketing agencies) about their clients?

In the last part of the paper, we will consider some broader issues that follow from our account of human and computer surrogate agency. What are the rights and responsibilities in the relationships between human surrogate agents and their clients? How far can surrogate agents go to achieve the wishes of the client? If the surrogate agent is acting on behalf of a client, is this agent absolved of moral responsibility for the action and its consequences? Transferring these questions to the case of a computer agent leads to further questions about the rights and responsibilities of computer agents and users, the limits that might be placed on computer agents, and the accountability of computer agents and/or their designers and the form this accountability might take.