Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?

AUTHOR
Kenneth Einar Himma

ABSTRACT

The idea of agency is conceptually associated with the idea of being capable of doing something that counts as an act. Agents are intentional beings that perform acts and hence do things. People and dogs are both capable of performing acts, and both seem to be agents; people, however, are rational agents while dogs are not, because only people can deliberate on reasons. In contrast, trees are not agents; a tree grows leaves, but growing leaves is not something that happens as the result of an act on the tree’s part.

One can distinguish natural agents from artificial agents. Some agents are natural in the sense that their existence can be explained by purely biological processes and states; people and dogs are natural agents insofar as they exist in consequence of biological reproductive capacities and are hence biologically alive. Some agents might be artificial in the sense that they are manufactured by intentional agents out of pre-existing materials external to the manufacturers; such agents are artifacts. Highly sophisticated computers are clearly artificial and would be artificial agents if they satisfied the criteria for agency.

Although only an agent can be a moral agent, agency is different from moral agency. The idea of moral agency is conceptually associated with the idea of being accountable for one’s behavior. To say that one’s behavior is governed by moral standards and hence that one has moral duties or moral obligations is to say that one’s behavior should be guided by and hence evaluated under those standards. Something subject to moral standards is accountable (or morally responsible) for its behavior under those standards.

These are comparatively uncontroversial conceptual claims (i.e., claims about the content of the concept). As the Routledge Encyclopedia of Philosophy explains the notion, “[m]oral agents are those agents expected to meet the demands of morality.” According to the Stanford Encyclopedia of Philosophy, “a moral agent [is] one who qualifies generally as an agent open to responsibility ascriptions.”

It is generally thought that, at the most basic level, there are two capacities that are individually necessary and jointly sufficient for moral agency. The first is the capacity to freely choose one’s acts. The idea here is that, at the very least, one must be the direct cause of one’s behavior in order to be characterized as freely choosing that behavior; something whose behavior is directly caused by something other than itself has not freely chosen its behavior. If, for example, A injects B with a drug that makes B so uncontrollably angry that B is helpless to resist acting on that anger, then B has not freely chosen his or her behavior. Only an agent can be a moral agent; indeed, only a free agent can be.

The second capacity necessary for moral agency is related to rationality. As traditionally expressed, it is knowledge of right and wrong; someone who does not know the difference between right and wrong is not a moral agent and is not appropriately censured for her behavior. This is, of course, why we do not punish people with severe cognitive disabilities, such as a psychotic condition that interferes with the ability to understand the moral character of one’s behavior.

This requires a number of capacities. First, and most obviously, it requires a minimally adequate understanding of moral concepts like “good,” “bad,” “obligatory,” “wrong,” and “permissible,” and thus the capacity to form and use concepts. Second, it requires an ability to grasp at least those moral principles we take to be basic, like the idea that it is wrong to intentionally cause harm to human beings unless they have done some sort of wrong that would warrant it (which might very well be a principle that is universally accepted across cultures). Third, it requires the ability to identify the facts that make one rule relevant and another irrelevant; for example, one must be able to see that pointing a loaded gun at a person’s head and pulling the trigger implicates the rules governing intentional harm. Finally, it requires the ability to correctly apply these rules to the paradigm situations that constitute their meaning. Someone who has the requisite ability will be able to determine that setting fire to a person is morally prohibited by the rule governing murder.

While the necessary conditions for moral agency as I have described them do not explicitly contain any reference to consciousness, it is reasonable to think that each of the necessary capacities presupposes consciousness. The idea of accountability, which is central to the meaning of “moral agency,” is sensibly attributed only to conscious beings. It seems irrational to praise or censure something that is not conscious, no matter how otherwise sophisticated its computational abilities might be. Praise, reward, censure, and punishment are rational responses only to beings capable of experiencing conscious states like pride and shame.

This paper will explore whether consciousness is a necessary condition for moral agency and, if so, what this tells us about the possibility of artificial agents that are moral agents in the sense that they are properly held accountable for their behavior. The essay will begin with an analysis of the concepts of agency, artificial agency, natural agency, and moral agency. It will continue with a meta-ethical analysis of the properties something must have to be accountable for its behavior and hence to be a moral agent. It will then consider whether consciousness is implicitly presupposed by these conditions. Finally, it will apply the preceding analysis to the question of whether computers or other artificial agents can be moral agents.

The substance and methodology of this essay are fairly characterized as multi-disciplinary, drawing together elements from conceptual analysis, meta-ethics, information ethics, philosophy of mind, and philosophy of computing. It will also draw on the existing literature on artificial and moral agency. The paper seeks to shed light on important moral problems that will arise as computing technologies become ever more complex, powerful, and sophisticated.