In this paper I discuss the attempt by Coleman (1996) to suggest a mechanism for attributing a Kantian morality to computers. This is part of Coleman's larger project of positing the likelihood of computers being moral agents. Here I will provide an outline of Coleman's account, show what I think is wrong with it, what I think is right with it, and, lastly, suggest an alternative approach to computers being Kantian moral persons.
In my outline of Coleman's account I provide a non-evaluative summary. Coleman begins with the thought that at some point we need to stop and consider the moral status of computational machines. She goes on to say that a great deal of the literature on the subject misses an important point when authors try to show that computers can, or cannot, be moral in the human sense; that is, the authors try to attribute human moral characteristics to computers. She then gives an account of Kant's moral theory, including his conception of realms. She separates the notions of freedom and autonomy and describes how Kant reconciles freedom with determinism by locating freedom in the intelligible realm. She then draws an analogy between Kant's realms and the components of computers (hardware and software). Coleman presents a strategy for (re)creating the Categorical Imperative in computers. She then turns her attention to problem solving and considers a number of common problem-solving strategies. She says that they all have logic as their basis and that computers, being the ultimate logic machines (here she invokes the idea of a Turing Machine), are likely to be able to implement them. For this she takes a programming approach, which she calls the programming of problem solving, and introduces a number of programs (Prational, Papply, Putility and Pinterest) for deriving moral principles. Eventually she concludes that computational personhood is possible.
After this summary I go on to critique her account. The shortcomings I find are: moral considerability vs moral persons; computer parts vs realms; freedom vs logic; understandability; with vs from; and what she calls missing bits. I will show why I think that these parts of her paper are problematic and introduce an alternative approach.
I claim that Coleman makes a straightforward mistake in conflating consideration with personhood. While all living entities (and possibly some others as well) are due moral consideration of some sort, this does not mean they are persons. We consider all sorts of entities (trees, dogs, etc.) when carrying out moral deliberation, but no one has so far claimed that trees are persons of any sort, especially moral persons. So that cannot be what she means. This leaves the thought that the entity in question is considerable, that is, deserving of consideration. Coleman has something particular in mind when she says that an entity gains moral considerability sufficient to make it a person. That something is the distinction between acting in accord with and acting from the moral law. This is the distinction that Kant thought made for moral persons.
In order to make the idea that computers might be Kantian moral persons seem more plausible, Coleman asks how a computer might fit in with Kant’s sensible and intelligible realms. She draws an analogy between the parts of a computer (specifically a Turing machine) and the realms. I show that the use of Turing Machines in this context is faulty. I also show that the hardware/sensible and software/intelligible analogy is flawed and offer an alternative interpretation.
Coleman also faces difficulties in her proposal of programs for implementing moral strategies. She suggests that a program, named Prational, be created to implement Kant's moral theory. She then points out a clear difficulty with this and suggests a corrective: incorporating programs for deriving moral principles. But these correctives are not without their own problems, notably the stopping problem. There are also further things missing from Coleman's account. The first is any detail on what might constitute Prational; much would need to be said to make the program seem possible. The second is that there is simply nothing said about the different kinds of beings that could be considered rational.
Given that Coleman's approach has its difficulties, why have I bothered discussing it at all?
It turns out that there are some good things to take away from all this. Coleman has provided an interesting initial sounding board from which to pursue Kant's moral theory as it might be applied to computers. However, much more needs to be said to cash out the ideas.
In the final part of the paper I introduce and sketch an alternative approach that can address some of Coleman's shortcomings, such as the moral considerability problem. I use Floridi and Sanders' notion of Levels of Abstraction as well as Perri 6's idea of a Ladder of Autonomy to propose a schema called artificial ethics (Æ) whereby agents can be assessed to determine whether they are indeed moral agents. This schema allows any being to be considered for moral agency. This is done along a spectrum of 2304 possibilities and without having to resort to ascriptions of human moral agency. It also avoids the temptation of asking questions that make comparisons between differing moral beings. Questions such as "Are computers like animals?" or "Are computers like human children?" are rendered unnecessary, perhaps even pointless. I discuss the merits of my approach.
Moral beings can be compared, individually and independently, against the criteria to see where they fall. This is important because a moral being ought to determine its behaviour towards other beings based on the nature and extent of the moral concern that attaches to those beings. Nevertheless, the mere fact of where something falls within the schema is not, of itself, of particular moral significance.