‘The Machine Made Me Do It!’ An Examination of the Possible Moral & Legal Agency of Intelligent Computer Systems

AUTHOR
Hannah Haviland

ABSTRACT
This paper will explore the ethical and legal implications of human-computer dependency, concentrating on the possibility of considering intelligent computer decision support systems (DSS) to be agents in both the moral and the legal sense, and thus partially responsible for their decisions. By utilising case studies of the real-life, widespread use of advanced DSS by professionals in the healthcare, finance, and insurance industries, I will examine how the concepts of free will, autonomy, and rationality mediate the related but separate concepts of moral agency and responsibility, from both an ethical and a legal perspective. Case studies and scenarios ground the research, which is by nature hypothetical, in the real world. Ethicists are paying increasing attention to the questions of moral agency and responsibility that might arise with intelligent computer systems; however, little current work analyses the issue using concrete examples from the sectors in which it is most prevalent. In highlighting current and potential problems in DSS use, the paper also hopes to make an original contribution to the current Best Practices dialogue.

At present, few philosophers and even fewer legal scholars would argue that DSS can be considered agents in either the moral or the legal sense, for a number of reasons. Such systems lack the supposedly solely ‘human’ criteria necessary to establish agency, namely self-awareness, moral reasoning ability, rationality, and autonomy (a list that is open to debate, depending on one’s perspective, e.g., Kantian vs. utilitarian). From a traditional ethical perspective, such intelligent computer systems lack the criteria for moral agency, even if, as technological artefacts, they do reflect and reinforce values. (Even in the case of human moral agency, ethicists would be hard-pressed to agree on the specific reasoning paths that agents must follow in order to deliberate ethically.) From a traditional legal perspective, legal responsibility for the quality and impact of such computer systems most often falls somewhere along a spectrum running from the system’s user to its vendor to its designer. The same spectrum is used in most traditional ethical analyses to assign moral responsibility and, to a lesser extent, moral agency.

But the advent of artificial intelligence (AI), including the development of ‘affective computing,’ appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realisation that fully autonomous, self-aware, even rational and emotionally intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal in warning of the ways in which such systems may alter our understanding of computer ethics (e.g., Weizenbaum 1995; Picard 1997). Now tech-minded philosophers are beginning to show interest (e.g., Allen et al. 2000; Bechtel 1985; Snapper 1985; Moor 1995). Indeed, many of the ethically concerned in these fields have discussed the possibility and feasibility of building ‘artificial moral agency’ (AMA) into such systems.

Modern DSS in healthcare and finance, for example, more often than not feature some level of AI. [The brevity of this abstract does not permit more than a brief review of the case studies; for illustrative purposes, I will summarise just one, drawn from the healthcare industry.] In healthcare, GIDEON, HELP, and Iliad are just a few of the AI systems in routine use in primary care and at the point of care (systems listed in the Open Clinical directory compiled by Enrico Coiera, available on-line at http://www.openclinical.org/aisinpractice.html). Healthcare professionals rely to various degrees on these systems for diagnosis, treatment routes, and education. The design and operation of these systems pose a number of potentially problematic issues, which influence the way the user operates and navigates the system and how she interprets and applies the information or decisions it provides. Here I cite just a few issues that can arise when a doctor uses an advanced DSS. For example, a DSS can restrict the doctor’s treatment or diagnostic options to only those options programmed into the system at its conception. In addition, the architecture of the system’s reasoning and logic paths is seldom transparent, which means that the doctor might not be able to understand why the DSS reached its decision. In short, the DSS frames the doctor’s or other healthcare professional’s decision-making process, to various degrees. Does this have any effect on the conditions required for moral agency? Does it affect the criteria for moral or legal responsibility, or both?
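Before addressing those questions, it may help to make the first two issues concrete. The minimal sketch below is purely illustrative: the rules, conditions, and scores are hypothetical and are not drawn from GIDEON, HELP, Iliad, or any other real system. It shows how a simple rule-based DSS can recommend only what its designers encoded at conception, and how little of its internal reasoning reaches the clinician.

```python
# Minimal, hypothetical sketch of a rule-based diagnostic DSS.
# The rule base is fixed at design time; the clinician sees only the
# ranked output, not the reasoning that produced it.

RULE_BASE = {
    # condition: findings the designers chose to encode
    "influenza": {"fever", "cough", "myalgia"},
    "pneumonia": {"fever", "cough", "dyspnoea"},
}

def recommend(findings: set[str]) -> list[tuple[str, float]]:
    """Rank only the conditions present in RULE_BASE; anything the
    designers did not anticipate simply cannot be suggested."""
    scores = []
    for condition, expected in RULE_BASE.items():
        overlap = len(findings & expected) / len(expected)
        if overlap > 0:
            scores.append((condition, round(overlap, 2)))
    # The clinician receives this ranking with no trace of *why*
    # each score was assigned.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(recommend({"fever", "cough", "rash"}))
# [('influenza', 0.67), ('pneumonia', 0.67)] -- 'rash' is silently ignored
```

Even in this toy example, a finding outside the pre-programmed rule base is silently discarded, and the ranking arrives without the justification a doctor would need in order to evaluate its soundness.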

Of course, a DSS is essentially a tool. The doctor has direct contact with, and ultimate responsibility for, the patient, and no doctor would be acting in a legally or ethically sound manner if she either implemented or ignored the DSS’s recommendations per se, without further deliberation. But, given the very possible shortcomings of such systems, shortcomings which mediate the doctor’s ability to evaluate the soundness of the decision, should the doctor still be classified as the moral agent? Should she bear all moral or legal responsibility? Should the hospital bear partial moral responsibility if by law it must share legal responsibility? What about the system’s programmer? Or its vendor? Given that many healthcare providers today rely on (if not demand) DSS largely for utilitarian reasons (to shorten and improve treatment, to decrease education costs, and so on), and given the legal risks that such a system poses to the doctor, it would seem that the issues of moral and legal agency deserve a much closer look, one which also considers as yet unexplored alternatives. Because these issues arise wherever advanced DSS are in use today, the research has widespread relevance.