Moral Responsibility for Computing Artifacts: “The Rules” and Issues of Trust

AUTHORS
F. S. Grodzinsky, K. Miller and M. J. Wolf

ABSTRACT

“The Rules” are a collaborative document (started in March 2010) that states principles of responsibility that apply when a computing artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules. The Rules are available at https://edocs.uis.edu/kmill2/www/TheRules/.

Unlike most codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign on. The Rules emphasize that both users and professionals have responsibilities in the production and use of computing artifacts. In this paper, we use The Rules to examine issues of trust.

Building on the theories of Floridi and Sanders (2001) and Floridi (2008), Grodzinsky, Miller and Wolf have used levels of abstraction to examine ethical issues created by computing technology (see Grodzinsky et al. 2008 and Wolf et al. 2011). They used three levels of abstraction in that analysis: LoA1, the users’ perspective; LoA2, the developers’ perspective; and LoAS, the perspective of society at large. Their analysis of quantum computing and cloud computing focused on computing professionals at LoA2 delivering functionality to users at LoA1 (Wolf et al. 2011). Their emphasis was on professionals being worthy of users’ trust in that delivery.

Our analysis of The Rules differs from the earlier analyses of quantum and cloud computing. The Rules are not a computing paradigm; they are a paradigm for thinking about the impact of computing artifacts. The emphasis in The Rules also differs from that of a technical computing project: both users and professionals are invited to acknowledge their responsibilities in the production and use of computing artifacts. Yet some aspects of the earlier analyses, especially in the area of trust, are relevant to The Rules. In quantum computing, although the implementers of quantum algorithms are unlikely to meet or communicate with most users of those algorithms, the trust relationship will be forged through the medium of the algorithms themselves. The whole point of cloud computing is that the people who maintain the computing resources of cloud users are remote from the users of those resources. Humans are clearly crucial in the sociotechnical systems of cloud computing, but most of the relationships will be based on e-trust, not on face-to-face interactions. Trust issues are complex in these new computing paradigms, and it is our assertion that The Rules can inform a discussion of these issues.

The first part of this paper presents The Rules. The Rules document currently includes five rules that are intended to serve “as a normative guide for people who design, develop, deploy, evaluate or use computing artifacts.” Next we briefly examine a model of trust and the relationship between The Rules and society through the lens of trust. In other words, we examine how computing artifacts, and the sociotechnical systems of which they are a part, serve as a medium through which trust relationships are played out. Then we examine each rule vis-à-vis the sociotechnical system and trust. The existence and proliferation of computing artifacts and the growing sophistication of sociotechnical systems do not insulate users and developers from the need to trust and the obligation to be trustworthy. Instead, we are convinced that the power and complexity of these systems make us more dependent on trust relationships, not less. In the final section of the paper we illustrate this claim by applying The Rules to the paradigms of quantum and cloud computing, especially as they relate to issues of trust between developers and users within sociotechnical systems.

REFERENCES

Floridi, L. (2008). The method of levels of abstraction. Minds and Machines 18:303-329. doi:10.1007/s11023-008-9113-7.

Floridi, L. and Sanders, J. W. (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology 3:55-66.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology 10(2-3), September 2008. doi:10.1007/s10676-008-9163-9.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2011). Developing artificial agents worthy of trust: Would you buy a car from this artificial agent? Ethics and Information Technology, forthcoming.

Joy, B. (2000). Why the future doesn’t need us. Wired 8(4), April 2000.

Nissenbaum, H. (2007). Computing and accountability. In J. Weckert (ed.), Computer Ethics. Aldershot, UK: Ashgate, pp. 273-280. Reprinted from Communications of the ACM 37 (1994): 37-40.

Taddeo, M. (2008). Modeling trust in artificial agents, a first step toward the analysis of e-trust. In Sixth European Conference on Computing and Philosophy, University for Science and Technology, Montpellier, France, 16-18 June 2008.

Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction 5(2), April-June 2009.

Weizenbaum, J. (1984). Computer Power and Human Reason: From Judgment to Calculation. New York: Penguin Books.

Wolf, M. J., Grodzinsky, F. and Miller, K. (2011). Artificial agents, cloud computing, and quantum computing: Applying Floridi’s method of levels of abstraction. In H. Demir (ed.), Luciano Floridi’s Philosophy of Technology: Critical Reflections. Springer, forthcoming.