Ethical issues related to robots are at the cutting edge of current debate. It suffices to mention the 2007 “Roboethics Roadmap” of the European Robotics Research Network (EURON), in which:
- The specificity of robotics is stressed and three “main positions” on roboethics are proposed: being disinterested in ethics, interested in short-term ethical questions, or involved with long-term ethical concerns;
- A roboethics taxonomy is presented, distinguishing humanoids, advanced production systems, adaptive robot servants, network and outdoor robotics, health care and life-quality robots, military robotics, and edutainment robotics;
- The particularity of the problems arising in each field is pointed out: when dealing with humanoids, for example, we must tackle the reliability of their internal evaluation systems, the unpredictability of their behavior, and the traceability of evaluation and action procedures, as well as matters of safety and security. While “wrong action can lead to dangerous situations for living beings and the environment, (…) ill-intentioned people [could] modify the robot’s behavior in dangerous and fraudulent ways.” (EURON Roboethics Roadmap, 7.1.4)
In order to analyze some of the ethics-related issues in today’s debate on robotics – foremost, liability and agency – this paper proceeds in five sections.
First of all, I suggest adopting a legal perspective: leaving aside Leibniz’s seminal remarks on machines and the law, there is in fact a long and well-established tradition on this topic. The “law of automata” was a very popular subject among German scholars in the late 1800s: see, for instance, Günther’s Das Automatenrecht (1891), Schiller’s Rechtsverhältnisse des Automaten (1898), and Neumond’s Der Automat (1899). Hence, this tradition provides a common framework that is a good starting point for dealing with robots and ethics. Since “there is no single generally accepted moral theory, and only a few generally accepted morals (…), the legal framework provides a system for understanding agency and responsibility, so we will not need to wait for a final resolution of which moral theory is ‘right’ or what moral agency ‘really is’ in order to begin to address the ethical issues currently facing robotics.” (Asaro 2007, p. 2)
In the second section, I examine a more recent proposal that applies elements of ancient Roman legislation on slaves to autonomous agents such as robots.
Following Andrew Katz (2008), the analogy is highly instructive: “Like a slave, an autonomous agent has no rights or duties itself. Like a slave, it is capable of making decisions which will affect the rights (and, in later law) the liability of its master. By facilitating commercial transactions, autonomous agents have the ability to increase market efficiency [via their peculium]. Like a slave, an autonomous agent is capable of doing harm.” (op. cit., p. 3)
In section 3, I stress some flaws of the analogy and, more particularly, of the parallel between the status of a slave as a ‘thing’ and the view that robots are ‘things.’ On the one hand, ancient Roman law is far more complex than current experts in robotics generally present it. On the other hand, the analogy risks an anthropocentric standpoint that falls short of coping with today’s ethics-related issues in robotics. Still, the analogy captures one important aspect: robots’ behavior should be considered in terms of alternative forms of legal responsibility for the behavior of others (e.g., tort law and vicarious liability in the common law tradition, and its civil law counterpart, objective responsibility).
In section 4, I explain why this new form of liability, pace Asaro, suggests that we rethink the traditional legal framework in light of the current debate in ethics. A good standpoint is offered by Floridi and Sanders’ (2004) remarks on “the morality of artificial agents,” which help to properly define the idea of agency and to separate the concerns of the morality and the responsibility of agents. Moreover, the specificity of both the ethical and the legal issues concerning robotics is examined in connection with Bynum’s (2006) general account of the nature of information ethics and the idea that what is good or bad, even in robots’ behavior, can be defined as anything that improves or damages the informational nature of the universe.
The conclusion is that the analogy with Roman slaves should be used cautiously, for both legal and ethical reasons: while some scholars (Moravec 1999) have already suggested a sort of twenty-first-century Hegelian master-slave dialectic, it is important to understand the uniqueness of the problems we are going to face in terms of moral agency and legal responsibility for the behavior of others.
Asaro, P. (2007), Robots and Responsibility from a Legal Perspective, Proceedings of the IEEE Conference on Robotics and Automation, Workshop on Roboethics, Rome, April 14, 2007.
Bynum, T. W. (2006), Flourishing Ethics, Ethics and Information Technology, 8, pp. 157-173.
EURON Roboethics Roadmap (2007), available at http://www.roboethics.org/icra2007/contributions/VERUGGIO%20Roboethics%20Roadmap%20Rel.1.2.pdf
Floridi, L., and Sanders, J. W. (2004), On the Morality of Artificial Agents, Minds and Machines, 14, 3, pp. 349-379.
Katz, A. (2008), Intelligent Agents and Internet Commerce in Ancient Rome, Society for Computers and Law, published online 15 October 2008.
Moravec, H. (1999), Robot: Mere Machine to Transcendent Mind, New York: Oxford University Press.