The Adventures of Picciotto Roboto: AI & Ethics in Criminal Law

AUTHOR
Prof. Ugo Pagallo

ABSTRACT

In their 2007 Ethicomp paper, Reynolds and Ishikawa proposed three possible examples of criminal robots:

I) Their first hypothesis was “Picciotto Roboto.” The field concerns robotic security guards, such as the Sohgo Security Service’s Guardrobo, marketed since 2005. The case involves a security robot participating in a criminal enterprise such as a bank robbery. “As such, it seems that the robot is just an instrument just as the factory which produces illegal products might be. The robot in this case should not be arrested, but perhaps impounded and auctioned” (Reynolds and Ishikawa, 2007);

II) The second scenario is that of the “Robot Kleptomaniac.” Here, the machine has free will and self-chosen goals, so that it plans a series of robberies of batteries from local convenience stores with the aim of recharging itself. Leaving aside the responsibilities of the designers and producers of such robots, it is possible to claim that the unlawful conduct of the robot depends on – and is justifiable on the basis of – what is mandatory for its survival. In any event, “the robot ultimately chooses and carries out the crime” (Reynolds and Ishikawa, 2007);

III) The final hypothesis is no longer a matter of imagination: the Robot Falsifier. In the mid-1990s, the Legal Tender project claimed that remote viewers could tele-operate a robotic system to physically alter “purportedly authentic US $ 1000 bills” (Goldberg et al., 1996).

Interestingly, in How Just Could a Robot War Be?, Peter Asaro takes seriously the hypothesis of the “Robot Kleptomaniac,” envisaging autonomous robots that challenge national sovereignty, produce accidental wars or even make revolutions. In fact, once we admit the existence of a robot that chooses and carries out the criminal action, it necessarily follows that “autonomous technological systems might act in unanticipated ways that are interpreted as acts of war” and, moreover, that they may “begin to act on their own intentions and against the intentions of the states who design and use them” (Asaro, 2008). As a result, new types of crime could emerge with robots accountable for their own actions: for example, in Criminal Liability and ‘Smart’ Environments (2010), Mireille Hildebrandt examines a machine that “provides reasons for its behaviours [in that] it has developed second order beliefs about its actions that enable itself as their author.” The self-consciousness of the robot not only materializes sci-fi scenarios such as a robot revolution led by a new cyber-Spartacus. What is more, in the phrasing of James Moor (1985), the “logical malleability” of robots would end up changing the meaning of traditional notions such as stealing and assaulting, because the culpability of the agent, i.e., its mens rea, would be rooted in the artificial mind of a machine “capable of a measure of empathy” and “a type of autonomy that affords intentional actions” (Hildebrandt, 2010).

Today’s state of the art in technology, however, suggests going back to the case of “Picciotto Roboto” rather than insisting on the adventures of the “Robot Kleptomaniac.” Although “many authors point out that smart robots already invoke a mutual double anticipation, for instance generating protective feelings for Sony’s robot pet AIBO” (Hildebrandt, 2010), it seems more profitable to revert to the terra cognita of common legal standpoints that exclude the criminal accountability of robots. For the foreseeable future, indeed, robots will be held legally and morally irresponsible because they lack the preconditions for attributing liability to someone who violates criminal laws. Since consciousness is a conceptual prerequisite for both legal and “moral agency” (Himma, 2007), the standard legal viewpoint claims that even when, say, Robbie CX30 killed Bart Matthews in Richard Epstein’s The Case of the Killer Robot (1997), the homicide remains a matter of human responsibility, because robots are not aware of their own conduct, nor do they ‘wish’ to act in a certain way. Whether the fault lies with the Silicon Valley programmer indicted for manslaughter or with the company, Silicon Techtronics, which promised to deliver a safe robot, it would be meaningless to put poor Robbie on trial for murder.

Still, there is no need to evaluate robots with Turing tests in order to acknowledge a new generation of criminal cases involving human (legal and moral) responsibility and even robots’ moral accountability (as in Floridi and Sanders, 2004). To highlight this transformation, it is crucial to address the new responsibilities for Picciotto Robotos that participate in, or are employed in, criminal enterprises, in that robots affect standard legal notions such as ‘causality’ and human ‘culpability.’ As the field of computer crimes has shown since the early 1990s, robots induce a “policy vacuum” (Moor, 1985), for the increasing autonomy and even unpredictability of their behaviour alter the conditions on which the principle of human responsibility is traditionally grounded. Some speak of a “failure of causation” due to the impossibility of attributing responsibility on the grounds of “reasonable foreseeability,” since it would be hard to predict what types of harm may supervene (Karnow, 1996). Others stress the “strong moral responsibilities” that software programmers and engineers now have for the design of AAAs, i.e., autonomous artificial agents (Grodzinsky, Miller and Wolf, 2008). Besides a new generation of cases, such as a “semiautomatic robotic cannon deployed by the South African army [which] malfunctioned, killing 9 soldiers and wounding 14 others” in October 2007 (Wallach and Allen, 2009), it is necessary to address both the legal and the ethical issues of this deep transformation, by paying attention to the ways responsibility should be apportioned among the designers, producers, and users of increasingly smarter AAAs.

REFERENCES

Asaro, P. (2008), How just could a robot war be?, Frontiers in Artificial Intelligence and Applications, 175: 50-64;

Epstein, R. G. (1997), The case of the killer robot, New York, Wiley;

Floridi, L., and Sanders, J. (2004), On the morality of artificial agents, Minds and Machines, 14(3): 349-379;

Goldberg, K., Paulos, E., Canny, J., Donath, J. and Pauline, M. (1996), Legal tender, ACM SIGGRAPH 96 Visual Proceedings, August 4-9, New York, ACM Press, pp. 43-44;

Grodzinsky, F. S., Miller, K. A., and Wolf, M. J. (2008), The ethics of designing artificial agents, Ethics and Information Technology, 10: 115-121;

Hildebrandt, M. (2010), Criminal liability and ‘smart’ environments, paper presented at the Conference on the Philosophical Foundations of Criminal Law, Rutgers-Newark, August 2009;

Himma, K. E. (2007), Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?, 2007 Ethicomp Proceedings, Global e-SCM Research Center & Meiji University, pp. 236-245;

Moor, J. (1985), What is computer ethics?, Metaphilosophy, 16(4): 266-275;

Reynolds, C. and Ishikawa, M. (2007), Robotic thugs, 2007 Ethicomp Proceedings, Global e-SCM Research Center & Meiji University, pp. 487-492;

Wallach, W. and Allen, C. (2009), Moral machines: teaching robots right from wrong, New York, Oxford University Press.