Gaetano Aurelio Lanzarone and Federico Gobbo
Attempting to formalize ethical knowledge and reasoning serves two purposes: understanding human ethics and designing computer ethics. While the former is descriptive, subject to the intricacies of human behaviour and scarcely amenable to systematic experimentation, the latter is prescriptive, can be experimented with under few limitations, and concerns ‘ideal’ ethical behaviour, which restricts the class of models we are interested in. The need to instill ethical guidance into artificial agents, beyond its speculative interest, stems from the practical problems raised by the building of (semi-)autonomous intelligent robots, to be deployed not only in special environments inaccessible to humans but also living within the human environment. A logical and computational formalization of ethics could be useful both for artificial agents and for the human agents designing them.
Roughly, two main approaches are possible. In the ‘axiomatic’ approach, a set of rules is established, from which the agent derives ethical behaviour. In the ‘situated’ approach, the agent is immersed in an environment from which it informally absorbs good behaviour. Both approaches have advantages and pitfalls. In the former, the human desiderata can be expressed explicitly, but their context-dependent application is far from guaranteed; the dangers of hard-coding behavioural rules are well known. In the latter, creating ethical robots that learn from scratch is difficult, since such learning does not scale up from simple to more complex capabilities; moreover, ethical rules cannot be dispensed with entirely, lest unprincipled behaviours emerge. The two approaches can, however, usefully coexist, which suggests adopting a mixed approach.
Several authors (e.g. Roger Clarke) have discussed, by way of a thought experiment, Asimov’s Laws of Robotics as a first attempt at programming robot ethics, and examined how Asimov’s robot stories explore the implications of these laws, concluding that serious doubts arise about the possibility of devising a set of rules that provides reliable control over machines. Two main problems have been identified. The first concerns ambiguities in the language used, so that the robot does what it was told, but not what was intended. For example, the definition of injury in the First Law is ambiguous: the robot must take psychological injury into account as well as physical injury. The second problem concerns conflicts among the laws and within a single law; prioritizing the ethical rules may lead to exceptions being invoked because one value is deemed more important than another. Thus, while a rule has to be followed prima facie, exceptions are rules that resolve collisions between values in practical circumstances. For example, telling the truth is right and deception is wrong, except when lying is acceptable behaviour, e.g. lying to avoid causing another person harm. Exceptions to exceptions can also arise: for example, hurting other people is wrong, except when acting in self-defense, provided the self-defensive reaction is not disproportionate.
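The structure of prima facie rules, exceptions, and exceptions to exceptions can be illustrated computationally. The following is a minimal sketch, not taken from any formal framework in the paper: rules are ordered from most specific to most general, so that a deeper exception overrides the rule it qualifies. All rule names and the situation encoding are illustrative assumptions.

```python
# Sketch of prima facie rules with exceptions (and exceptions to exceptions):
# the most specific matching rule decides the verdict.
# Rule conditions and situation keys are illustrative assumptions.

def permissible(action, situation):
    """Apply the most specific matching rule: deeper exceptions win."""
    # Ordered from most specific (exception to exception) to most general.
    rules = [
        # Exception to the exception: disproportionate self-defense is wrong.
        (lambda a, s: a == "hurt" and s.get("self_defense")
         and s.get("disproportionate"), False),
        # Exception: hurting in (proportionate) self-defense is acceptable.
        (lambda a, s: a == "hurt" and s.get("self_defense"), True),
        # General rule: hurting other people is wrong.
        (lambda a, s: a == "hurt", False),
        # Exception: lying to avoid harming another person is acceptable.
        (lambda a, s: a == "lie" and s.get("avoids_harm"), True),
        # General rule: deception is wrong.
        (lambda a, s: a == "lie", False),
    ]
    for condition, verdict in rules:
        if condition(action, situation):
            return verdict
    return True  # default: actions covered by no rule are unconstrained

print(permissible("lie", {"avoids_harm": True}))    # True
print(permissible("hurt", {"self_defense": True,
                           "disproportionate": True}))  # False
```

The fixed specificity ordering is what makes the collision between values tractable here; the hard part, discussed below, is deciding whether a concrete situation actually matches a rule's open-textured antecedent.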
Artificial Intelligence has developed a rich set of methods for knowledge representation and reasoning, which can be adopted in ethics. The paper will examine some of them and discuss their application to mitigating the problems encountered in the axiomatic approach. The method we discuss combines ethical rules with empirical knowledge acquired through concrete experience. Since obeying ethical rules is similar to abiding by laws, we consider an approach developed for the interpretation of open-textured terms in legal rules, based on precedent cases, and an extension of this approach based on analogical reasoning, abstraction hierarchies and reasoning by default with exceptions (see references below).
Explanation-Based Learning (EBL) is a machine-learning technique which creates generalizations of given examples on the basis of background domain knowledge. We take EBL’s domain knowledge as corresponding to ethical rules, and EBL’s training examples as corresponding to precedent cases. By letting the interpretation of vague terms be guided by precedents, we use EBL as an effective process for linking terms that appear as open-textured concepts in ethical rules with terms that appear as ordinary-language wording stating the facts of a previous experience. As extensions to standard EBL, we consider precedent cases supplemented by additional heuristics, represented by abstraction hierarchies with constraints and exceptions.
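The EBL correspondence can be sketched as follows. This is a minimal illustration under assumed predicate names, not the paper's actual formalization: a domain theory defines abstract (open-textured) terms in terms of operational facts, a precedent case supplies those facts, and the explanation recovers which operational facts justify the abstract term.

```python
# Sketch of EBL-style explanation: the domain theory plays the role of
# ethical rules, a precedent case is the training example, and the
# explanation links an open-textured term ('injury') to case facts.
# Theory, case facts, and predicate names are illustrative assumptions.

# Each abstract term is defined by alternative conjunctions of subterms;
# terms with no definition are operational (directly observable facts).
theory = {
    "injury": [["physical_harm"], ["psychological_harm"]],
    "psychological_harm": [["deception", "emotional_distress"]],
}

def explain(goal, case_facts):
    """Return the operational facts explaining why the goal holds, or None."""
    if goal in case_facts:                 # operational: matches a case fact
        return [goal]
    for body in theory.get(goal, []):      # try each alternative definition
        leaves = []
        for sub in body:
            sub_leaves = explain(sub, case_facts)
            if sub_leaves is None:
                break                      # this conjunction fails
            leaves += sub_leaves
        else:
            return leaves                  # all conjuncts explained
    return None

# Precedent case: a robot's lie caused distress; no physical harm occurred.
precedent = {"deception", "emotional_distress"}
print(explain("injury", precedent))   # ['deception', 'emotional_distress']
# Generalization: any case exhibiting these operational facts also
# satisfies the open-textured term 'injury' in the First Law.
```

The returned leaves are exactly the link the text describes: rule-level vocabulary on one side, ordinary-language case facts on the other.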
The axiomatic and the situated approaches are thus reconciled. On the one hand, without empirical experience, rule-based ethical systems cannot determine whether open-textured terms in rule antecedents match the current situation to be decided and acted upon. On the other hand, without the guidance of general rules, precedent cases are only fragmented knowledge, unsuitable for transfer to new situations; analogical reasoning and abstraction principles are needed to fill in the knowledge gaps, for example, by noting similarities and considering more general classes encompassing the concepts of both rules and cases.
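The role of abstraction in such matching can be sketched with a small is-a hierarchy. All terms below are illustrative assumptions, not drawn from the paper's formal framework: a case term matches a rule term analogically when the two share a common abstraction in the hierarchy.

```python
# Sketch of analogical matching through an abstraction (is-a) hierarchy:
# a case's ordinary-language term matches a rule's open-textured term
# when they share a common ancestor concept. All terms are assumptions.

isa = {                       # child -> parent
    "white_lie": "lie",
    "lie": "deception",
    "forgery": "deception",
    "deception": "wrongful_act",
}

def ancestors(term):
    """The term itself plus all its abstractions, walking up the hierarchy."""
    chain = [term]
    while term in isa:
        term = isa[term]
        chain.append(term)
    return chain

def analogous(case_term, rule_term):
    """Terms match analogically when they share a common abstraction;
    direct is-a subsumption is the special case where the shared
    abstraction is the rule term itself."""
    return bool(set(ancestors(case_term)) & set(ancestors(rule_term)))

print(analogous("white_lie", "deception"))  # True: white_lie is-a deception
print(analogous("forgery", "lie"))          # True: both are kinds of deception
print(analogous("forgery", "injury"))       # False: no common abstraction
```

In a fuller treatment the hierarchy would carry the constraints and exceptions mentioned above, so that an abstraction step can itself be blocked in particular circumstances.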
A provisional conclusion of the paper is that, while computer ethics does not seem amenable to finite axiomatization and is therefore uncomputable in general, at least part of it could be computed by supplementing ethical rules with empirical experience. By studying how far the frontier of the computable part of computer ethics can be pushed, the nature of the uncomputable ‘residue’ (as Alan Turing called it) might become clearer.
The paper will give illustrations and examples of the aforementioned techniques, together with relevant citations of other authors’ work.
Costantini S., Lanzarone G.A., Explanation-Based Interpretation of Open-Textured Concepts in Logical Models of Legislation, Artificial Intelligence and Law, vol. 3, n. 3, Kluwer Academic Publishers, 1995, 191-208.
Costantini S., Lanzarone G.A., Metalevel Representation of Analogical Inference, in: Ardizzone E., Gaglio S., Sorbello F. (editors), Trends in Artificial Intelligence, Lecture Notes in Artificial Intelligence n. 549, Springer-Verlag, 1991, 460-464.
Costantini S., Lanzarone G.A., Sbarbaro L., A Formal Definition and a Sound Implementation of Analogical Reasoning in Logic Programming, Annals of Mathematics and Artificial Intelligence, 14, 1995, 17-36.