Toward a Common Sense Ethics for Discreet Agents

AUTHOR
Jean-Gabriel GANASCIA

ABSTRACT

A few months ago, in March 2006, at the AAAI Stanford Spring Symposium entitled “What Went Wrong and Why: Lessons from AI Research and Applications”, there was a session dedicated to intelligent agents. One of the talks presented experiments with “elves”, personal agents that act as efficient secretaries and help individuals manage their agendas, fix appointments, find rooms for meetings, organize travel, etc. The talk reported technical successes, but also difficulties with inappropriate agent behaviours. For instance, one day, or more precisely one night, an elf rang its master at 3 a.m. to inform him that his 10 o’clock plane had been delayed. Another was unable to understand that its master was not to be disturbed by anybody, because he had to complete an important project… Many of these inappropriate behaviours make intelligent agents tiresome and distressing. Our goal is to contribute to the design of clever and discreet agents that act with discernment and judgement, by formalising ethical rules of behaviour in non-monotonic logics.

Multiple Principles

In the past, there have been many attempts to build computational ethics, i.e. procedures defining ethics for artificial agents or robots (Anthony Aaby 2005, Luciano Floridi and Jeff Sanders 2004). More precisely, computational ethics models ethical systems with programs and simulates decision procedures on physical information systems, i.e. on computers. Inspired by Asimov’s short story “Runaround”, written in 1942 (Isaac Asimov 2004), the ethics of artificial agents studies the rules by which robots must govern their behaviour for it to be ethically admissible. For instance, web agents have to respect privacy; agents in hospitals have to respect patients and their pain; etc.

However, one of the difficulties we face when writing rules of behaviour for intelligent agents is that the requirements are numerous and sometimes contradictory. For instance, we want personal robots to act like faithful dogs that defend and help their masters. Simultaneously, we need to protect our privacy by restricting access to personal data. But we also demand that robots behave ethically, i.e. that they tell the truth whenever someone asks them and do not increase information entropy by divulging wrong information. These three requirements are somewhat contradictory, since the security of people demands total transparency, while personal servants sometimes have to lie to protect their masters’ privacy.

As a consequence, agents that are meant to be discreet have to obey multiple, independent principles that may turn out to be contradictory. But it is difficult to manage inconsistent rules of behaviour automatically and to find, in each situation, the rule that is best adapted to it. The notion of “common sense reasoning” was developed in artificial intelligence to face a similar problem. Therefore, our aim is to propose a “common sense ethics” based on “common sense reasoning”.

Common Sense Ethics

One of the main problems logic-based artificial intelligence has to deal with is reconciling the specificity of singular cases with general rules. Depending on the domain of application, this problem goes by different names: the “frame problem”, “common sense reasoning”, etc. When it arises in solving ethical dilemmas, I propose to call it “common sense ethics”. To be more precise, let us take an example related to an ethical question.

A general ethical principle is that we always have to tell the truth. But a more specific one says that you do not have to tell the truth to someone who does not deserve it. For instance, imagine that you had been living in France during the Second World War, under the Occupation, and that you were hiding a friend, wanted by the French militia or the Gestapo, in your home. If you were asked where your friend was hiding, would you obey the general rule that commands telling the truth, and denounce him to the authorities? We call “common sense ethics” a system of conflicting ethical rules in which the most specific rule has to apply to the current situation. For instance, in the case of lying, a first rule commands telling the truth to everybody, while a second orders not telling the truth to a person who does not deserve it. However, read as classical logic, such a system is contradictory, since the general rule may be applied in every situation, including those covered by the exception.
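
To make the contradiction explicit, here is a minimal sketch of the two rules in classical first-order notation; the predicate names asked(x), deserves(x) and tell(x) are hypothetical, introduced only for illustration:

\[
\forall x\,\big(\mathit{asked}(x) \rightarrow \mathit{tell}(x)\big) \qquad \text{(general rule)}
\]
\[
\forall x\,\big(\mathit{asked}(x) \land \lnot\mathit{deserves}(x) \rightarrow \lnot\mathit{tell}(x)\big) \qquad \text{(specific rule)}
\]

From the facts asked(militia) and ¬deserves(militia), both tell(militia) and ¬tell(militia) follow, so the theory is inconsistent; some mechanism is needed to let the specific rule silently override the general one.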

Modelling Common Sense Ethics with Artificial Intelligence

Modern logic-based artificial intelligence techniques have been developed to solve this kind of problem within a logical framework. More precisely, their goal is to satisfy rules as long as they do not lead to contradictions, while being able, in case of contradiction, to cancel the effects of the inconsistent rules.

In the past, many artificial intelligence researchers have tried to simulate non-monotonic reasoning, i.e. reasoning based on general rules that admit exceptions. Several formalisms have been developed, for instance default logic (Raymond Reiter 1980), circumscription (John McCarthy 1980), non-monotonic logics (Drew McDermott and Jon Doyle 1980), truth maintenance systems, etc. However, most of the mechanical solvers based on those formalisms were very inefficient. Recently, an efficient and general formalism called Answer Set Programming (ASP) (Chitta Baral 2003) has been developed to simulate non-monotonic reasoning. It was designed to unify the previous non-monotonic formalisms.
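
As an illustration, and keeping the hypothetical vocabulary introduced above, the truth-telling principle can be written in Reiter’s default logic as a default whose justification is blocked by the more specific, strict rule:

\[
\frac{\mathit{asked}(x) : \mathit{tell}(x)}{\mathit{tell}(x)}
\qquad \text{together with} \qquad
\mathit{asked}(x) \land \lnot\mathit{deserves}(x) \rightarrow \lnot\mathit{tell}(x)
\]

For the friend, nothing contradicts tell(friend), so the default fires; for the militia, the strict rule yields ¬tell(militia), the justification tell(militia) becomes inconsistent, and the default is blocked. The resulting extension is consistent, which is exactly the behaviour the system of conflicting rules was meant to have.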

Our purpose in this paper is to show how non-monotonic logic may model “common sense ethics” for intelligent agents. We present the way ASP, which simulates default reasoning, can provide a clear formalisation of how the multiple principles of a “common sense ethics” are managed in order to settle particular cases. Such a formalisation may be useful for designing discreet intelligent agents; it would then be of practical use. But it could also be of interest for specifying computational ethics precisely. Lastly, it is a first step toward a clear formalisation of human ethical rules.
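
As a first hint of what such a formalisation looks like, here is a minimal sketch in clingo-style ASP syntax; the predicate names asked, deserves, tell and ab are illustrative choices, not a fixed part of the proposal:

    % General principle: tell the truth to whoever asks,
    % unless the case is known to be abnormal (default rule).
    tell(X) :- asked(X), not ab(X).

    % More specific principle: someone who does not deserve the
    % truth is an abnormal case and must not be told it.
    ab(X)    :- asked(X), not deserves(X).
    -tell(X) :- asked(X), not deserves(X).

    % The wartime situation: the militia and a friend both ask,
    % but only the friend deserves the truth.
    asked(militia). asked(friend).
    deserves(friend).

The unique answer set contains tell(friend), ab(militia) and -tell(militia): the specific rule overrides the general one without making the program inconsistent, which is the behaviour a common sense ethics requires.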

REFERENCES

Anthony Aaby, Computational Ethics, technical report, 2005.

Isaac Asimov, I, Robot, Spectra, New York, NY, 2004.

Chitta Baral, Knowledge Representation, Reasoning and Declarative Problem Solving, Cambridge University Press, 2003.

Luciano Floridi, Jeff Sanders, On the Morality of Artificial Agents, Minds and Machines, 14(3), 2004, pp. 349-379.

John McCarthy, Circumscription: A Form of Non-Monotonic Reasoning, Artificial Intelligence, 13, 1980, pp. 27-39, 171-172.

Drew McDermott, Jon Doyle, Non-Monotonic Logic I, Artificial Intelligence, 13, 1980, pp. 41-72.

Raymond Reiter, A Logic for Default Reasoning, Artificial Intelligence, 13, 1980, pp. 81-132.