Why Computers Will (Necessarily) Deceive Us and Each Other

AUTHOR
Cristiano Castelfranchi

ABSTRACT

The main claim of this paper is that in human-computer interaction, computer-supported cooperation and organisation, computer-mediated commerce, intelligent databases, teams of robots, etc., there will be purposively deceiving computers. In particular, within the Agent-based paradigm – currently dominating computer science, and especially AI – we will have “intelligent deceiving agents”.

Not only will we have malicious agents with malicious motives or working for malicious owners, but also agents deceiving us or others for good reasons or in our own interest. For example:

  • Information systems will have to misinform an unauthorized user (be it a human or a software agent) in order to protect confidential information. This is a well-known problem in the field of databases, where the concept of “multi-level security” – requiring deliberately wrong answers and cover stories – is well established (Wagner, 1997); a toy sketch of such a cover-story policy follows this list.
  • In electronic commerce we will have agents that do our bidding for us from a self-interested perspective (Ephrati and Rosenschein, 1991). When our agent goes to buy something in an online auction, we do not want it to honestly bid the value we actually place on the good if it could potentially get that good for less money. This is necessary in any ordinary bargaining, which must be deceptive (Vulkan, 1998); a second sketch after this list illustrates such bid shading.
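
The first point can be made concrete with a minimal Python sketch of a cover-story policy: an unauthorized requester receives neither the secret nor a tell-tale refusal, but a deliberately wrong answer. Everything in the sketch (the record, the clearance levels, the cover story) is invented for illustration and is not taken from Wagner (1997).

```python
# Toy "multi-level security" query handler: an unauthorized requester gets a
# deliberately wrong but plausible cover story rather than a refusal, since a
# refusal would itself reveal that something is being hidden.
# Records, clearance levels, and cover stories are invented for illustration.

from dataclasses import dataclass

CLEARANCE = {"public": 0, "staff": 1, "officer": 2}

@dataclass
class Record:
    content: str        # the true, classified answer
    min_level: str      # minimum clearance needed to see the truth
    cover_story: str    # the lie served to everyone below that level

DB = {
    "cargo_of_flight_102": Record(
        content="weapons shipment to base X",
        min_level="officer",
        cover_story="agricultural machinery",
    ),
}

def answer(query, requester_level):
    record = DB.get(query)
    if record is None:
        return "no such item"
    if CLEARANCE[requester_level] >= CLEARANCE[record.min_level]:
        return record.content
    return record.cover_story   # misinforming by design, to protect the secret

print(answer("cargo_of_flight_102", "staff"))    # -> 'agricultural machinery'
print(answer("cargo_of_flight_102", "officer"))  # -> 'weapons shipment to base X'
```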

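The second sketch is a toy self-interested bidding agent that keeps its true valuation secret and shades its bid downward. The shading rule (the risk-neutral equilibrium bid for a first-price, sealed-bid auction with n symmetric bidders and uniformly distributed valuations) and all the numbers are standard textbook assumptions, not taken from Ephrati and Rosenschein (1991) or Vulkan (1998).

```python
# Toy self-interested bidding agent for a first-price, sealed-bid auction:
# it keeps its true valuation secret and bids less than it, following the
# standard risk-neutral equilibrium rule b(v) = v * (n - 1) / n for n bidders
# with independent, uniformly distributed valuations (a textbook assumption).

def shaded_bid(true_valuation, n_bidders):
    """Bid a fraction (n-1)/n of the true valuation instead of revealing it."""
    assert n_bidders >= 2, "bid shading only makes sense with competition"
    return true_valuation * (n_bidders - 1) / n_bidders

my_valuation = 100.0   # what the good is really worth to us (never disclosed)
print(shaded_bid(my_valuation, n_bidders=2))    # -> 50.0 (aggressive shading)
print(shaded_bid(my_valuation, n_bidders=10))   # -> 90.0 (more competition, less shading)
```
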
(Of course, there will also be fraudulent and malicious agents, for damaging competitors or for stealing information or money.)

Moreover, Electronic Commerce is being developed mainly from the sellers’ perspective and to their advantage. Consumers’ interests are less considered, while – on the contrary – Agents could help to empower consumers and to reduce the handicap due to asymmetric information in the market.

  • Also, our personal assistant should probably deceive us when trying to influence us to do the right thing, protecting our interests against our short-term preferences or rationality biases. In the same vein, a medical doctor is reticent about drug side-effects in order to avoid discouraging the patient (de Rosis et al., in press), or a risk-prevention message does not stress the fallibility of the remedy (for example, of a contraceptive method). In the near future, many of these recommendations will be given by software agents. Will they have the same paternalistic (and deceiving) attitude?

I will deal with the following issues: How and why will artificial agents try to deceive? I will provide some ontology of deception, lying, and secrecy, and I will discuss some reasons and strategies for directly or indirectly deceiving and lying. Is there any safeguard for us and our agents? What are the strategies for suspecting, detecting, and defending against cheaters? How can we design appropriate social mechanisms and/or protocols to eliminate or reduce the incentive to lie, cheat, and steal in artificial societies? Crucial in all of this is the importance of trust (in computers, in our agent, in other agents, in the infrastructure, in possible third parties and authorities) and of reputation. Should/will we have real social trust relations with these machines? Is this psychologically real and morally acceptable?
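
One classical answer to the question about mechanisms that remove the incentive to lie – in the same spirit as the Clarke Tax mechanism of Ephrati and Rosenschein (1991) cited in the references – is to make truth-telling the best strategy by construction. The toy second-price (Vickrey) auction below is a minimal sketch of this idea; the bidder names and valuations are invented.

```python
# Toy second-price (Vickrey) auction: the highest bidder wins but pays only the
# second-highest bid. Under this rule, reporting one's true valuation is a
# dominant strategy, so the protocol itself removes the incentive to lie.
# Bidder names and valuations below are invented for illustration.

def vickrey_auction(bids):
    """Return (winner, price_paid) for a dict of {bidder: bid}; needs >= 2 bids."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # the winner pays the second-highest bid, not its own
    return winner, price

bids = {"agent_A": 100.0, "agent_B": 80.0, "agent_C": 60.0}
print(vickrey_auction(bids))   # -> ('agent_A', 80.0)

# If agent_A underbid (say 70.0) it would lose a good it values at 100 for a
# price of 80; overbidding could only make it win at a price above its value.
# Honest bidding is therefore never worse than lying in this mechanism.
```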

Could Agents be designed to empower consumers? Not only are we in a new AI paradigm (Agents), but there is also a strong trend towards modelling emotions and personalities in agents (“believable agents”), and towards implementing a great deal of “sociality”, affect, adaptivity, and reactivity in HCI (Affective Computing; the Japanese “kansei” approach). On the other side, there are attempts to make those agents sensitive to, and able to reason about, norms, commitments, permissions, etc. Will this lead us to “responsible” artificial agents, able to feel guilty and to have moral feelings? Or at least to agents that are responsible because they are aware of norms and obligations and able to deal with them?

References

Castelfranchi, C. (1998). Modelling Social Action for AI Agents. Artificial Intelligence, 1998, 6.

Castelfranchi, C. and Conte, R. (1998). Limits of Economic Rationality for Agents and MAS. Robotics and Autonomous Systems, Special Issue on Multi-Agent Rationality, Elsevier, 1998, 3.

Castelfranchi, C., de Rosis, F. and Falcone, R. (1997). Social Attitudes and Personalities in Agents. In Socially Intelligent Agents, AAAI Fall Symposium Series 1997, MIT, Cambridge, Massachusetts, November 8-10, 1997.

Castelfranchi, C. and Falcone, R. (1998). Principles of Trust for Multi-Agent Systems: Cognitive Anatomy, Social Importance, and Quantification. In Proceedings of the International Conference on Multi-Agent Systems – ICMAS’98, Paris, 2-8 July 1998, AAAI/MIT Press.

Castelfranchi, C. and Tan, Y.H. (eds.). Trust, Deception and Fraud in Artificial Societies. Kluwer, in press.

Conte, R. and Castelfranchi, C. (1995). Cognitive and Social Action. London: UCL Press.

Conte, R. and Castelfranchi, C. (1998). From Conventions to Prescriptions: Towards a Unified Theory of Norms. AI & Law, 1998, 3.

de Rosis, F., Grasso, F. and Berry, D. Refining Medical Explanation Generation After Evaluation. Artificial Intelligence in Medicine, in press.

Ephrati, E. and Rosenschein, J. (1991). The Clarke Tax as a Consensus Mechanism Among Automated Agents. In Proceedings of AAAI-91, pp. 173-178.

Wagner, G. (1997). Multi-Level Security in Multiagent Systems. In P. Kandzia and M. Klusch (eds.), Cooperative Information Agents, Springer LNAI 1202, pp. 272-285.