Toward a Model of Trust and E-Trust Processes Using Object-Oriented Methodologies

AUTHOR
FS Grodzinsky, K Miller and MJ Wolf

ABSTRACT

Introduction:

This paper explores phenomena that have been discussed in the literature as “trust,” “e-trust,” and “reliance.” The ambiguous nomenclature confuses the discussion and is detrimental to dialogue about important issues involving these terms. Our goal is modest: to devise a model that helps us describe more precisely, and analyze more carefully, scenarios involving trust. We isolate two aspects of trust or, as we prefer, of processes related to trust; use them to arrive at a working definition of trust; and then present an object-oriented model that delineates trust and e-trust in a superclass/subclass hierarchy.

Two Aspects of Trust:

To focus our exploration of trust processes, we first isolate two aspects of any such process: a means of communication and the identity of the agents involved.

1. Means of communication: We will focus on two general modes of communication involved in trust processes. We do not claim that these are the only possible forms of communication, or that ours is the only way to classify them; but these two “modes” are convenient for our discussion of trust and e-trust.

We define a “communication” as any transfer of information between A and B. With this minimal definition we deliberately avoid arguing about the effect that arriving information has on an agent. Such arguments might be important, but we simplify our discussion by not debating how humans and artificial agents “communicate” beyond noting the means of conveying information between A and B.

Communication mode #1: Communications mediated by telecommunications and computing. These include telephone calls, email, instant messaging, Skype conversations, blogging, electronic bulletin boards, and so forth. We will call these Ecommunication.

Communication mode #2: Communications that require physical proximity, including talking, touching, and sign language. We will collect all these kinds of communication under the label Pcommunication.

2. Human or Artificial Agents? The two entities A and B could be human or artificial. We will call silicon-based, computer-controlled interactive entities AAs, for “artificial agents.” For the first part of our discussion we will limit ourselves to single agents A and B. Later we will generalize to allow either A or B to be a group of entities.

A Working Definition of Trust:

Taddeo (2009) analyzes different definitions of trust and e-trust and discusses definitional problems that remain. Despite the open questions that Taddeo identifies, we require at least an outline of trust and e-trust in order to advise software developers involved in AA projects about issues of trust. Building on Taddeo’s analysis, we assert the following principles about trust and e-trust. First, about trust:

  • Trust is a relation between A (the trustor) and B (the trustee). A and B can be human or artificial.
  • Trust is a decision by A to delegate to B some aspect of importance to A in achieving a goal. We assume that an artificial agent A can include “decisions” (implemented by, for example, IF/THEN/ELSE statements). These decisions involve some computation about the probability that B will behave as expected.
  • Trust involves risk; the less information A has about B, the higher the risk and the more trust is required. This is true for both artificial and human agents. In AAs, we expect that risk and trust are quantified or at least categorized explicitly; in humans, we do not expect that this proportionality is measured with mathematical precision.
  • A has the expectation of gain by trusting B. In AAs, “expectation of gain” may refer to the expectation of the AA’s designer, or it may refer to an explicit expression in the source code that identifies this expected gain, or both.
  • B may or may not be aware that A trusts B. If B is human, circumstances may have prevented B from knowing that A trusts B. The same is true if B is an AA, but there is also some possibility that an AA trustee B may not even be capable of “knowing” in the traditional human sense.
  • Positive outcomes when A trusts B encourage A to continue trusting B. If A is an AA, this cycle (trust, then a good outcome, then more trust) could be explicit in the design and implementation of the AA, or it could be implicit in data relationships, as in a neural net. (A minimal sketch of such a decision-and-reinforcement loop follows this list.)
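
To make these principles concrete, here is a minimal sketch, in Python, of how an AA trustor A might implement both the decision to delegate and the trust-reinforcement cycle. The class, the expected-value rule, and the 0.1 update step are illustrative assumptions of ours, not a design drawn from the paper.

```python
# Hypothetical sketch of an artificial trustor A deciding whether to
# delegate to a trustee B. Names, thresholds, and the update rule are
# illustrative assumptions.

class ArtificialTrustor:
    """An AA that decides whether to delegate a task to a trustee B."""

    def __init__(self, threshold: float = 0.5):
        # A's estimated probability that B behaves as expected, given
        # the information A currently has about B.
        self.p_expected = 0.5
        self.threshold = threshold

    def risk(self) -> float:
        # Less information (lower confidence) about B means higher
        # risk, so delegating demands more trust.
        return 1.0 - self.p_expected

    def decide_to_trust(self, gain: float, loss: float) -> bool:
        # The "decision" as an IF/THEN/ELSE-style test: delegate when
        # the expected value of trusting B is positive and A's
        # confidence in B clears the threshold.
        expected_value = self.p_expected * gain - self.risk() * loss
        if expected_value > 0 and self.p_expected >= self.threshold:
            return True
        return False

    def observe_outcome(self, positive: bool) -> None:
        # The trust cycle: a positive outcome raises A's estimate of B,
        # encouraging continued trust; a negative outcome lowers it.
        step = 0.1
        if positive:
            self.p_expected = min(1.0, self.p_expected + step)
        else:
            self.p_expected = max(0.0, self.p_expected - step)

# Example: A trusts B, the outcome is good, and A's trust increases.
a = ArtificialTrustor(threshold=0.4)
print(a.decide_to_trust(gain=10.0, loss=5.0))  # True
a.observe_outcome(positive=True)
print(a.p_expected)                            # 0.6
```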

Second, about e-trust:

  • E-trust occurs through Ecommunication, where physical contact is not required between A and B, and where there may or may not be social norms.
  • The maxim that “trust needs touch” does not hold; physical presence is not a requirement for e-trust.
  • Referential trust (trust based on the recommendations of others) is often important in e-trust. (A minimal sketch follows this list.)
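
As a minimal sketch of referential trust, consider an agent A with no direct experience of B that pools recommendations about B, weighting each by how much A trusts the recommender. The weighted-average scheme below is our illustrative assumption, not a mechanism from the paper or from Taddeo (2009).

```python
# Hypothetical sketch of referential trust: A estimates B's
# trustworthiness from recommendations, each weighted by A's trust in
# the recommender. The weighting scheme is an illustrative assumption.

def referential_trust(recommendations):
    """Estimate trust in B from (trust_in_recommender, rating_of_B) pairs.

    Each rating is a recommender's estimate, in [0, 1], that B will
    behave as expected; each weight is A's trust in that recommender.
    """
    total_weight = sum(weight for weight, _ in recommendations)
    if total_weight == 0:
        return 0.0  # no trusted recommenders, so no basis for e-trust
    return sum(w * rating for w, rating in recommendations) / total_weight

# Two recommenders A trusts highly rate B well; one A barely trusts
# rates B poorly. The pooled estimate still favors trusting B.
print(referential_trust([(0.9, 0.8), (0.8, 0.9), (0.1, 0.2)]))  # ~0.81
```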

The Model: Nine Classes for Trust and E-trust

In the remainder of the paper we define a naming convention for nine distinct classes of trust based on the two aspects introduced above. The naming convention is based on the superclass/subclass idea common in object-oriented programming. We overlay the working definitions above onto the model and then show how a particular, contextualized instance of trust can be described. The paper will discuss the importance of the socio-technical context and the instantiation of the superclass and subclasses. It will conclude with a demonstration of how this model can be used to improve discussions of e-trust.
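
As one hypothetical rendering of such a hierarchy, the sketch below (in Python) assumes a Trust superclass plus eight subclasses generated by combining the communication mode (E or P) with the human (H) or artificial (AA) status of trustor and trustee, for nine classes in all. The class names and attributes here are our illustrative assumptions; the paper develops its own naming convention.

```python
# A hypothetical sketch of the nine-class model: one Trust superclass
# and eight subclasses from the 2 x 2 x 2 combination of communication
# mode (E/P) and the human (H) or artificial (AA) status of trustor and
# trustee. Names are illustrative, not the paper's own convention.

class Trust:
    """Superclass: trustor A delegates to trustee B in some context."""

    def __init__(self, trustor, trustee, context):
        self.trustor = trustor   # the agent doing the trusting (A)
        self.trustee = trustee   # the agent being trusted (B)
        self.context = context   # the socio-technical setting

# Ecommunication subclasses: no physical proximity required.
class E_H_H_Trust(Trust):   pass  # human trusts human electronically
class E_H_AA_Trust(Trust):  pass  # human trusts an AA electronically
class E_AA_H_Trust(Trust):  pass  # an AA trusts a human electronically
class E_AA_AA_Trust(Trust): pass  # an AA trusts an AA electronically

# Pcommunication subclasses: physical proximity required.
class P_H_H_Trust(Trust):   pass  # face-to-face human trust
class P_H_AA_Trust(Trust):  pass  # human trusts a co-present AA (a robot, say)
class P_AA_H_Trust(Trust):  pass  # a co-present AA trusts a human
class P_AA_AA_Trust(Trust): pass  # two co-present AAs

# A contextualized instance: a human trusting a web service's AA.
instance = E_H_AA_Trust(trustor="human customer",
                        trustee="recommender AA",
                        context="online shopping")
print(type(instance).__name__)  # E_H_AA_Trust
```

Because each instance carries its trustor, trustee, and socio-technical context, particular trust scenarios can be named, described, and compared precisely rather than argued about under the single ambiguous label “trust.”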

REFERENCES

Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction, 5(2), April–June 2009.