Harnessing Computer Ethics In Establishing Information Security

AUTHOR
Zuraini Ismail, Maslin Masrom and Fiza Abdul Rahim

ABSTRACT

Introduction

The expansion and evolution of computer usage has offered significant business opportunities to organizations; it provides us with many capabilities, and these in turn give us new choices for action. The business impact of misuse arising from unethical behaviour should not be underestimated.

The widespread use of IT raises its own ethical problems. Issues such as privacy protection, information violations, misuse of peer file sharing and accessing inappropriate websites indicate what is at stake. Users are often unaware of the damage that results from their actions when they use computers unethically. Privacy, for example, is now recognized by many ethicists as requiring more attention than it previously received in moral theory because of the use of IT (Brey, 2000). These issues give rise to a new field of ethics called IT Ethics or Computer Ethics, which may have a similar status to other fields of applied ethics, such as medical ethics and business ethics.

According to the Gartner Group Report of 2008, 75% of IT security incidents are caused from within the company by insiders, not by hackers. Statistics from the Malaysia KPMG Fraud Survey Report (2004) likewise indicate that 87% of fraud is perpetrated internally, i.e. 18% by management employees and 69% by other employees. Computer crimes, such as embezzlement or the planting of logic bombs, are normally committed by trusted personnel who have permission to use the computer system. Computer security therefore requires organizations to be concerned with the actions of trusted computer users.

In this paper we investigate which components of computer ethics positively influence the information security of ICT users in the manufacturing and services sectors. In addressing this issue, the paper is organized into five sections. This section introduces the importance of computer ethics in harnessing information security. Section two presents the aspects of computer ethics and information security. Section three leads to the development of the conceptual framework. Section four presents the evaluation, and section five draws the conclusion of the study and suggests future work.

Computer Ethics and Information Security

Computer ethics has long been involved in analyzing the computer’s role in our ethical and belief systems, as well as monitoring the rapidly changing landscape of computing technology (Sullins, 2005). It is about people and their relations, with a focus on right and wrong, together with additional issues enabled by IT involving the misuse of information. Collectively, ethical behaviors, perceptions, and practices are frequently viewed as the organization’s ethical work climate (Victor & Cullen, 1987), which can influence individual ethical decision-making (Wyld & Jones, 1997). Computer ethics aims to help formulate guidelines to direct action in the development, management and use of IT. Bynum (1998) believes further that computer ethics is rapidly evolving into global information ethics, that it is driven by the World Wide Web (WWW), and that it includes topics such as global laws, global cyber-business and global education.

Information security directs and supports the organization and affiliated organizations in the protection of their information assets from intentional and unintentional disclosure, modification, destruction or denial, through the implementation of appropriate information security and business resumption planning policies, procedures and guidelines (Peltier, 2005). There are three main components in information security: confidentiality, integrity and availability. Confidentiality is necessary to conceal important information that is stored or transmitted in online and offline environments from unauthorized or unidentified parties. Integrity is required to protect information content transmitted via the network from being illegally created, modified or deleted. Availability refers to the accessibility of information resources; the impact of its loss can be severe, depending on how reliant the organization has become on a functioning computer and communications infrastructure (Jong, 2007). The combination of these three components may provide a more secure environment for information.

A security breach occurs where a stated organizational policy or legal requirement regarding information security has been contravened. In a study conducted by Symantec Corp. for its Internet Security Threat Report, Malaysia was ranked eighth among the ten most infected countries in the Asia-Pacific region as a target for cyber attackers (Sani, 2006). This shows that information assets and infrastructure have become more vulnerable to cyber attacks.

Hence, it may be more important to address the issue of computer use and security as an attitude rather than a technology (Masrom and Ismail, 2008). The technology may vary between companies and vendors, but the attitudinal parameters can remain constant (Oblinger, 2003). If individuals, through awareness and knowledge, develop an ethical, moral attitude toward computer use and security, the transitions into the future will be much smoother. Computer use and security depend on shared responsibility for ethics and integrity in the workplace in securing the organization’s information.

Conceptual Framework

The proposed framework comprises the Code of Ethics, Ethics Awareness and the Law, which make up the antecedents that influence information security. This study further establishes the relationship between computer ethics and information security.

REFERENCES

Brey, P. (2000). Method in Computer Ethics: Toward a Multi-Level Interdisciplinary Approach. Ethics and Information Technology, 2(2):125-129.

Bynum, T.W. and Moor, J.H. eds. (1998). The Digital Phoenix: How Computers are Changing Philosophy. Oxford: Blackwell.

Jong, W.S. (2007). Information Security Component Framework and Interfaces for Implementation of SSL, IJCSNS International Journal of Computer Science and Network Security, 7(10).

Malaysia KPMG Fraud Survey Report. (2004). Nature of malware changes in 2001/2002.

Masrom, M. and Ismail, Z. (2008). Computer Security and Computer Ethics Awareness: A Component of Management Information Systems. IEEE ITSim08. ISBN 978-1-4244-2327-9.

Oblinger, D. (2003). Computer and Network Security and Higher Education’s Core Values, EDUCAUSE Center for Applied Research, Research Bulletin, Vol. 2003, Issue 6, 1-11.

Peltier, R. (2005). Information Security Risk Analysis, CRC Press.

Sani, R. (2006). Cybercrime Gains Momentum. April 3, New Straits Times.

Sullins, J. (2005). Ethics and artificial life: From modeling to moral agents, Ethics and Information Technology, 7:139-148.

Victor, B. and Cullen, J.B. (1987). Theory and Measure of Ethical Climate in Organizations. Research in Corporate Social Performance and Policy, 9:51-71.

Wyld, D.C. and Jones, C.A. (1997). The importance of context: The ethical work climate constructs and models of ethical decision-making – An agenda for research. Journal of Business Ethics, 16:465-472.

Yassin, Y. and Yunos, Z. (2006). Ethics in Info Security. NST Tech & U. November 20, pp. 1-4.

Metaphors in Orbit: Revolution, Logical Malleability, Generativity and the Future of the Internet

AUTHOR
David Sanford Horner

ABSTRACT

The argument of this paper is that we in the Computer Ethics community have perhaps been held captive for too long by the rhetoric of revolutionary technological change. It may be worthwhile re-examining this canonical assumption that ethical concerns are necessarily about radical novelty, especially given that the theme of this conference is ‘the backwards, forwards and sideways changes of ICT’. The revolutionary view is, of course, metaphorical, and metaphors are notorious for their properties of bewitchment. Hannah Arendt reminds us that the original meaning of the metaphor of revolution was ‘return’, a backward revolving motion, suggesting the lawfulness of the rotating, cyclic movement of astronomical bodies. The new metaphor denoting novelty, beginning and violence can be dated to the time of the French Revolution (Arendt, 1963, p.41). The danger of revolutionary rhetoric is that it suggests a kind of irresistibility, a quasi-Marxist view, in which the force of computing transforms society, so we have variously ‘the cybernetic’, ‘the computer’, ‘the information’, ‘the virtual’, ‘the digital’, and so on, revolution. These revolutions are then meant to usher in their corresponding societies: the cybernetic, the computer, the information, the virtual, the digital etc. (Winner, 1986; Graham, 1999).

In Jim Moor’s now widely accepted standard account of Computer Ethics, as an independent field of theoretical and practical endeavours, the stress is precisely on the need to address the policy vacuums and conceptual muddles thrown up by the radical novelty of revolutionary advances in computing. Moor summarises the argument in this way: “The revolutionary feature of computers is their logical malleability. Logical malleability assures the enormous application of computer technology. This will (sic) bring about the computer revolution. During the Computer Revolution many of our human activities and social institutions will be transformed. These transformations will leave us with policy and conceptual vacuums about how to use computer technology. Such policy and conceptual vacuums are the marks of basic problems within computer ethics. Therefore computer ethics is a field of substantial practical importance.” (Moor, 1985 p.272) A further part of this standard account is that we must not only fill the policy vacuums retrospectively but attempt to anticipate the future direction of technological travel in order to produce a prospective ethical assessment of likely policy vacuums.

I wish to differentiate this revolutionary or ‘innovation-centric’ picture from, what seems to me, a broader and more cogent picture, a ‘technology-in-use’ view. David Edgerton in The Shock of the Old: Technology and Global History since 1900 (2006) argues that this innovation-centric picture generally tends to ignore those technologies which are mature and currently in use – their histories and continuing significance. Why should we assume these no longer present ethical problems? Social values change; what once seemed uncontentious may now be contentious and vice versa: “Time was always jumbled up, in the pre-modern era, the post-modern era and the modern era. We worked with old and new things, with hammers and electric drills. In use-centred history technologies do not only appear, they also reappear, and mix and match across the centuries. Since the late 1960s many more bicycles were produced globally each year than cars. The guillotine made a gruesome return in the 1940s. Cable TV declined in the 1950s to reappear in the 1980s” (Edgerton, 2006, xii). In its picture of revolutionary irresistibility, the innovation-centric view tends to ignore the innovations that failed; it tends to ignore technologies that developed only slowly rather than exponentially; and it ignores counterfactuals, that is, how, for example, information and communication technologies might have developed differently given different policies, regulatory regimes and social values. Old technologies re-emerge whilst we find emergent limits to new technologies.

I will aim in the paper to illustrate this argument by an analysis of Jonathan Zittrain’s recent account of the growth and possible future of the Internet, The Future of the Internet: And How to Stop It (2009). Zittrain’s approach has much in common with that of Jim Moor. He refers, for example, to ‘the modern information revolution’ and considers both the PC and the Internet as revolutionary. If for Jim Moor the revolutionary attribute of computing is ‘logical malleability’, then for Jonathan Zittrain the concept of ‘generativity’ is the revolutionary attribute of PCs and the Internet. Zittrain writes that: “Today the same qualities that led to their successes are causing the Internet and the PC to falter. As ubiquitous as Internet technologies are today, the pieces are in place for a wholesale shift away from the original chaotic design that has given rise to the modern information revolution. This counterrevolution would push mainstream users away from a generative Internet that fosters innovation and disruption, to an appliancized network that incorporates some of the most powerful features of today’s Internet while greatly limiting its innovative capacity – and for better or worse, heightening its regulability.” (Zittrain, 2009, p.8)

What I will argue is that Zittrain’s account demonstrates that the innovation-centric approach, with its emphasis on irresistibility and novelty, misplaces the dynamic from the social to the technological. There was nothing pre-destined about the way the Internet emerged and continues to develop. It was the way in which certain developers and designers chose to instantiate logical malleability to produce generativity which is the key to its current characteristics. The analysis of the history of both the PC and the Internet shows that the emergence of generativity stems ultimately from the nature of the social groups primarily involved in their development (academic researchers, hobbyists, etc.), the values they held and the choices that they made. What I propose is that Zittrain’s history shows that trust and openness were as important, if not more important, than any particular technical attributes. There is a clear link between design choices and the ethos of the Internet, and this puts ideas of ‘the good’ at the centre of the discussion. At the same time the possibility of an appliancized network represents the re-emergence, if not of old technology, then of new technologies embedded in old business models and old methods of control.

REFERENCES

Arendt, H., 1963. On Revolution. London: Faber and Faber.

Edgerton, D., 2006. The shock of the old: technology and global history since 1900. London: Profile Books.

Graham, G., 1999. The Internet: a philosophical inquiry. London: Routledge.

Moor, J., 1985. What is computer ethics? Metaphilosophy, 16 (4) October, pp.266 – 275.

Winner, L., 1986. Myth information: romantic politics in the computer revolution. In: C. Mitcham and A. Hunning, eds. Philosophy and Technology II. Dordrecht: D.Reidel, pp.269-289.

Zittrain, J., 2009. The Future of the Internet: And how to stop it. London: Penguin.

A Deontological Two-Pronged Moral Justification for Legal Protection of Intellectual Property

AUTHOR
K.E. Himma

ABSTRACT

Whether or not intellectual property rights ought, as a matter of political morality, to be protected by the law, I argue, depends on what kinds of interests the various parties have in intellectual content. Although theorists disagree on the limits of morally legitimate lawmaking authority, this much seems obvious: the coercive power of the law should be employed only to protect interests that rise to a certain level of moral importance. We have such a significant interest in not being lied to, for example, that ordinary unilateral lies are morally wrong, but the wrongness of lying does not rise to the level of something the state should protect against by coercive criminal prohibition.

I begin this essay by distinguishing two ethical issues regarding IP not usually distinguished in the literature. The first is whether authors have a morally significant interest (i.e., one that receives some protection from morality) in controlling the disposition of the contents of their creations, which would include some (possibly limited) authority to exclude others from appropriating those contents subject to payment of an agreed-upon fee; this interest might, or might not, rise to the level of a moral right. The second is whether it is morally permissible, as a matter of political morality, for the state to use its coercive power to protect any such interests authors might have in the contents of their creations. Such protection might, or might not, constitute a legal right, as there are other legal mechanisms for protecting peoples’ interests.

These are logically distinct issues. The first concerns moral standards that apply to the acts of individuals, while the second concerns moral standards that apply to the acts of the state. Not every morally protected interest an individual has is legitimately protected by the state. For example, I have a morally protected interest in not being told lies, but it would not be legitimate for the state to create a criminal or civil cause of action that makes a person liable for every lie she tells. Conversely, not every morally legitimate law protects some interest antecedently protected by morality. Apart from the existence of a law requiring people to drive, say, on the left-hand side of the road, no one has a morally protected expectation that people drive on the left-hand side of the road. Such an interest arises only after the enactment of a law requiring as much – and it arises because that law has been enacted. What individuals morally ought to do and what the law morally ought to do are issues that fall into two different areas of normative ethical theorizing, because the law regulates behavior by coercively restricting freedom and hence impinges on our moral right to autonomy.

Of course, the two issues are sometimes connected. Surely, part of what justifies the state in coercively criminalizing murder is the moral quality of murder: it is one of the worst moral wrongs, if not the worst (I am not sure, for example, whether torture is worse), one can commit because it violates one of the most important moral rights – the moral right to life. It would be morally problematic to criminalize a behavior and punish it with incarceration or death unless it involves a pretty grievous moral transgression.

I argue that it is also reasonable to think that whether legal protection of intellectual property is justified as a matter of political morality turns, at least in part, on the moral importance of the interests of the various concerned persons in intellectual content. If content-creators have no morally significant interest in the content they create and other persons have an urgent need for unrestricted access to content, then it seems reasonable to think that it would be wrong for the state to enact restrictions on access to content of a sort that constitutes protection of intellectual property.

In this essay, I next address the substantive issue of whether the state may legitimately recognize and protect IP rights (which, again, need not mirror the content of existing IP law in the western world) because this is, as far as I can tell, the issue about which theorists and laypersons are most concerned. In doing so, I assess the weight of the interests that content-creators have in their creations against the interests of third parties, and attempt to assess the relative importance of each. In the process, I defend this methodology on both intuitive and theoretical grounds, giving famous examples of influential philosophical theories that more or less explicitly justify substantive moral claims on the strength of the interest-balancing methodology I articulate here. Additionally, I explicitly address both the issue of individual morality and the issue of political morality and take care to ensure that the reader is aware at all times which issue is being addressed.

On the basis of this methodology, I give a detailed assessment of all the relevant interests, specifying whether they fall under the category of needed for survival, needed for human flourishing, or merely wanted for amusement. I argue that the interests content-creators have in the content they create (or discover) (1) outweigh the interests of other persons in all cases not involving content necessary for human beings to survive, thrive or flourish in morally significant ways, and (2) are sufficiently important that they deserve some legal protection. I also argue that (3) ordinary considerations of justice support the idea that content-creators have a morally protected interest in the value they introduce into the world through their intellectual creations. While (1), (2), and (3) do not obviously imply the existence of moral rights to intellectual property, they surely present a prima-facie justification for using the coercive power of the law to protect the interests of content-creators in the contents of their creations. And one eminently sensible way of protecting their interests is for the law to allow them limited control over the disposition of their creations. How much control they should be allowed is a further issue I do not address here.

Alienation and ICT: How Useful is the Classical Concept of Alienation in Analyzing Problems of ICT

AUTHOR
Mike Healy and N. Ben Fairweather

ABSTRACT

This paper examines the value of using the concept of alienation in studying the ethical and societal implications of information communications technology (ICT). Recent contributions include topics such as work alienation among women IT workers (Adya, 2008), business investment decisions (Abdulla and Kozar, 2007), urban alienation (Foth, 2005), international e-commerce (Sinkovics et al, 2007), the impact of technology on job structure and redundancy (Vickers and Parris, 2007), education (Akudolu, 2006; Moule, 2003; Rovai and Wighting, 2005), the alleviation of poverty (Slater and Tacchi, 2004) and business ethics (Smith et al, 2004). However, in much of the literature, alienation is not fully described and seems to serve as shorthand for some vague form of undefined dissatisfaction. This paper seeks to address this weakness by reviewing a number of key texts associated with alienation, including Marx, Seeman and Mann (Marx, 1970; Seeman, 1959, 1983; Mann, 2001).

The classic Marxist theory of alienation, an attempt to define and reveal man’s relationship to the wider social order, is outlined; it covers four distinct expressions of alienation: estrangement from the products of our labour; alienation from ourselves; alienation from our species-being; and alienation from others. The work of Seeman, with its focus on powerlessness, meaninglessness, anomie, isolation and self-estrangement, is discussed. More recently, Mann has sought to use the theory of alienation to develop an explanation of, and a possible solution to, the lack of active engagement by learners in higher education. Her work is also covered here, in particular her description of the conditions that can create states of alienation: the influence of external forces, notably the drive for utility in learning; the existence of previously determined, entrenched roles; the denial or repression of student creativity; and the loss of ownership over the learning process.

After considering the theoretical underpinnings of alienation, the paper develops the argument by examining how the concept can be applied to the use of ICT in a range of scenarios, such as ICT training, ICT and education, and ICT in work and ethics. This section of the paper proceeds with a discussion of the work of Phelps et al (2005), whose research, based on complexity theory, emphasizes the need to foreground the requirements of end-users, allowing them to feel they have the power to determine the pace, direction, purpose and product of their learning activity. Such an approach, Phelps et al argue, will facilitate and encourage the self-directed learning that is much needed in an ICT-based society. The discussion also embraces an interesting study of the use of weblogs in a web-based distance learning environment, which sought to examine the impact of blogs on student feelings of isolation, alienation and frustration (Dickey, 2004). Reference is also made to a recurring theme in much of the literature concerned with the non-take-up of ICT: the need for end-users to be more intimately involved in the initial planning and implementation of ICT systems to alleviate feelings of alienation. The example provided within the paper concerns the use of ICT in urban planning and the lack of involvement of planners in the development of large-scale ICT projects directed at the creation of so-called digital cities (Aurigi, 2006).

Alienation and work also form part of the discussion, since work appears as a prominent theme in the literature. Kohn (1976) examined the relationship between occupational structure and alienation, a theme echoed more recently by DeHart-Davis and Pandey (2005) in their research into rules, regulations, procedures and public employees. Ferguson and Lavalette (2004) have also used theories of alienation to argue for a refocusing of what they term “emancipatory social work” (page 297). Banai and Reisel (2007) have employed concepts of alienation to look at supportive leadership and job characteristics, and DiPietro and Pizam (2008) examined causes of alienation amongst American fast food workers. The paper refers to recent research on the use of ICT in financial services and discusses alienation issues arising from the impact of technology in the current economic crisis. Work of this nature provides a rich source of information and perspectives when considering the use of ICT at work (Wolf, 2007; Jennings, 2009; Healy and Fairweather, 2009).

This section of the paper concludes by looking at the relationship between ICT, ethics and alienation. Here the paper notes that while in other fields of study, such as medicine, management theory, education and consumer research, there is a body of knowledge that seeks to combine ethical concepts and theories of alienation, this has been notably missing within the field of ICT ethics. There are exceptions, such as research showing how the transformation of personal data by information systems, subsequently re-presented to an external audience, can be described as a process that creates alienation (Floridi, 1999). However, the paper concludes that work such as that of Floridi is very much the exception. A further conclusion is that the concept of alienation, rather than being merely a shorthand term for general dissatisfaction, offers a robust tool for examining how non-technical factors adversely impact the use of ICT.

Social Networking and the Perception of Privacy Within the Millennial Generation

AUTHOR
Andra Gumbus,Frances S. Grodzinsky and Stephen Lilley

ABSTRACT

Introduction

Has technology caused a generational divide between current college-age users, who have no problem posting intimate details of their personal lives on the Web, and more traditional older users? When Scott McNealy, chief executive officer of Sun Microsystems, pronounced that “You have zero privacy anyway. Get over it” (Sprenger, 1999), he was speaking to middle-aged journalists. Supposedly there is no need to tell this to the younger generation. Many adults are shocked by what they see on Facebook and MySpace and believe that most teenagers don’t take the risks seriously. In an article written for the New York Times, “When Information Becomes T.M.I.”, Warren St. John writes: “Through MySpace, personal blogs, YouTube and the like, this generation has seemed to view the notion of personal privacy as a quaint anachronism. Details that those of less enlightened generations might have viewed as embarrassing — who you slept with last night, how many drinks you had before getting sick in your friend’s car, the petty reason you had dropped a friend or been fired from a job — are instead signature elements of one’s personal brand. To reveal, it has seemed, is to be” (St. John, 2006).

Gross and Acquisti (2005) found that users may be at risk both online and offline when using social networking sites. Most users, irrespective of age, have little knowledge of where their information goes, nor do they consider the consequences of clicking without thinking. They make the mistake of assuming that privacy settings guarantee the confidentiality of posted information. But when applications are used, they are downloaded, allowing the developer access to the user’s information. Issues of data ownership surfaced in February 2009, when Facebook asserted rights over members’ content even after an account was terminated (Associated Press, 2009). Although Facebook backed down in the face of online protests and bad publicity, the episode demonstrated its potential reach.

Research Questions and Methodologies

Given the important issues at stake, the authors decided to conduct an exploratory study of this generation’s views on privacy and their use of social networking. Is it true that young men and women don’t care about privacy? Do they have little regard for controlling personal information? Are frequent users of social networking sites naïve and acquiescent? To address such questions regarding attitudes toward privacy and knowledge about security of data on social networking sites, we conducted a survey of 251 college students and follow-up focus groups with 16 of those students. We compare younger and older respondents in the Millennial generation, and light and heavy users of social networking sites, on their survey responses to privacy issues. Focus group participants, aged 18-22, were asked both written and open-ended verbal questions regarding their use of social networking sites. Written questions focused on frequency and duration of sites visited, number of various categories of friends, and features of Facebook, such as virtual gifts, news feed, social ads, status update, wall, Facebook Connect, random 25, and beacon. To assess the respondents’ awareness about control and ownership of content, we questioned them about Facebook’s terms of service and business practices. The legitimate and illegitimate uses of social networking in both work and university contexts were explored, as well as the harm that can be caused by overuse.

Facebook Nation: Our Findings

From our survey, we found that a majority of students frequently access social networking sites, although non-traditional age students (over 21) are much less likely to do so. Heavy users, as compared to light users, were significantly more likely to take a cavalier attitude toward computer use. However, heavy and light users were not significantly different in their attitudes to privacy and internet monitoring. The majority favored privacy protections, especially when it comes to sites and communication media (e.g., email) deemed part of their personal realm.

Focus groups revealed that hours per day on Facebook vary between one and five. Less use of MySpace, LinkedIn, AIM and blogs was reported. All who use Facebook expressed fears regarding invasion of privacy; however, many were ignorant of the specific ways that their content and activities on Facebook could be exploited. They want control over who sees their site and what is on it, and all were worried about employer searches. A few seniors (final-year students) described taking steps to “clean up” their sites, as well as dramatically reducing the number of friends, as they approached graduation.

In conclusion, the respondents in this study value privacy and do recognize, in general, the risks of using social networking sites. Some practice caution and avoid the sites or limit their exposure. Still, we find that those who have become dependent on the sites appear to be more passive. One respondent stated that “Facebook is a great network to get in contact with friends… [but] many of its features invade people’s privacy.” Some, but not all, are willing to accept this trade-off.

REFERENCES

Associated Press (2009). “Facebook Tries Its Hand at Democracy”, February 27, 2009. http://www.msnbc.msn.com/id/29414725/from/ET/ Accessed March 2, 2009.

Gates, Anita “For Baby Boomers, the Joys of Facebook” New York Times, March 22, 2009 CT7

Gross, Ralph and Acquisti, Allesandro “Information Revelation and Privacy in Online Social Networks” in Proceedings of the 2005 Workshop on privacy in the Electronic Society ( WPES), ACM, 71 – 80, 2005.

Sprenger, P. (1999) “Sun on Privacy: ‘Get Over It”, Wired. http://www.wired.com/politics/law/news/1999/01/17538, Accessed July 2, 2009.

St. John, Warren (2006) “When Information is T.M.I.” New York Times, September 10,2006,

http://www.nytimes.com/2006/09/10/fashion/10FACE.html?_r=1&scp=1&sq=%22When%20 Information%20Becomes%20T.M.I.%22%20%20sept%202006&st=cse/ Accessed February 21, 2009.

Toward a Model of Trust and E-Trust Processes Using Object-Oriented Methodologies

AUTHOR
FS Grodzinsky, K Miller and MJ Wolf

ABSTRACT

Introduction:

This paper explores different phenomena that have been discussed in the literature as “trust,” “e-trust” and “reliance”. The ambiguous nomenclature confuses the discussion and is detrimental to dialogue about the important issues these terms involve. Our goal is modest: to devise a model that helps us describe more precisely, and analyze more carefully, scenarios involving trust. We will isolate two aspects of trust or, as we prefer, processes related to trust, arrive at a working definition of trust, and then present an object-oriented model that delineates trust and e-trust in a superclass/subclass model.

Two Aspects of Trust:

To focus our exploration of trust processes, we first isolate two aspects of any such process: a means of communication and the identity of the agents involved.

1. Means of communication: We will focus on two general modes of communication involved in trust processes. We do not claim that these two are the only possible forms of communication, or that our classification is the only way to characterize these kinds of communication. But these two “modes” are convenient for our discussion of trust and e-trust.

We define the transfer of information between A and B as a “communication.” That is, we will be careful not to argue about the effect of the arrival of information on an agent. Such arguments might be important, but we will simplify our discussion by not debating about how humans and artificial agents “communicate” beyond noting the means of conveying information between A and B.

Communication mode #1: Communications mediated by telecommunications and computing. These would include at least telephone communications, email, instant messaging, Skype communications, blogging, electronic bulletin boards, and so forth. We will call these Ecommunication.

Communication mode #2: Communications that require physical proximity, including talking, touching, and sign language. We will collect all these kinds of communication under the label Pcommunication.

2. Human or Artificial Agents? The two entities A and B could be human or artificial. We will call silicon-based, computer-controlled interactive entities AAs, for “artificial agents.” For the first part of our discussion we will limit ourselves to single agents A and B. Later we will generalize to allow either A or B to be a group of entities.

A Working Definition of Trust:

Taddeo (2009) analyzes different definitions of trust and e-trust and discusses definitional problems that remain. Despite the remaining questions that Taddeo identifies, we require at least an outline of trust and e-trust to advise software developers involved in AA projects about issues of trust. Following Taddeo’s analysis, we will assert the following principles about trust and e-trust. First, about trust:

  • Trust is a relation between A (the trustor) and B (the trustee). A and B can be human or artificial.
  • Trust is a decision by A to delegate to B some aspect of importance to A in achieving a goal. We assume that an artificial agent A can include “decisions” (implemented by, for example, IF/THEN/ELSE statements). These decisions involve some computation about the probability that B will behave as expected.
  • Trust involves risk; the less information A has about B, the higher the risk and the more trust is required. This is true for both artificial and human agents. In AAs, we expect that risk and trust are quantified or at least categorized explicitly; in humans, we do not expect that this proportionality is measured with mathematical precision.
  • A has the expectation of gain by trusting B. In AAs, “expectation of gain” may refer to the expectation of the AA’s designer, or it may refer to an explicit expression in the source code that identifies this expected gain, or both.
  • B may or may not be aware that A trusts B. If B is human, circumstances may have prevented B from knowing that A trusts B. The same is true if B is an AA, but there is also some possibility that an AA trustee B may not even be capable of “knowing” in the traditional human sense.
  • Positive outcomes when A trusts B encourage A to continue trusting B. If A is an AA, this cycle of trust – good outcome – more trust could be explicit in the design and implementation of the AA, or it could be implicit in data relationships, as in a neural net.

Second, about e-trust:

  • E-trust occurs through Ecommunication, where physical contact is not required between A and B, and where there may or may not be social norms.
  • “Trust needs touch” is not a requirement.
  • Referential trust (based on recommendations) is often important in e-trust.
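The principles above, as they might appear inside an artificial agent, can be sketched in code. The following is a minimal illustration, not the authors' implementation; all names (`TrustRecord`, `decide_to_delegate`, the 0.6 threshold) are assumptions introduced for the example. It makes explicit the decision as an IF/THEN/ELSE, the link between information and risk, the expectation of gain, and the trust–outcome cycle.

```python
from dataclasses import dataclass

@dataclass
class TrustRecord:
    """Statistics a trustor A keeps about a trustee B, used to
    estimate how likely B is to behave as expected."""
    interactions: int = 0
    positive_outcomes: int = 0

    def estimated_reliability(self) -> float:
        # With no information about B, assume the worst: less
        # information means higher risk, so more trust is required.
        if self.interactions == 0:
            return 0.0
        return self.positive_outcomes / self.interactions

def decide_to_delegate(record: TrustRecord, expected_gain: float,
                       threshold: float = 0.6) -> bool:
    """A's 'decision' implemented as an explicit IF/THEN/ELSE:
    delegate to B only when estimated reliability, weighted by the
    expected gain, clears a risk threshold."""
    if record.estimated_reliability() * expected_gain >= threshold:
        return True
    else:
        return False

def record_outcome(record: TrustRecord, positive: bool) -> None:
    # Positive outcomes encourage continued trust: the
    # trust -> good outcome -> more trust cycle, explicit in the data.
    record.interactions += 1
    if positive:
        record.positive_outcomes += 1
```

In a human trustor this proportionality is not measured with mathematical precision; in an AA, as here, risk and trust can be quantified explicitly.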

The Model: Nine Classes for Trust and E-trust

In the remainder of the paper we define a naming convention for several distinct types of trust based on the two aspects previously introduced. The naming convention is based on the superclass/subclass idea common in object-oriented programming. We overlay the working definitions above onto the model and then propose that a particular, contextualized instance of trust can be described. The paper will discuss the importance of the socio-technical context and the instantiation of the superclass/subclasses. It will conclude with a demonstration of how this model can be used to improve discussions of e-trust.
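The superclass/subclass idea can be illustrated with a short sketch. The class names below are illustrative guesses, not the authors' nine-class naming convention: a `Trust` superclass is specialized by communication mode (Ecommunication vs. Pcommunication), and further subclasses could fix the kinds of agents involved.

```python
class Trust:
    """Superclass: any trust relation between a trustor A and a trustee B."""
    def __init__(self, trustor: str, trustee: str):
        self.trustor = trustor
        self.trustee = trustee

    def communication_mode(self) -> str:
        # Deferred to subclasses: the mode is what distinguishes them.
        raise NotImplementedError

class ETrust(Trust):
    """Trust established via Ecommunication (computer-mediated)."""
    def communication_mode(self) -> str:
        return "Ecommunication"

class PTrust(Trust):
    """Trust established via Pcommunication (physical proximity)."""
    def communication_mode(self) -> str:
        return "Pcommunication"

class HumanTrustsAAViaE(ETrust):
    """Hypothetical further subclass: a human trustor delegating to an
    artificial agent trustee over Ecommunication."""
    pass
```

A contextualized instance of trust is then described by its position in the hierarchy: an object of `HumanTrustsAAViaE` is simultaneously an instance of `ETrust` and of `Trust`, which is what lets a discussion move between the general relation and its specific, contextual form.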

REFERENCES

Taddeo, M. (2009) “Defining trust and e-trust: from old theories to new problems.” International Journal of Technology and Human Interaction, 5(2), April-June 2009.