Trust and Clinical Information Systems

AUTHORS
Den Pain, Rania Shibl, Kay Fielden and Andy Bissett

ABSTRACT

‘Trust’ may be a more useful concept for some types of computer system than the narrower, more technical terms, such as ‘dependability’ or ‘reliability’, that software engineering tends to employ. Trust is a continuing theme within ETHICOMP, and both its use (Raab, 1998) and its misuse (de Laat, 2004) have been discussed. The term has the advantage of capturing an implicit ethical dimension. When investigating clinical decision support systems (CDSS) in New Zealand, we found that several different stakeholders used the term unprompted when discussing their relationships with other stakeholders.

There is a plethora of trust definitions to choose from (Corritore et al., 2001). Nearly all definitions of trust share the condition that one party (the truster) must willingly place himself or herself in a position of vulnerability to or risk from another party (the trustee) (Gallivan, 2001). Karahannas and Jones (1999: 347) note that trust is ‘closely related to risk, since without vulnerability … there is no need for trust’.

CDSS incorporate a knowledge base, effectively a database of clinical information. They also use patient-specific data, ranging from minimal to quite detailed. The CDSS combines these two components to provide the doctor with patient-specific advice; the degree of support given to the clinician varies from system to system.
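To make the two components concrete, the following toy sketch in Python pairs a rule-based knowledge base with patient-specific data to produce advice. Every name and rule in it is a hypothetical assumption, purely for illustration; no real CDSS is this simple.

    from dataclasses import dataclass

    @dataclass
    class Patient:
        # Patient-specific data; a real record would be far richer.
        age: int
        current_medications: list[str]

    # A toy 'knowledge base': each rule maps a condition on the patient
    # data to a piece of advice. Both rules are invented for illustration.
    KNOWLEDGE_BASE = [
        (lambda p: "warfarin" in p.current_medications
                   and "aspirin" in p.current_medications,
         "Possible interaction: warfarin with aspirin; review the prescription."),
        (lambda p: p.age >= 65,
         "Patient is 65 or older; consider a renal function check before dosing."),
    ]

    def advise(patient: Patient) -> list[str]:
        # Combine the knowledge base with patient-specific data to yield
        # patient-specific advice, as described above.
        return [advice for rule, advice in KNOWLEDGE_BASE if rule(patient)]

    print(advise(Patient(age=70, current_medications=["warfarin", "aspirin"])))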

At the highest level, trust in a CDSS could be the general practitioner’s (GP’s) trust in the CDSS and, in turn, the patient’s trust that GP and CDSS together will work successfully. From an information systems perspective, however, it is clear that a number of other systems are involved, each of which has to work effectively and be trusted by the other parties for this top level of trust to rest on a sure footing. In particular, consider a CDSS in which the data are maintained by one company (often a publishing company) and presented through application software developed by another, while all of the associated hardware and software are managed by the GP clinics themselves. In New Zealand the majority of GPs are organised in groups called Independent Practitioner Associations (IPAs). IPAs are now part of ‘Primary Health Organisations’, which provide the first line of health care for a particular area. IPAs also provide a range of support services to their GP members, including information technology support – and therefore CDSS operation.

Other systems are naturally involved as well, such as the distribution of data updates from the original publishers through to the GP’s desktop. Thus a chain, or network, of trust exists, and the need for such a network has been recognised in the literature; see Muir (1994), for example.
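One simple way to see why each link matters is to treat overall trust as bounded by the weakest link in the chain. The sketch below does exactly that; the parties and the numeric scores are hypothetical assumptions for illustration, since trust is not really reducible to a single number.

    # Hypothetical trust scores (0..1) for each link in the chain described
    # above, from the data publisher through to the patient.
    trust_links = {
        ("publisher", "data maintainer"): 0.9,
        ("data maintainer", "application vendor"): 0.8,
        ("application vendor", "IPA / clinic IT"): 0.7,
        ("IPA / clinic IT", "GP"): 0.95,
        ("GP", "patient"): 0.85,
    }

    # The top-level trust (the patient's trust in the GP-plus-CDSS
    # combination) can be no stronger than the least-trusted link.
    weakest = min(trust_links, key=trust_links.get)
    print(f"Weakest link: {weakest[0]} -> {weakest[1]}"
          f" (score {trust_links[weakest]})")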

Several factors underlying trust in automation have been identified, including predictability, reliability and dependability. Rempel, Holmes and Zanna (1985) concluded that trust progresses over time through three stages: from predictability to dependability to faith. Muir and Moray (1996) extended these factors into a trust model with six components: predictability, dependability, faith, competence, responsibility and reliability. Muir (1987) argues that trust is a critical factor in the design of computer systems, since trust can determine the use or non-use of computers; later findings by de Vries, Midden and Bouwhuis (2003) confirm this tendency.
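For readers who think in code, the six components of Muir and Moray’s model can be written down as a simple checklist. Muir and Moray (1996) give no numeric formula, so the equal weighting and averaging below are purely illustrative assumptions, not part of their model.

    COMPONENTS = ["predictability", "dependability", "faith",
                  "competence", "responsibility", "reliability"]

    def trust_profile(ratings: dict[str, float]) -> float:
        # Average the six component ratings (each assumed to lie in 0..1).
        # Equal weighting is an assumption made here for illustration.
        missing = set(COMPONENTS) - ratings.keys()
        if missing:
            raise ValueError(f"unrated components: {sorted(missing)}")
        return sum(ratings[c] for c in COMPONENTS) / len(COMPONENTS)

    print(trust_profile({c: 0.8 for c in COMPONENTS}))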

We found that different strategies were adopted to deal with the problems of information quality that a lack of trust in certain areas revealed. Typically these were: creating a new role or job; providing a technical solution; providing an organisational solution; relying on contractual responsibility; and using an agent to manage issues of trust.

We conclude by considering what lessons may be generalised, both about the nature of trust and about what this might mean for other computer systems and, beyond them, for the wider business world.

REFERENCES

Corritore, C., Kracher, B. and Wiedenbeck, S. (2001) Trust in the online environment, in Smith, M., Salvendy, G., Harris, D. and Koubek, R. (eds), Usability evaluation and interface design: cognitive engineering, intelligent agents, and virtual reality. Mahwah, NJ: Lawrence Erlbaum. 1548-1552.

de Laat, P. (2004) Open source software: a case of swift trust? in T. Ward Bynum, N. Pouloudi, S. Rogerson, T. Spyrou (eds), Proceedings ETHICOMP 2004, Vol. 1, University of the Aegean, Syros, Greece, 14th – 16th April 2004. ISBN 960-7475-26-7. 250-265.

Gallivan, M. (2001) Striking a balance between trust and control in a virtual organisation: a content analysis of open source software case studies, Information Systems Journal 11, 277-304.

Karahannas, M. and Jones, M. (1999) Inter-organisational systems and trust in strategic alliances, Proceedings of the International Conference on Information Systems, Charlotte, NC, 346-357.

Muir, B. (1987) Trust between humans and machines, and the design of decision aids, International Journal of Man-Machine Studies, 27, 527-539.

Muir, B. M. (1994) Trust in automation: Part I: Theoretical issues in the study of trust and human intervention in automated systems, Ergonomics, 37(11), 1905-1922.

Muir, B. and Moray, N. (1996) Trust in automation: Part II: Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3), 429-460.

Raab, C. (1998) Privacy and trust: information, government and ICT, in J. van den Hoven, S. Rogerson, T. Ward Bynum, D. Gotterbarn (eds), Proceedings ETHICOMP’98, Erasmus University Rotterdam, March 1998. 565-577.

Rempel, J. K., Holmes, J. G. and Zanna, M. P. (1985) Trust in close relationships, Journal of Personality and Social Psychology, 49, 95-112.

de Vries, P., Midden, C. and Bouwhuis, D. (2003) The effects of errors on system trust, self-confidence, and the allocation of control in route planning, International Journal of Human-Computer Studies, 58, 719-735.