Computer Ethics has, as one of its aspirations, that of reducing the probability of unforeseen and undesirable effects of computer technologies (Rogerson, 2002). The social control of information and communication technology, it is argued, depends on our ability to foresee precisely such undesirable and often unintended effects. We should strive, therefore, to sharpen our forecasting tools. In a recent paper I argued a highly sceptical case: that efforts to predict accurately the future consequences of advances in computer technologies are largely futile (Horner, 2003). This claim proved controversial and was itself greeted with a high degree of scepticism. What seems to me a truism seems to others a heresy. The proponents of ‘futurism’ are reluctant to abandon a commitment to anticipating ‘the shape of things to come’ as a basis for policy formation and the social control of technology. Intuitively, perhaps, it seems perverse to deny knowledge of the future, given that we seem to operate with such knowledge in our every planned action. In this paper I wish to address some of the arguments that seem to sustain a belief in the power and usefulness of prediction, in the context of recent anxieties about ‘the coming era of nanotechnology’.
The urgency of such issues is underscored, for example, by a new round of prediction associated with advances, and anticipated advances, in nanotechnology (Margolis, 2001, pp. 117–118). We are already being asked to consider the social and economic implications of a ‘nanotechnological future’ created by computing at the quantum level. Nanotechnology promises to create one of those policy vacuums that Computer Ethics was created to address. Jim Moor and John Weckert (2003) have already alerted us to the possibility that traditional ethical problems will be amplified by the vastly extended scale of data capture this new technology may provide. In addition, completely new ethical issues may arise from predicted extensions to human longevity. Even more alarming are the scenarios of ‘nanobots’ out of control (the so-called ‘grey goo’ phenomenon). Should we then create a ‘nanoethics’ based on such predictions?
My reason for scepticism concerning such a predictive enterprise is that decisions about the future are, as Collingridge (1987) maintains, ‘decisions under ignorance’. We simply do not know what is going to happen, so whatever decisions we make cannot be made on the basis of what we know about the future. Indeed, the empirical evidence suggests that most predictions about the future turn out to be woefully inaccurate (Dublin, 1990; Margolis, 2001). Our ability to know the future, particularly when dealing with pervasive technologies that are embedded in society in complex ways, vanishes practically to zero. A number of important effects limit our knowledge of the future. Firstly, ‘information effects’: the effect of limited information and the impossibility of assembling ‘complete information’. Secondly, ‘Oedipus effects’: the effect of making a predictive statement about the very circumstances to which that statement refers (Popper, 1994). Thirdly, ‘revenge effects’: the familiar phenomenon of technologies producing effects opposite to those originally intended (Tenner, 1997).
Critics have rejected such radical scepticism on a number of grounds. Firstly, for example, in the case of Y2K could we not justly say that the very nature of the programming problem gave us a sound predictive base for anticipating the potential outcomes and, indeed, the remedy? A combination of both causal and logical necessity in this case meant that we could identify with confidence the need to install new versions, or at least fix the old versions, of software. Secondly, isn’t it simply a logical fallacy to move from the proposition that ‘we can’t know everything’ to the claim that ‘therefore we can know nothing’? Doesn’t our (scientific) knowledge of the past and present provide sufficient indications of the (probable) course of future events? And finally, to accept the full force of the sceptic’s argument is a counsel of despair. At best it will proscribe ambitious and potentially beneficial technological developments (such as nanotechnology), and at worst it will result in paralysis and a failure to address those policy vacuums that Computer Ethics is meant to address. The paper will evaluate these critical responses to the sceptic’s argument but seek to show how we can proceed ethically in ignorance of the future. In order to do this we must distinguish between three different questions: Could it happen? Should it happen? Will it happen? (Twiss, 1992, p. 25).
COLLINGRIDGE, D., (1987) Criticism: its philosophical structure. Lanham, MD: University Press of America.
DUBLIN, M., (1990) Futurehype: the tyranny of prophecy. Ontario: Penguin.
HORNER, D.S., (2003) The error of futurism: prediction and computer ethics. In: F.S. Grodzinsky, R.A. Spinello and H.T. Tavani, eds. Proceedings for CEPE 2003 and Sixth Annual Ethics and Technology Conferences. Boston College, Chestnut Hill, MA, June 25 – 27, 2003. Boston: Boston College, pp. 66 – 76.
MARGOLIS, J., (2001), A brief history of tomorrow: the future, past and present. London: Bloomsbury.
MOOR, J. and WECKERT, J., (2003) Nanoethics. Unpublished paper presented at CEPE 2003, Boston College, Chestnut Hill, MA, June 25 – 27, 2003.
ROGERSON, S., (2002) Computers and society. In: R.E. Spier, ed. Science and technology ethics. London: Routledge, pp. 159 – 179.
POPPER, K., (1994) The poverty of historicism. London: Routledge.
TENNER, E., (1997) Why things bite back: predicting the problems of progress. London: Fourth Estate.
TWISS, B.C., (1992) Forecasting for technologists and engineers: a practical guide. London: Peregrinus.