David Sanford Horner
In this paper I will present a range of ethical challenges which confront us in dealing with claims about the future impacts of radically new, potentially disruptive technologies. Jim Moor has stated that we need ‘better ethics’ to deal with the problems of emerging technologies (Moor 2005). I have argued elsewhere for a radical scepticism towards the claims of forecasters and futurologists that ‘policy vacuums’, for example in the case of nanotechnology, might be filled by anticipating them (Horner 2005a). Our knowledge and beliefs about the future are constrained, for example, by the effects of imperfect information, the Oedipus effect, discontinuity effects, and revenge effects (Horner 2004).
In spite of the immense obstacles to what we might reasonably claim to know with any certainty, nanotechnology continues to be promoted as, for example, ‘a key technology for the future of Europe’: ‘New technologies, including nanotechnology, may provide a part of the answer of how to create alternative life styles for the population that will be in harmony with the planet’ (Saxl 2005, p.6). No mean claim! Two key ethical areas are often highlighted: military and medical applications. Just over a quarter of the US nanotechnology research budget in 2005 was consumed by the Department of Defense. Other ethical issues are raised by the vision of a ‘nanomedicine’ devoted not merely to ameliorative medical treatment but to the improvement of human performance. Could nanotechnology be the gateway to a new Eugenics? The future-oriented literature on nanotechnology variously conceives it as ranging from the revolutionary to the incremental, from the scientifically possible to the impossible, from the deterministic to the indeterminate, and from the pessimistic to the optimistic regarding consequences (Horner 2005b). The outcomes are undecided, yet there is a sense of definiteness about benefits as well as about ethical concerns.
But it seems to me that there are two areas of general ethical concern which need to be taken into account in any framework of anticipatory moral assessment. Firstly, there are questions about when it is ethical to forecast or to make knowledge claims about the future, i.e. the ethics of forecasting. Certainly a knowledge claim about, say, the consequences of nanomedicine may turn out to be mistaken without being ethically improper. But a forecast may only properly be made on the basis of sufficient knowledge, experience and evidence; it must surely be improper if those conditions are lacking (Toulmin 1969, p.183). Similarly, the frailty of our grasp upon the future raises the paradox of moral luck (Williams 1981). If the success or otherwise of our (moral) decisions is contingent and unforeseeable, dependent on events beyond our control, then results may be a matter of luck. In other words, if the outcomes are beyond our knowledge and control, then we cannot be held responsible for them. Yet it is a central plank of moral theory that moral agency and judgement must be immune to luck. Finally, forecasts often serve other functions which might be described as ‘performative’ or ideological. Prophecy is often a species of ‘moral futurism’: it will be so, therefore it ought to be so (Popper 1977, p.205).
Secondly, there are questions about forecasting ethics, i.e. just how values will change in the future. We cannot simply assume that our current moral assessments will continue to obtain. The values and moral vocabulary that we do in fact have are the outcome of, and have evolved to meet, past human predicaments. The application of moral concepts and principles to new situations shaped by radically new technologies may be a matter of decision rather than of definition, and such decisions cannot be made before the event. How can we know what future generations may or may not value? It is difficult to see how we might address our moral obligations to future people if we really are unable to forecast accurately the consequences of adopting radical technologies (Parfit 1985). This value uncertainty is linked to the ways in which our moral assessments change in retrospect (Grayling 1997). Moral values change over time, and those who promote revolutionary conceptions of technological change (‘the nanotechnological revolution’) in particular suggest that values often shift quite rapidly. The emergence of new facts and events changes our valuations: past moral judgements are defeasible. Our current understanding of the potential and moral implications of ‘nanobots’ may change radically in the light of new threats or opportunities.
The argument of this paper is that any ‘better ethics’ must take into account both the ethics of forecasting and the forecasting of ethics, and must take seriously the problem of ‘moral ignorance’ in thinking about the future.
GRAYLING, A.C. (1997) The future of moral values. London: Phoenix.
HORNER, D.S. (2004) The error of futurism: prediction and computer ethics. ACM SIGCAS Computers and Society, 32(7), January 2004.
HORNER, D.S. (2005a) Policies for a ‘nanosociety’: can we learn now from our future mistakes? In: G. Collste, S.O. Hansson, S. Rogerson and T.W. Bynum, eds. Looking back to the future: the ETHICOMP decade 1995–2005, 12th–15th September 2005, Linköping University, Sweden [CD-ROM]. Linköping: Centre for Applied Ethics, Linköping University.
HORNER, D.S. (2005b) Anticipating ethical challenges: is there a coming era of nanotechnology? Ethics and Information Technology, 7, pp. 127–138.
MOOR, J. (2005) Why we need better ethics for emerging technologies. Paper presented at Ethics of New Information Technology, International Conference of Computer Ethics: Philosophical Enquiry, 17–19 July 2005, University of Twente, Enschede, The Netherlands.
PARFIT, D. (1985) Reasons and persons. Oxford: Oxford University Press.
POPPER, K. (1977) The open society and its enemies: Volume 2, Hegel and Marx. London: Routledge.
SAXL, O. (2005) Nanotechnology – a key technology for the future of Europe. [Report] Institute of Nanotechnology, UK, for the European Commission Expert Group on Key Technologies for Europe, July 2005.
TOULMIN, S. (1969) The uses of argument. Cambridge: Cambridge University Press.
WILLIAMS, B. (1981) Moral luck: philosophical papers 1973–1980. Cambridge: Cambridge University Press.