AUTHOR
David Sanford Horner (UK)
ABSTRACT
In a previous Ethicomp paper I contrasted models of individual decision making in information ethics and argued for an incremental approach as a framework for understanding the relationship between ethics and action (Horner, 1999). In this paper I consider how we ought to make rational, ethical and collective strategic decisions. How can we promote the ethical control of increasingly complex and large-scale information and communication technologies? The paper argues that ‘justificationist’ approaches to strategic decision making fail and that a ‘fallibilist’ methodology is required for making strategic and ethically grounded decisions under ignorance.
Simon Rogerson suggests that an important aim of Computer Ethics is to reduce the probability of the unforeseen effects of computer technologies (Rogerson, 2002, p.160). Similarly, Terrell Ward Bynum claims that the overall goal of Computer Ethics is to integrate human values into the evolution of the technology in such a way that the technology protects rather than damages those values (Bynum, 1997). It seems to me that these aspirations come up against a fundamental dilemma, at least at the macro-level, in making ethical public policy decisions. This dilemma, following Collingridge (1980), might be called ‘the dilemma of ethical control’. The first horn of the dilemma is that at an early stage in the evolution of a technology, when it may be more easily regulated and controlled, we rarely know enough about its harmful consequences to justify regulation. The second horn is that once the harmful consequences are known it is often too late to take effective ameliorating action, or such action is slow and costly.
The traditional response to this dilemma is to seek to forecast probable unwanted social and moral impacts. For example, Moor (1985) suggests that it is precisely the job of computer ethicists to formulate and justify policies for the ethical use of computer technologies. However, I want to argue that the search for justification is deeply problematic on empirical, logical and moral grounds. At the core of the search for justification is the claim that a decision is rational to the extent that it can be justified by the decision makers (Collingridge, 1987, p.117). From an empirical point of view, the demands of traditional, rational models of strategic decision making cannot be met: such models assume perfect information, or a level of information for which the time and costs of collection are prohibitive. The logical problem is the paradox of rationality: full information relates only to the past, never to the future, so choices for the future are made under conditions of ignorance (Hayward and Preston, 1999).
This poses particular problems for decision making which takes a utilitarian approach to public values. If we cannot predict the consequences of big decisions, especially in the planning of large-scale technological developments (national air traffic control systems or health information systems, for example), then any attempt to justify decisions on the basis of some calculus of utilities is doomed to failure. In addition, justification of particular values may prove elusive for a number of reasons: values founder on conflict or disagreement, and preferences may change and be subject to different interpretations. Finally, in the search for justification of particular value judgements or preferences by reference to further moral principles there is the danger of an infinite regress of values (Collingridge, 1980, p.162).
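To make the target of this criticism concrete, the ‘calculus of utilities’ can be stated in its textbook expected-utility form (a standard formulation supplied here for illustration; it is not drawn from the works cited):

\[
EU(a) = \sum_{i} p(o_i \mid a)\, u(o_i)
\]

Here the o_i are the possible outcomes of a policy a, p(o_i | a) is the probability of each outcome given that policy, and u is a utility function over outcomes; the ‘justified’ choice is the policy that maximises EU. Under the conditions of ignorance just described, neither the outcome set nor the probabilities are available for large-scale technological developments, so the calculus cannot even be set up, let alone serve as a justification.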
The situation we arrive at with respect to conditions of ignorance and the making of ethical decisions was foreshadowed in the work of the British Intuitionist philosopher W.D. Ross. He argued that in any given case it is possible to perform actions which are right in one or other of four senses:
(a) an act which is in fact right in the situation as it in fact is;
(b) an act which the agent thinks right in the situation as it in fact is;
(c) an act which is in fact right in the situation as the agent thinks it to be;
(d) an act which the agent thinks right in the situation as he thinks it to be. (Hudson, 1983, p.94)
Ross held that we can have a duty to perform only (d), since the information requirements for (a) to (c) are unobtainable (in practice, even if not in principle). In a strong sense, therefore, we cannot ‘know’ what the right action might be. Nevertheless, public policy and organisational strategies clearly need to be formulated on ethical grounds.
The usefulness of Ross’s position is that it maintains an important distinction between what really is the case and what is merely thought to be the case. Dogmatic policy makers, with their armouries of forecasting techniques, claim to know what is the case. However, we should, as we have seen, be sceptical of such claims to justification. In response, I will argue that we ought to adopt a ‘fallibilist’, critical methodology. Since none of our values or preferences can be justified, they should all be open to criticism (Collingridge, 1987, p.125). This requires the constant testing of preferred values against morally relevant facts.
REFERENCES
Bynum, T.W., (1997), Global information ethics and the information revolution. In: T.W. Bynum and J.H. Moor, eds. The digital phoenix: how computers are changing philosophy. Oxford: Blackwell, pp. 274–291.
Collingridge, D., (1980), The social control of technology. London: Pinter.
Collingridge, D., (1987), Criticism – its philosophical structure. Lanham: University Press of America.
Hayward, T. and Preston, J., (1999), Chaos theory, economics and information: implications for strategic decision-making. Journal of Information Science, 25 (3), pp. 173–182.
Horner, D.S., (1999), Perfection and the idea of moral progress: decision making in information ethics. In: A. D’Atri, et al., Proceedings of Ethicomp ’99, Looking to the Future of the Information Society. Luiss Guido Carli University, Rome, 6–8 October 1999. [CD Rom] Rome: Luiss Guido Carli.
Hudson, W.D., (1983), Modern moral philosophy. 2nd ed. London: Macmillan.
Moor, J., (1985), What is computer ethics? Metaphilosophy, 16 (4), pp. 266–275.
Rogerson, S., (2002), Computers and society. In: R.E. Spier, ed. Science and technology ethics. London: Routledge.