Technology and Control under (Moral) Uncertainty

Paul Sollie


This paper is part of initial explorations for a research project entitled “Towards an ethics of Information and Communication Technology (ICT): Controlling Technology Development,” which belongs to the broader research project “The ethical aspects of Towards Ultrafast Communication (TUC).” Research aimed at improving means of communication does not seem very risky in itself, let alone likely to raise any concrete moral dilemmas. But what about future consequences and applications of TUC? Ethical aspects, impacts, and future consequences of this development in particular, and of technology development in general, are often not apparent at the outset. A common feature of complex technologies is that, while they are under design, one is ignorant and uncertain of their possible applications and consequences. Consider, for example, the rise of the Internet. ARPANET, the Internet’s predecessor, was designed for secluded academic and military information transfer but strayed from its original purpose into a multipurpose worldwide network. The telephone may serve as another example. Alexander Graham Bell designed the telephone for commercial use and stated that he could not imagine people ever using the device for private purposes. He could not have been proven more wrong: nowadays, most people carry cell phones and can be contacted 24/7.

A common feature of these technologies is that, while they are under design, one is ignorant and uncertain of their possible applications and therefore of their consequences. I will focus on technologies whose development proceeds in a complex manner and is hard to predict, as in the examples above. The uncertainty of future consequences relates not only to the uses of these technologies over time, but also to their impact on society. Although technology is easily one of the most pervasive and consequential features of modern society, an ethics of technology is, surprisingly, still in its infancy. Important reasons for this ‘underdevelopment’ of a methodology for morally evaluating technology development lie in its complex, uncertain, dynamic, and large-scale character, which seems to resist human control. The (moral) uncertainty surrounding technology development is one of the problems an ethics of technology must tackle. The leading question of my TUC research on a methodology for evaluating technology development is: ‘How can we formulate a methodological design for an ethics of technology that is able to deal with the complexity of aspects concerning technology development?’ In this paper I will elaborate on one issue within this larger field of problems, known as Collingridge’s Control Dilemma, and its implications for an ethics of technology.

Technology development has always been accompanied by promises of social and human progress, promises of an ever-increasing, controlled improvement of society. These promises include, for example, increased quality of life, safety, comfort, autonomy, and access to information; but if the development of technologies is itself beyond control, these promises cannot be secured. The situation we face concerning the control of technology development has been described by sociologist David Collingridge:

“Attempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.” (Collingridge, The Social Control of Technology, 1980: 19)

I argue that Collingridge’s Control Dilemma is not a genuine dilemma after all and that, as a result, it is possible to do justice to both sides of his control problem. On the basis of this analysis I discuss some fundamental problems an ethics of technology must deal with. Central to the control ‘dilemma’ is the lack of information for predicting and controlling the ramifications of technology development. This poses serious problems for an ethics of technology, because moral evaluation traditionally proceeds on the basis of reliable information. Besides the lack of information for controlling future consequences of technology, the challenge for an ethics of technology is complicated by the fact that the progress of technology from design to black-boxed product does not seem to evolve according to unequivocal linear causality and is surrounded by various factors of uncertainty, e.g. complexity, dynamics, and intransparency. Moreover, technology development is not apt for moral evaluation in the traditional sense, because we face not only uncertainty about future consequences, impacts, and applications, but also moral uncertainty about how to deal with pluralist considerations in the appraisal of technology. By formulating these problems we come across the limits of traditional ethics. To sum up, this paper will discuss and analyse the question of controlling technology development and the problems it raises for the quest for an ethics of technology.