It is well known how important the concept of “ignorance” is in the history of philosophy: it is enough to mention Socrates’ awareness of not-knowing, Nicholas of Cusa’s De docta ignorantia, and, in more recent times, Friedrich Hayek’s work and his appeal to British empiricists such as John Locke and John Stuart Mill.
This very concept of ignorance, however, has seldom been discussed by philosophers dealing with technology. Indeed, they mostly speak about technology in terms of knowledge and power and, more specifically, of savoir-faire. Even when a Heideggerian approach to the whole issue is adopted – by denouncing the fate of a civilization driven by the blind force of technology – the crucial role of ignorance is largely overlooked.
My aim here is to study technology, and hence the possibility suggested by the topic of our Ethicomp meeting – living, working, and learning beyond technology – following an evolutionary approach. Technology is only one of the fundamental ways through which we adapt to our environment by reducing its complexity. Whatever informational complexity may be found in technology, it is necessarily smaller than the environmental complexity that technology tackles. Consequently, the purpose of the paper is to analyze this structural ignorance which lies behind technology, from a twofold perspective.
First of all, I look at what could be called specific ignorance: this occurs when we know that we do not know many things about the definite (side-)effects of technologies such as bio-devices like GMOs, or mobile phones and radio technologies. It suffices to mention the five “big challenges” pointed out by fourteen renowned experts in nanotechnology and security in “Nature” in November 2006, or the six points stressed by the International Commission for Electromagnetic Safety in September 2006. These are good examples of ignoring specific facets of a far more complex problem involving technology.
Secondly, it is crucial to examine what could be called a-specific ignorance, namely when we do not even know what we do not know. This was the case, for instance, with the CFCs in our fridges – in use since the 1930s and outlawed only in the 1980s and 1990s – or with the use of asbestos in buildings (banned in the EU but still used in Canada as well as in the U.S.).
The paradox of ignorance, both specific and a-specific, has produced some remarkable theoretical results: for instance, Plato’s dialectics or Ferguson’s theory of social evolution. More recently, the fundamental role of ignorance in technology has been explored in depth by research on science assessment and even on the reasons why civilizations collapse. By considering both philosophical and scientific approaches to the issue, it is therefore clear why we are driven to reflect – and hence to live, work, or learn – beyond technology. From an evolutionary perspective, the aim is always to reduce the complexity of the environment, which, however, exceeds human capacities to compress all the information required by this continuous adaptive attempt. Whatever kind of ignorance we have to cope with – because of the asymmetry between environmental and systemic information, even when technologically processed – it becomes necessary to focus on a paramount issue in ethics and, more specifically, in computer ethics.
The final section of the paper analyzes the role of ignorance in our decisions by considering the connection between the principle of prevention and/or precaution and the principle of openness. On the one hand, we should refrain from action because of our ignorance; on the other hand, ignorance suggests we should engage in action. So, what distinguishes the former principle from the latter cannot be ignorance itself, since sometimes it is better to be cautious, especially when we know what we do not know, and, at other times, it simply works the other way round. Thus, what allows us to strike a balance between these two principles is the amount of information involved in our decisions. The principle of prevention is to be applied when it is likely that a technological choice would reduce the complexity of the system, e.g., by killing human beings or destroying the biological support of our societies; the principle of openness, on the contrary, should be applied when ignorance is a mere pretext to limit the potentialities of a new technology. A good example of the principle of openness was offered a decade ago, in 1997, when the U.S. Supreme Court declared part of the Communications Decency Act unconstitutional “due to the particular nature of the medium,” i.e., the Internet. The right equilibrium between the principle of openness and the precautionary principle has yet to be found in many cases: all in all, we know there are no magic bullets for reaching such an equilibrium. But, thanks to this known unknown, it is nevertheless clear why we are already moving beyond technology. The issue at hand is ignorance and its ethical role in evolution.