AUTHOR
Richard Volkman
Southern Connecticut State University
Dept. of Philosophy
Research Center on Computing and Society
New Haven, CT USA
ABSTRACT
Humans are limited creatures. We are limited in our knowledge, we are limited in our power, and, perhaps most distressingly, we are limited in our goodness. This is surely beyond doubt. But it is equally certain that we must not resign ourselves to our own failings. To the contrary, it is a mark of human excellence never to regard one's own shortcomings with indifference or resignation. Unfortunately, there is a deep tension between these two undeniable truths. On the one hand, we must pursue our own ideal of perfection and strive to become godlike in wisdom, power, and goodness. On the other hand, we must recognize the dangers of overreaching, of playing God. Even though we have rightly cast aside the theological assumptions that once grounded this humility, there remains widespread recognition of those dangers.
The Computer Revolution has brought with it a further urgency that we must once and for all learn to live within our limits. Through robotics, advanced artificial intelligence, nanotechnology, and genetic engineering, we are perhaps reaching a point where playing God has become more than a literary device. This has led even some leading architects of the Computer Revolution, including Bill Joy, to call for a moratorium on the further development of such technology. These critics maintain that it is hubris to believe we know enough or are responsible enough to pursue such technologies; we can and must stop the tide of technological change before it is too late.
I argue that this is wrong. “Technological determinism,” the view that what can be done will be done, follows ineluctably from our own “technological hubris.” Technological hubris, in turn, is not an accidental feature of the human condition. Rather, it is an expression of an ethical need to transcend limits, to improve ourselves, to correct our own failings. If we choose to embrace our ability to know and understand, then it is ethically inevitable that we shall act on the knowledge that results. That is, it would be unethical to recognize a means of improvement and yet fail to pursue it. Consequently, we cannot ethically deny ourselves technology without denying ourselves knowledge and understanding. Since it is precisely this ability to know and understand that locates us in the natural order of things, that fixes our place in the Great Chain of Being, so to speak, we cannot deny ourselves knowledge and understanding without behaving contrary to our own excellence. At the same time, however, we recognize that we are not atop the Great Chain of Being. We are not perfect, even while our peculiar nature gives us the means to achieve greater perfection. Herein lies the paradox: beings that know their place in the universe cannot consistently will to respect and keep their place in the universe.
This paradox speaks through a long literary tradition that warns against succumbing to the temptations of technology and our own cleverness. In a sense, concern about the ethics of technology has been with us since ancient times; the history of Computer Ethics begins with Aeschylus, not Wiener. In the stories of Prometheus, Faust, Icarus, Frankenstein, and Genesis, we are at once informed of the powerful temptation to embrace knowledge and technology and warned against the tragic consequences of such overreaching. These lessons are echoed in more recent works such as Blade Runner and The Matrix. We find ourselves both admiring and disparaging the Promethean figure, whose only crime is to aspire to a greatness beyond that permitted him.
We admire the virtues of creativity and inquisitiveness, while deploring the vicious hubris that presumes we have the wisdom to play God. There is no doubt that we do not have such wisdom. After all, there is no way to appreciate the full consequences of our actions when, as Bill Joy points out, “The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.” We are not that smart. No one is that smart. Certain contemporary works, such as Blade Runner, suggest that even God himself is not smart enough to play God.
As the literary evidence suggests, the tension between recognizing our own limits and attempting to transcend them is a permanent part of the human condition. While it is not logically impossible to turn our backs on this part of our nature (indeed, it is certain that some individuals and perhaps some whole cultures have accomplished just that), it is nonetheless ethically impossible to recognize a means to improvement and not pursue it. Death, ignorance, illness, and poverty are evils we may hope to alleviate through technological means. Are these benefits worth the risks?
It is hubris again to suppose we know the answer to this question, and this is why it is ultimately wrong to pursue the moratorium on technology advocated by Bill Joy, Kirkpatrick Sale, or the Unabomber. Instead of eschewing technology as a means to greater human perfection, we are better off intelligently managing its dangers. Since the worst consequences of human arrogance result when one person or group of persons is able to impose a vision of perfection on the rest of us, our best defense against these consequences is to embrace free, open, and decentralized systems for developing technologies. By empowering individuals, not states, and certainly not technocratic “experts,” we not only avoid the pitfalls of permitting one error to harm us all, but also ensure that fewer errors will be made, since a decentralized system of free choices permits the greatest possible harnessing of local knowledge. While it is certainly arrogant for any particular individual to suppose that she has the knowledge to determine the best outcomes for the whole human species, or even some significant part of it, the wisdom of the whole system of free individual choices is our best insurance against becoming an exclamation point at the end of a grand Promethean tragedy.