The idea of the technological singularity – the moment at which intelligence embedded in silicon surpasses human intelligence – is a matter of great interest and fascination. To the mind of a layperson, it is at once a source of wonder and apprehension. To those adept in the areas of technology and artificial intelligence, it is almost irresistibly attractive. On the other hand, it is an idea that rests on several assumptions about the nature of human intelligence that are problematic and have long been the subject of debate.
This paper discusses the major proposals, originating mainly in the artificial intelligence community, concerning the nature of the technological singularity, its inevitability, and the stages of progress toward the event itself. Attention is given to the problems raised by the concept of the singularity and the controversy that has surrounded the charting of milestones on the path to its realization.
Defining the Technological Singularity
The technological singularity is best defined as a point in time when a combination of computer hardware and artificial intelligence algorithms matches or exceeds the computational ability of the human brain. In defining this event, great emphasis is placed on the importance of advances in computational potential as well as in artificial intelligence and modeling techniques. It is proposed that such an event would have a staggering effect on humanity to an extent that is difficult, if not impossible, to predict. Once this point has been reached, the concept of “recursive self-improvement” would allow technology to improve upon its own level of intelligence at a perpetually accelerating pace.
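The compounding character of recursive self-improvement can be illustrated with a toy model. The growth rate and the target multiple below are arbitrary illustrative assumptions, not figures drawn from the literature:

```python
# Toy model of recursive self-improvement: each improvement cycle
# multiplies the system's intelligence by a fixed factor, and the
# gain therefore compounds across cycles.
# All parameters are arbitrary illustrative assumptions.

def cycles_until(target_multiple, gain_per_cycle=0.5):
    """Count improvement cycles until intelligence reaches
    target_multiple times its starting level, with each cycle
    multiplying intelligence by (1 + gain_per_cycle)."""
    intelligence = 1.0
    cycles = 0
    while intelligence < target_multiple:
        intelligence *= 1.0 + gain_per_cycle
        cycles += 1
    return cycles

# At a 50% gain per cycle, a thousandfold increase takes only
# eighteen cycles of compounding.
print(cycles_until(1000.0))  # 18
```

The point of the sketch is only that exponential compounding reaches any fixed threshold in a modest number of cycles, which is why proponents argue the post-singularity trajectory is hard to predict.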
Difficulties in Pinpointing the Singularity and Its Milestones
One of the largest challenges in defining the technological singularity is that it is not an instantaneous, immediately measurable event. (For the purpose of this abstract, however, let us refer to the singularity as an event, even though estimates of its occurrence are always expressed in terms of an interval of time.) Advances in both hardware and software must be coordinated in a manner that allows artificial intelligence to supersede human intellect. Thus, identifying and measuring the events leading to this point is a nontrivial task. In a series of articles and books, Ray Kurzweil has made a multitude (147 at last count) of predictions that provide some guidance for measuring progress toward the technological singularity. Although most of these estimates do not consist of steps taken explicitly or directly toward the event, they define advancements that are side effects of technological milestones along the way.
The Hardware Problem
In order to reach the technological singularity, humanity must be capable of producing computer hardware that can match or exceed the computational power of the human brain. Many feel that progress in nanotechnology will pave the way for this outcome. There are several projections as to the number of computations per second and the amount of memory required to reach this computational ability. Moore’s Law is often invoked in reference to the timeline for development of processors with the necessary capabilities and Kurzweil has made several bold statements that suggest that this law is applicable beyond the domain of integrated circuitry into the realm of artificial intelligence.
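A rough Moore's-Law timeline can be sketched numerically. Both figures used below are hedged assumptions: the brain-equivalence target of roughly 1e14 operations per second is in the vicinity of Moravec's estimate, the starting capability and the 18-month doubling period are hypothetical, and all are contested in the literature:

```python
import math

# Sketch of a Moore's-Law projection: how many years of steady
# doubling separate a given machine from a brain-equivalence target.
# The target (~1e14 ops/s, near Moravec's estimate), the starting
# point, and the doubling period are all illustrative assumptions.

def years_to_reach(target_ops, current_ops, doubling_years=1.5):
    """Years until capability reaches target under steady doubling."""
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_years

# From a hypothetical 1e10 ops/s machine to the 1e14 target:
print(round(years_to_reach(1e14, 1e10), 1))  # 19.9
```

Note how sensitive such projections are to the chosen doubling period, which is one reason the published timelines diverge.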
The Software Problem
Computer software is also a limiting factor in the realization of the technological singularity. In order to achieve superhuman intelligence as conceived in the definition of the singularity, efficient software capable of modeling and emulating every element of the human brain must be constructed and operate properly. Kurzweil claims that while this is a significant challenge, it will be completed within a reasonable period of time. This is a view with which Vernor Vinge disagrees, citing scalability problems within the field of software engineering. The compatibility of the projected software with the targeted advanced hardware is also a matter of concern.
Reconciling a Miscellany of Predictions
Predictions as to the timing and nature of the technological singularity have been made by Vernor Vinge, Nick Bostrom, Hans Moravec, and Ray Kurzweil. These are evaluated and their merits and deficiencies considered. Several of these predictive models of the technological singularity use similar metrics in their attempt at formulating a target time period for the event. In this section, differences in the predicted trajectory that may be the result of small variances in base assumptions related to time-biased inaccuracies are discussed. Recalculating the predictions with the best current figures may provide a more consistent set of singularity timeframe estimates or may reveal fundamental inconsistencies in the assumptions on which these estimates are predicated.
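The sensitivity of a predicted date to small variances in base assumptions can be made concrete with a short sketch. All figures below are hypothetical; the point is only that modest disagreement over the doubling period shifts the projected year by a decade or more:

```python
import math

# Illustration of how small variances in base assumptions shift a
# projected singularity date. The starting year, starting capability,
# and target are all hypothetical placeholder figures.

def predicted_year(start_year, start_ops, target_ops, doubling_years):
    """Project the year a capability target is reached under
    steady doubling from a given starting point."""
    doublings = math.log2(target_ops / start_ops)
    return start_year + doublings * doubling_years

# Same target and starting point, slightly different doubling periods:
for period in (1.0, 1.5, 2.0):
    year = predicted_year(2000, 1e10, 1e16, period)
    print(f"doubling every {period} years -> {year:.0f}")
# doubling every 1.0 years -> 2020
# doubling every 1.5 years -> 2030
# doubling every 2.0 years -> 2040
```

A half-year disagreement in the assumed doubling period moves the projected date by roughly ten years here, which suggests why recalculating the published estimates on a common set of base figures is worthwhile.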
Some Discrepant Views of the Singularity
The possibility of an event like the technological singularity rests on the assumption that all human intelligence is reducible to computing power and that humanity will learn enough about the function of the human mind to “build one” in silicon. This is a view with which many thinkers, including reputable computer scientists like Joseph Weizenbaum, have taken strenuous issue. Thus, in Computer Power and Human Reason, he asks, “What is it about the computer that has brought the view of man as machine to a new level of plausibility? … Ultimately a line dividing human and machine intelligence must be drawn. If there is no such line, then advocates of computerized psychotherapy may be merely heralds of an age in which man has finally been recognized as nothing but a clock-work.” This section explores Weizenbaum’s question through a review of the chronology, elements, and participants in this controversy.
There is an understandable tension between enthusiastic projections of the advance of the techniques of artificial intelligence and the sober recognition of real limitations in our current understanding of human intelligence. This highlights the importance of exercising ethical and responsible care in formulating further predictions based on advances in this area of computing. It is underscored by Weizenbaum’s contention that, “The computer professional … has an enormously important responsibility to be modest in his claims.” Failure to do so in this particular area of interest has the potential to generate unrealistic expectations not only within the field but also, through sensational treatment by the media, in the population as a whole.
References
Bostrom, N. 1998. How Long Before Superintelligence? International Journal of Futures Studies, Vol. 2, http://www.nickbostrom.com/superintelligence.html.
Kurzweil, R. 2010. How My Predictions Are Faring. Kurzweil Accelerating Intelligence. http://www.kurzweilai.net/predictions/download.php.
Kurzweil, R. 2000. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Group, New York.
Kurzweil, R. 2006. The Singularity is Near: When Humans Transcend Biology. Penguin Group, New York.
Minsky, M. 1994. Will robots inherit the earth? Scientific American 271(4): 108-11.
Moravec, H. 1998. When Will Computer Hardware Match the Human Brain? Journal of Transhumanism, Vol. 1, http://www.transhumanist.com/volume1/moravec.htm.
Vinge, V. 1993. Technological singularity. VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, http://www.frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html.
Weizenbaum, J. 1976. Computer Power and Human Reason. W. H. Freeman and Company, San Francisco.
Weizenbaum, J. 1972. On the Impact of the Computer on Society: How does one insult a machine? Science 176: 609-14.