Artificial intelligence begins to grow up

By Terrell Ward Bynum

In the 1950s and 1960s, overconfident computer scientists working in the new field of “artificial intelligence” (AI) predicted that computerized robots and other “artificially intelligent” devices would soon catch up to – or even surpass – human beings in a wide range of skills and activities. Within two or three decades, said the most optimistic AI researchers, artificially intelligent devices would be able to pass the famous “Turing Test” by conversing with humans over teletype machines so successfully that the humans would not realize they were talking to computers rather than to people. Some predictions were quite remarkable: robots would be on the moon or Mars or other planets, deciding for themselves where to go and what to do, and carrying on conversations with humans back on earth; robot butlers would fetch and carry objects, cook meals, and clean the house for their human owners; AI doctors would diagnose diseases and prescribe and administer medicines and therapies. At the peak of this optimism in the 1970s, a number of AI companies were founded and invested many millions of dollars in various AI projects. By the end of the 1980s, however, none of the optimistic predictions had come true, and most of the AI companies had gone out of business. An “AI Winter” had set in!

Two recent articles in the New York Times, however, indicate that an “AI Spring” is emerging, because the past few years have witnessed startling progress in artificial intelligence. One of these articles is “Intelligent Beings in Space!” by Kenneth Chang (NYT, May 30, 2006, pages F1 and F4); the other is “Brainy Robots Start Stepping into Daily Life” [a front-page story!] by John Markoff (NYT, July 18, 2006, pages A1 and C4). According to these articles, faster, more powerful, and cheaper computers, plus a dramatically better understanding of how the human brain functions, have produced computerized devices that are beginning to fulfill some of the optimistic predictions of the 1950s and 1960s. As John Markoff explains:

scientists say that after a lull, artificial intelligence has rapidly grown far more sophisticated. Today some scientists are beginning to use the term cognitive computing, to distinguish their research from an earlier generation of artificial intelligence work. What sets the new researchers apart is a wealth of new biological data on how the human brain functions.

The two newspaper articles describe a number of new artificially intelligent devices that already exist or are on the drawing board. For example, the two United States “Mars rovers” – the Spirit and the Opportunity – are currently moving around on Mars, deciding on their own where and how to travel from place to place, when to engage in experiments and observations, and which observations to send back to earth for further analysis. Another example is the Earth-Observing-1 satellite, which reprograms itself to take account of new targets to observe, rearranges its schedule of tasks to make room for the new observations, and then informs scientists on earth what it has done. One planet-roving explorer currently on NASA’s drawing board would crawl around on rough terrain, learning from experience like a child in a playpen. As Kenneth Chang explains in his article:

To achieve those abilities, the machine would need sensors to observe its surroundings and then use the best mode of locomotion. While some safety rules might be explicitly programmed – the equivalent of telling a child “Do not cross a busy road” – the scientists also will put in programming that allows the robot to learn its behavior through trial and error.

One possible space mission that NASA is working on is a blimp that would float above Saturn’s moon Titan, surveying the moon’s surface below and taking various scientific measurements to send back to earth. A message from Titan to earth would take about an hour and a half to arrive, and a reply would take just as long. In such a circumstance – three hours between questions and answers – the blimp would have to make and carry out many decisions on its own, rather than waiting for orders from earth. Other examples discussed in the two articles include an artificially intelligent car that traveled 132 miles on a desert road in October 2005 without human intervention, and an electronic lifeguard assistant with an underwater visual system for spotting drowning swimmers.

These and many other new developments in artificial intelligence raise anew some ethical questions that the American philosopher-scientist Norbert Wiener posed in 1950 in his book The Human Use of Human Beings: when humans begin to build machines that make decisions and carry them out on their own, what ethical rules or values should those machines uphold, and how can we construct the machines in a way that guarantees they behave as we want them to behave? We need a “robot ethics” before we travel very far down the road of artificial intelligence!

Two contemporary scholars who have recently considered these and other important issues are Luciano Floridi and J. W. Sanders, in their article “On the Morality of Artificial Agents,” Minds and Machines, 2004, 14:3, pp. 349–379.