What Is Computer Ethics?

* This article first appeared in Terrell Ward Bynum, ed., Computers & Ethics, Blackwell, 1985, pp. 266–275. (A special issue of the journal Metaphilosophy.)

A Proposed Definition

Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important.

On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves.

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology.

An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. But then a number of questions which do not have obvious answers emerge. What is a computer program? Is it really intellectual property which can be owned or is it more like an idea, an algorithm, which is not owned by anybody? If a computer program is intellectual property, is it an expression of an idea that is owned (traditionally protectable by copyright) or is it a process that is owned (traditionally protectable by patent)? Is a machine-readable program a copy of a human-readable program? Clearly, we need a conceptualization of the nature of a computer program in order to answer these kinds of questions. Moreover, these questions must be answered in order to formulate a useful policy for protecting computer programs. Notice that the conceptualization we pick will not only affect how a policy will be applied but to a certain extent what the facts are. For instance, in this case the conceptualization will determine when programs count as instances of the same program.

Even within a coherent conceptual framework, the formulation of a policy for using computer technology can be difficult. As we consider different policies we discover something about what we value and what we don’t. Because computer technology provides us with new possibilities for acting, new values emerge. For example, creating software has value in our culture which it didn’t have a few decades ago. And old values have to be reconsidered. For instance, assuming software is intellectual property, why should intellectual property be protected? In general, the consideration of alternative policies forces us to discover and make explicit what our value preferences are.

The mark of a basic problem in computer ethics is one in which computer technology is essentially involved and there is an uncertainty about what to do and even about how to understand the situation. Hence, not all ethical situations involving computers are central to computer ethics. If a burglar steals available office equipment including computers, then the burglar has done something legally and ethically wrong. But this is really an issue for general law and ethics. Computers are only accidentally involved in this situation, and there is no policy or conceptual vacuum to fill. The situation and the applicable policy are clear.

In one sense I am arguing for the special status of computer ethics as a field of study. Applied ethics is not simply ethics applied. But, I also wish to stress the underlying importance of general ethics and science to computer ethics. Ethical theory provides categories and procedures for determining what is ethically relevant. For example, what kinds of things are good? What are our basic rights? What is an impartial point of view? These considerations are essential in comparing and justifying policies for ethical conduct. Similarly, scientific information is crucial in ethical evaluations. It is amazing how many times ethical disputes turn not on disagreements about values but on disagreements about facts.

On my view, computer ethics is a dynamic and complex field of study which considers the relationships among facts, conceptualizations, policies and values with regard to constantly changing computer technology. Computer ethics is not a fixed set of rules which one shellacs and hangs on the wall. Nor is computer ethics the rote application of ethical principles to a value-free technology. Computer ethics requires us to think anew about the nature of computer technology and our values. Although computer ethics is a field between science and ethics and depends on them, it is also a discipline in its own right which provides both conceptualizations for understanding and policies for using computer technology.

Though I have indicated some of the intellectually interesting features of computer ethics, I have not said much about the problems of the field or about its practical importance. The only example I have used so far is the issue of protecting computer programs which may seem to be a very narrow concern. In fact, I believe the domain of computer ethics is quite large and extends to issues which affect all of us. Now I want to turn to a consideration of these issues and argue for the practical importance of computer ethics. I will proceed not by giving a list of problems but rather by analyzing the conditions and forces which generate ethical issues about computer technology. In particular, I want to analyze what is special about computers, what social impact computers will have, and what is operationally suspect about computing technology. I hope to show something of the nature of computer ethics by doing some computer ethics.

The Revolutionary Machine

What is special about computers? It is often said that a Computer Revolution is taking place, but what is it about computers that makes them revolutionary? One difficulty in assessing the revolutionary nature of computers is that the word “revolutionary” has been devalued. Even minor technological improvements are heralded as revolutionary. A manufacturer of a new dripless pouring spout may well promote it as revolutionary. If minor technological improvements are revolutionary, then undoubtedly ever-changing computer technology is revolutionary. The interesting issue, of course, is whether there is some nontrivial sense in which computers are revolutionary. What makes computer technology importantly different from other technology? Is there any real basis for comparing the Computer Revolution with the Industrial Revolution?

If we look around for features that make computers revolutionary, several features suggest themselves. For example, in our society computers are affordable and abundant. It is not much of an exaggeration to say that currently in our society every major business, factory, school, bank, and hospital is rushing to utilize computer technology. Millions of personal computers are being sold for home use. Moreover, computers are integral parts of products which don’t look much like computers, such as watches and automobiles. Computers are abundant and inexpensive, but so are pencils. Mere abundance and affordability don’t seem sufficient to justify any claim to technological revolution.

One might claim the newness of computers makes them revolutionary. Such a thesis requires qualification. Electronic digital computers have been around for forty years. In fact, if the abacus counts as a computer, then computer technology is among the oldest technologies. A better way to state this claim is that recent engineering advances in computers make them revolutionary. Obviously, computers have been immensely improved over the last forty years. Along with dramatic increases in computer speed and memory there have been dramatic decreases in computer size. Computer manufacturers are quick to point out that desktop computers today exceed the engineering specifications of computers which filled rooms only a few decades ago. There has also been a determined effort by companies to make computer hardware and computer software easier to use. Computers may not be completely user-friendly but at least they are much less unfriendly. However, as important as these features are, they don’t seem to get to the heart of the Computer Revolution. Small, fast, powerful and easy-to-use electric can openers are great improvements over earlier can openers, but they aren’t in the relevant sense revolutionary.

Of course, it is important that computers are abundant, less expensive, smaller, faster, and more powerful and friendly. But these features serve as enabling conditions for the spread of the Computer Revolution. The essence of the Computer Revolution is found in the nature of a computer itself. What is revolutionary about computers is logical malleability. Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs, and connecting logical operations. Logical operations are the precisely defined steps which take a computer from one state to the next. The logic of computers can be massaged and shaped in endless ways through changes in hardware and software. Just as the power of a steam engine was a raw resource of the Industrial Revolution, so the logic of a computer is a raw resource of the Computer Revolution. Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. The driving question of the Computer Revolution is “How can we mold the logic of computers to better serve our purposes?”
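To make the idea of logical malleability concrete, here is a small illustrative sketch in Python (the code and its names are mine, not Moor's). A single fixed "machine", a loop that applies precisely defined steps to carry a state from one value to the next, is molded into two unrelated tools purely by changing the logic supplied to it:

```python
# One fixed "machine" (a loop applying precisely defined steps),
# shaped into unrelated tools purely by changing its logic.

def run_machine(transition, state, inputs):
    """Carry the machine from state to state by applying the
    supplied logical operation to each input in turn."""
    for symbol in inputs:
        state = transition(state, symbol)
    return state

# Molded into a parity checker over a stream of bits ...
parity = run_machine(lambda s, bit: s ^ bit, 0, [1, 0, 1, 1])

# ... and, with different logic, into an adding machine.
total = run_machine(lambda s, n: s + n, 0, [3, 4, 5])

print(parity, total)  # prints: 1 12
```

Nothing in the machine itself privileges one of these uses over the other; the shaping is done entirely in the supplied logic.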

I think logical malleability explains the already widespread application of computers and hints at the enormous impact computers are destined to have. Understanding the logical malleability of computers is essential to understanding the power of the developing technological revolution. Understanding logical malleability is also important in setting policies for the use of computers. Other ways of conceiving computers serve less well as a basis for formulating and justifying policies for action.

Consider an alternative and popular conception of computers in which computers are understood as number crunchers, i.e., essentially as numerical devices. On this conception computers are nothing but big calculators. It might be maintained on this view that mathematical and scientific applications should take precedence over nonnumerical applications such as word processing. My position, on the contrary, is that computers are logically malleable. The arithmetic interpretation is certainly a correct one, but it is only one among many interpretations. Logical malleability has both a syntactic and a semantic dimension. Syntactically, the logic of computers is malleable in terms of the number and variety of possible states and operations. Semantically, the logic of computers is malleable in that the states of the computer can be taken to represent anything. Computers manipulate symbols but they don’t care what the symbols represent. Thus, there is no ontological basis for giving preference to numerical applications over nonnumerical applications.

The fact that computers can be described in mathematical language, even at a very low level, doesn’t make them essentially numerical. For example, machine language is conveniently and traditionally expressed in 0’s and 1’s. But the 0’s and 1’s simply designate different physical states. We could label these states as “on” and “off” or “yin” and “yang” and apply binary logic. Obviously, at some levels it is useful to use mathematical notation to describe computer operations, and it is reasonable to use it. The mistake is to reify the mathematical notation as the essence of a computer and then use this conception to make judgments about the appropriate use of computers.
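The semantic point can be given a similarly concrete form. In the following sketch (again my illustration, not Moor's), one and the same bit pattern is read as a number, as text, and as a sequence of machine states with arbitrarily chosen labels:

```python
# One and the same bit pattern, interpreted three different ways.
# The bits are neutral; the meaning is assigned by us.
bits = 0b0100100001101001  # two bytes: 0x48, 0x69

# 1. As an unsigned integer (the "number cruncher" reading).
as_number = bits                                   # 18537

# 2. As ASCII text (a nonnumerical reading of the same states).
as_text = bits.to_bytes(2, "big").decode("ascii")  # "Hi"

# 3. As sixteen on/off machine states with labels chosen freely.
labels = {0: "yin", 1: "yang"}
as_states = [labels[(bits >> i) & 1] for i in range(15, -1, -1)]

print(as_number, as_text, as_states[:4])
```

The numerical reading is correct, but nothing about the bits makes it privileged.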

In general, our conceptions of computer technology will affect our policies for using it. I believe the importance of properly conceiving the nature and impact of computer technology will increase as the Computer Revolution unfolds.

Anatomy of the Computer Revolution

Because the Computer Revolution is in progress, it is difficult to get a perspective on its development. By looking at the Industrial Revolution I believe we can get some insight into the nature of a technological revolution. Roughly, the Industrial Revolution in England occurred in two major stages. The first stage was the technological introduction stage which took place during the last half of the Eighteenth Century. During this stage inventions and processes were introduced, tested, and improved. There was an industrialization of limited segments of the economy, particularly in agriculture and textiles. The second stage was the technological permeation stage which took place during the Nineteenth Century. As factory work increased and the populations of cities swelled, not only did well-known social evils emerge, but, equally significantly, corresponding changes in human activities and institutions, ranging from labor unions to health services, occurred. The forces of industrialization dramatically transformed the society.

My conjecture is that the Computer Revolution will follow a similar two stage development. The first stage, the introduction stage, has been occurring during the last forty years. Electronic computers have been created and refined. We are gradually entering the second stage, the permeation stage, in which computer technology will become an integral part of institutions throughout our society. I think that in the coming decades many human activities and social institutions will be transformed by computer technology and that this transforming effect of computerization will raise a wide range of issues for computer ethics.

What I mean by “transformed” is that the basic nature or purpose of an activity or institution is changed. This is marked by the kinds of questions that are asked. During the introduction stage computers are understood as tools for doing standard jobs. A typical question for this stage is “How well does a computer do such and such an activity?” Later, during the permeation stage, computers become an integral part of the activity. A typical question for this stage is “What is the nature and value of such and such an activity?” In our society there is already some evidence of the transforming effect of computerization as marked by the kind of questions being asked.

For example, for years computers have been used to count votes. Now the election process is becoming highly computerized. Computers can be used to count votes and to make projections about the outcome. Television networks use computers both to determine quickly who is winning and to display the results in a technologically impressive manner. During the last presidential election in the United States [1984] the television networks projected the results not only before the polls in California were closed but also before the polls in New York were closed. In fact, voting was still going on in over half the states when the winner was announced. The question is no longer “How efficiently do computers count votes in a fair election?” but “What is a fair election?” Is it appropriate that some people know the outcome before they vote? The problem is that computers not only tabulate the votes for each candidate but likely influence the number and distribution of these votes. For better or worse, our electoral process is being transformed.

As computers permeate more and more of our society, I think we will see more and more of the transforming effect of computers on our basic institutions and practices. Nobody can know for sure how our computerized society will look fifty years from now, but it is reasonable to think that various aspects of our daily work will be transformed. Computers have been used for years by businesses to expedite routine work, such as calculating payrolls; but as personal computers become widespread and allow executives to work at home, and as robots do more and more factory work, the emerging question will be not merely “How well do computers help us work?” but “What is the nature of this work?”

Traditional work may no longer be defined as something that normally happens at a specific time or a specific place. Work for us may become less doing a job than instructing a computer to do a job. As the concept of work begins to change, the values associated with the old concept will have to be reexamined. Executives who work at a computer terminal at home will lose some spontaneous interaction with colleagues. Factory workers who direct robots by pressing buttons may take less pride in a finished product. And similar effects can be expected in other types of work. Commercial pilots who watch computers fly their planes may find their jobs to be different from what they expected.

A further example of the transforming effect of computer technology is found in financial institutions. As the transfer and storage of funds becomes increasingly computerized the question will be not merely “How well do computers count money?” but “What is money?” For instance, in a cashless society in which debits are made to one’s account electronically at the point of sale, has money disappeared in favor of computer records or have electronic impulses become money? What opportunities and values are lost or gained when money becomes intangible?

Still another likely area for the transforming effect of computers is education. Currently, educational packages for computers are rather limited. Now it is quite proper to ask “How well do computers educate?” But as teachers and students exchange more and more information indirectly via computer networks and as computers take over more routine instructional activities, the question will inevitably switch to “What is education?” The values associated with the traditional way of educating will be challenged. How much human contact is necessary or desirable for learning? What is education when computers do the teaching?

The point of this futuristic discussion is to suggest the likely impact of computer technology. Though I don’t know what the details will be, I believe the kind of transformation I am suggesting is likely to occur. This is all I need to support my argument for the practical importance of computer ethics. In brief, the argument is as follows: The revolutionary feature of computers is their logical malleability. Logical malleability assures the enormous application of computer technology. This will bring about the Computer Revolution. During the Computer Revolution many of our human activities and social institutions will be transformed. These transformations will leave us with policy and conceptual vacuums about how to use computer technology. Such policy and conceptual vacuums are the marks of basic problems within computer ethics. Therefore, computer ethics is a field of substantial practical importance.

I find this argument for the practical value of computer ethics convincing. I think it shows that computer ethics is likely to have increasing application in our society. This argument does rest on a vision of the Computer Revolution which not everyone may share. Therefore, I will turn to another argument for the practical importance of computer ethics which doesn’t depend upon any particular view of the Computer Revolution. This argument rests on the invisibility factor and suggests a number of ethical issues confronting computer ethics now.

The Invisibility Factor

There is an important fact about computers. Most of the time and under most conditions computer operations are invisible. One may be quite knowledgeable about the inputs and outputs of a computer and only dimly aware of the internal processing. This invisibility factor often generates policy vacuums about how to use computer technology. Here I will mention three kinds of invisibility which can have ethical significance.

The most obvious kind of invisibility which has ethical significance is invisible abuse. Invisible abuse is the intentional use of the invisible operations of a computer to engage in unethical conduct. A classic example of this is the case of a programmer who realized he could steal excess interest from a bank. When interest on a bank account is calculated, there is often a fraction of a cent left over after rounding off. This programmer instructed a computer to deposit these fractions of a cent to his own account. Although this is an ordinary case of stealing, it is relevant to computer ethics in that computer technology is essentially involved and there is a question about what policy to institute in order to best detect and prevent such abuse. Without access to the program used for stealing the interest or to a sophisticated accounting program, such an activity may easily go unnoticed.
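As a rough illustration of the arithmetic behind this classic abuse, the following sketch uses invented balances and an assumed interest rate to show the residue that rounding leaves behind:

```python
from decimal import Decimal, ROUND_DOWN

# Hypothetical balances and rate, invented for illustration.
accounts = [Decimal("1000.37"), Decimal("52.19"), Decimal("8400.01")]
rate = Decimal("0.0525")  # assume 5.25% interest per period

diverted = Decimal("0")
for balance in accounts:
    exact = balance * rate    # the interest actually owed
    paid = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    diverted += exact - paid  # the leftover fraction of a cent

print(f"residue across {len(accounts)} accounts: ${diverted}")
```

A fraction of a cent is invisible to any individual customer, but summed over millions of interest calculations it becomes real money, and without inspecting the program there is nothing to see.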

Another possibility for invisible abuse is the invasion of the property and privacy of others. A computer can be programmed to contact another computer over phone lines and surreptitiously remove or alter confidential information. Sometimes an inexpensive computer and a telephone hookup are all it takes. A group of teenagers, who named themselves “the 414s” after the Milwaukee telephone exchange, used their home computers to invade a New York hospital, a California bank, and a government nuclear weapons laboratory. These break-ins were done as pranks, but obviously such invasions can be done with malice and be difficult or impossible to detect.

A particularly insidious example of invisible abuse is the use of computers for surveillance. For instance, a company’s central computer can monitor the work done on computer terminals far better and more discreetly than the most dedicated sweatshop manager. Also, computers can be programmed to monitor phone calls and electronic mail without giving any evidence of tampering. A Texas oil company, for example, was baffled why it was always outbid on leasing rights for Alaskan territory until it discovered another bidder was tapping its data transmission lines near its Alaskan computer terminal.

A second variety of the invisibility factor, which is more subtle and conceptually interesting than the first, is the presence of invisible programming values. Invisible programming values are those values which are embedded in a computer program.

Writing a computer program is like building a house. No matter how detailed the specifications may be, a builder must make numerous decisions about matters not specified in order to construct the house. Different houses are compatible with a given set of specifications. Similarly, a request for a computer program is made at a level of abstraction usually far removed from the details of the actual programming language. In order to implement a program which satisfies the specifications a programmer makes some value judgments about what is important and what is not. These values become embedded in the final product and may be invisible to someone who runs the program.

Consider, for example, computerized airline reservations. Many different programs could be written to produce a reservation service. American Airlines once promoted such a service called “SABRE”. This program had a bias for American Airlines flights built in so that sometimes an American Airlines flight was suggested by the computer even if it was not the best flight available. Indeed, Braniff Airlines, which went into bankruptcy for a while, sued American Airlines on the grounds that this kind of bias in the reservation service contributed to its financial difficulties.

Although the general use of a biased reservation service is ethically suspicious, a programmer of such a service may or may not be engaged in invisible abuse. There may be a difference between how a programmer intends a program to be used and how it is actually used. Moreover, even if one sets out to create a program for a completely unbiased reservation service, some value judgments are latent in the program because some choices have to be made about how the program operates. Are airlines listed in alphabetical order? Is more than one listed at a time? Are flights just before the time requested listed? For what period after the time requested are flights listed? Some answers, at least implicitly, have to be given to these questions when the program is written. Whatever answers are chosen will build certain values into the program.
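How a seemingly neutral listing embeds such answers can be made concrete with a hypothetical sketch (the data, field names, and carrier codes are invented for illustration):

```python
from datetime import datetime, timedelta

# Invented flight records; the fields are illustrative only.
flights = [
    {"carrier": "AA", "departs": datetime(1985, 6, 1, 9, 40)},
    {"carrier": "BN", "departs": datetime(1985, 6, 1, 9, 15)},
    {"carrier": "UA", "departs": datetime(1985, 6, 1, 10, 5)},
]

def list_flights(flights, requested, window_hours=2):
    """Even an 'unbiased' listing embeds choices: how wide a
    window, whether earlier departures count, what breaks ties."""
    window = timedelta(hours=window_hours)  # choice: window size
    # Choice: flights departing before the requested time are dropped.
    eligible = [f for f in flights
                if timedelta(0) <= f["departs"] - requested <= window]
    # Choice: ties go to the alphabetically earlier carrier code.
    return sorted(eligible, key=lambda f: (f["departs"], f["carrier"]))

for f in list_flights(flights, datetime(1985, 6, 1, 9, 0)):
    print(f["carrier"], f["departs"].time())
```

Each commented line is a value judgment, however small, and a user of the service sees none of them.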

Sometimes invisible programming values are so invisible that even the programmers are unaware of them. Programs may have bugs or may be based on implicit assumptions which don’t become obvious until there is a crisis. For example, the operators of the ill-fated Three Mile Island nuclear power plant were trained on a computer which was programmed to simulate possible malfunctions including malfunctions which were dependent on other malfunctions. But, as the Kemeny Commission which investigated the disaster discovered, the simulator was not programmed to generate simultaneous, independent malfunctions. In the actual failure at Three Mile Island the operators were faced with exactly this situation: simultaneous, independent malfunctions. The inadequacy of the computer simulation was the result of a programming decision, as unconscious or implicit as that decision may have been. Shortly after the disaster the computer was reprogrammed to simulate situations like the one that did occur at Three Mile Island.

A third variety of the invisibility factor, which is perhaps the most disturbing, is invisible complex calculation. Computers today are capable of enormous calculations beyond human comprehension. Even if a program is understood, it does not follow that the calculations based on that program are understood. Computers today perform, and certainly supercomputers in the future will perform, calculations which are too complex for human inspection and understanding.

An interesting example of such complex calculation occurred in 1976 when a computer worked on the four color conjecture. The four color problem, a puzzle mathematicians have worked on for over a century, is to show that any map can be colored with at most four colors so that no adjacent areas have the same color. Mathematicians at the University of Illinois broke the problem down into thousands of cases and programmed computers to consider them. After more than a thousand hours of computer time on various computers, the four color conjecture was proved correct. What is interesting about this mathematical proof, compared to traditional proofs, is that it is largely invisible. The general structure of the proof is known and found in the program and any particular part of the computer’s activity can be examined, but practically speaking the calculations are too enormous for humans to examine them all.
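The flavor of such machine-checked case analysis can be suggested with a toy sketch, which is emphatically not the 1976 proof itself, only a brute-force search over every candidate coloring of a small invented map:

```python
from itertools import product

# Brute-force search for a proper 4-coloring of a tiny map,
# given as pairs of bordering regions (invented example).
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
regions = sorted({r for edge in borders for r in edge})

def four_coloring(regions, borders):
    """Try every assignment of 4 colors; return the first proper one."""
    for assignment in product(range(4), repeat=len(regions)):
        color = dict(zip(regions, assignment))
        if all(color[a] != color[b] for a, b in borders):
            return color
    return None

print(four_coloring(regions, borders))
# prints: {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

Here a human can still inspect all 256 cases by hand; the actual proof delegated an enormously larger case analysis to the machine, which is precisely what made it largely invisible.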

The issue is how much we should trust a computer’s invisible calculations. This becomes a significant ethical issue as the consequences grow in importance. For instance, computers are used by the military in making decisions about launching nuclear weapons. On the one hand, computers are fallible and there may not be time to confirm their assessment of the situation. On the other hand, making decisions about launching nuclear weapons without using computers may be even more fallible and more dangerous. What should be our policy about trusting invisible calculations?

A partial solution to the invisibility problem may lie with computers themselves. One of the strengths of computers is the ability to locate hidden information and display it. Computers can make the invisible visible. Information which is lost in a sea of data can be clearly revealed with the proper computer analysis. But, that’s the catch. We don’t always know when, where, and how to direct the computer’s attention. The invisibility factor presents us with a dilemma. We are happy in one sense that the operations of a computer are invisible. We don’t want to inspect every computerized transaction or program every step for ourselves or watch every computer calculation. In terms of efficiency the invisibility factor is a blessing. But it is just this invisibility that makes us vulnerable. We are open to invisible abuse or invisible programming of inappropriate values or invisible miscalculation. The challenge for computer ethics is to formulate policies which will help us deal with this dilemma. We must decide when to trust computers and when not to trust them. This is another reason why computer ethics is so important.

A Very Short History of Computer Ethics

By Terrell Ward Bynum

[This article was published in the Summer 2000 issue of the American Philosophical Association’s Newsletter on Philosophy and Computing]

The Foundation of Computer Ethics

Computer ethics as a field of study was founded by MIT professor Norbert Wiener during World War Two (early 1940s) while helping to develop an antiaircraft cannon capable of shooting down fast warplanes. One part of the cannon had to “perceive” and track an airplane, then calculate its likely trajectory and “talk” to another part of the cannon to fire the shells. The engineering challenge of this project caused Wiener and some colleagues to create a new branch of science, which Wiener called “cybernetics” – the science of information feedback systems. The concepts of cybernetics, when combined with the digital computers being created at that time, led Wiener to draw some remarkably insightful ethical conclusions. He perceptively foresaw revolutionary social and ethical consequences. In 1948, for example, in his book Cybernetics: or Control and Communication in the Animal and the Machine, he said the following:

It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams but might very well be, respectively, the readings of artificial sense organs, such as photoelectric cells or thermometers, and the performance of motors or solenoids…. we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil. (pp. 27–28)

In 1950 Wiener published his monumental computer ethics book, The Human Use of Human Beings, which not only established him as the founder of computer ethics, but far more importantly, laid down a comprehensive computer ethics foundation which remains today – half a century later – a powerful basis for computer ethics research and analysis. (However, he did not use the name “computer ethics” to describe what he was doing.) His book includes (1) an account of the purpose of a human life, (2) four principles of justice, (3) a powerful method for doing applied ethics, (4) discussions of the fundamental questions of computer ethics, and (5) examples of key computer ethics topics. (Wiener 1950/1954, see also Bynum 1999)

Wiener made it clear that, on his view, the integration of computer technology into society will constitute the remaking of society – the “second industrial revolution” – destined to affect every major aspect of life. The computer revolution will be a multifaceted, ongoing process that will take decades of effort and will radically change everything. Such a vast undertaking will necessarily include a wide diversity of tasks and challenges. Workers must adjust to radical changes in the work place; governments must establish new laws and regulations; industry and business must create new policies and practices; professional organizations must develop new codes of conduct for their members; sociologists and psychologists must study and understand new social and psychological phenomena; and philosophers must rethink and redefine old social and ethical concepts.

Neglect, Then a Reawakening

Unfortunately, this complex and important new area of applied ethics, which Wiener founded in the 1940s, remained nearly undeveloped and unexplored until the mid 1960s. By then, important social and ethical consequences of computer technology had already become manifest, and interest in computer-related ethical issues began to grow. Computer-aided bank robberies and other crimes attracted the attention of Donn Parker, who wrote books and articles on computer crime and proposed to the Association for Computing Machinery that they adopt a code of ethics for their members. The ACM appointed Parker to head a committee to create such a code, which was adopted by that professional organization in 1973. (The ACM Code was revised in the early 1980s and again in the early 1990s.)

Also in the mid 1960s, computer-enabled invasions of privacy by “big-brother” government agencies became a public worry and led to books, articles, government studies, and proposed privacy legislation. By the mid 1970s, new privacy laws and computer crime laws had been enacted in America and in Europe, and organizations of computer professionals were adopting codes of conduct for their members. At the same time, MIT computer scientist Joseph Weizenbaum created a computer program called ELIZA, intended to crudely simulate “a Rogerian psychotherapist engaged in an initial interview with a patient.” Weizenbaum was appalled by the reaction that people had to his simple computer program. Some psychiatrists, for example, viewed his results as evidence that computers will soon provide automated psychotherapy; and certain students and staff at MIT even became emotionally involved with the computer and shared their intimate thoughts with it! Concerned by the ethical implications of such a response, Weizenbaum wrote the book Computer Power and Human Reason (1976), which is now considered a classic in computer ethics.

A “New” Branch of Applied Ethics

In 1976, while teaching a medical ethics course, Walter Maner noticed that, often, when computers are involved in medical ethics cases, new ethically important considerations arise. Further examination of this phenomenon convinced Maner that there is need for a separate branch of applied ethics, which he dubbed “computer ethics.” (Wiener had not used this term, nor was it in common use before Maner.) Maner defined computer ethics as that branch of applied ethics which studies ethical problems “aggravated, transformed or created by computer technology.” He developed a university course, traveled around America giving speeches and conducting workshops at conferences, and published A Starter Kit for Teaching Computer Ethics. By the early 1980s, the name “computer ethics” had caught on, and other scholars began to develop this “new” field of applied ethics.

Among those whom Maner inspired in 1978 was a workshop attendee, Terrell Ward Bynum (the present author). In 1979, Bynum developed curriculum materials and a university course, and in the early 1980s gave speeches and ran workshops at a variety of conferences across America. In 1983, as Editor of the journal Metaphilosophy, he launched an essay competition to generate interest in computer ethics and to create a special issue of the journal. In 1985, that special issue – entitled Computers and Ethics – was published; and it quickly became the widest-selling issue in the journal’s history. The lead article – and winner of the essay competition – was James Moor’s now-classic essay, “What Is Computer Ethics?”, where he described computer ethics like this:

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology. (p. 266)

In Moor’s view, computer ethics includes (1) identification of computer-generated policy vacuums, (2) clarification of conceptual muddles, (3) formulation of policies for the use of computer technology, and (4) ethical justification of such policies.

A Standard-setting Textbook

1985 was a watershed year for computer ethics, not only because of the special issue of Metaphilosophy and Moor’s classic article, but also because Deborah Johnson published the first major textbook in the field (Computer Ethics), as well as an edited collection of readings with John Snapper (Ethical Issues in the Use of Computers). Johnson’s book Computer Ethics rapidly established itself as the standard-setting textbook in university courses, and it set the research agenda in computer ethics for nearly a decade.

In her book, Johnson defined computer ethics as a field which examines ways that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” (p. 1) Unlike Maner (see Maner 1996), with whom she had discussed computer ethics in the late 1970s, Johnson did not think that computers created wholly new ethical problems, but rather gave a “new twist” to already familiar issues such as ownership, power, privacy and responsibility.

Exponential Growth

Since 1985, the field of computer ethics has grown exponentially. New university courses, research centers, conferences, articles and textbooks have appeared, and a wide diversity of additional scholars and topics have become involved. For example, thinkers like Donald Gotterbarn, Keith Miller, Simon Rogerson, and Dianne Martin – as well as organizations like Computer Professionals for Social Responsibility, the Electronic Frontier Foundation and ACM-SIGCAS – have spearheaded developments relevant to computing and professional responsibility. Developments in Europe and Australia have been especially noteworthy, including new research centers in England, Poland, Holland, and Italy; the ETHICOMP series of conferences led by Simon Rogerson and the present writer; the CEPE conferences founded by Jeroen van den Hoven; and the Australian Institute of Computer Ethics headed by John Weckert and Chris Simpson.

The Future of Computer Ethics?

Given the explosive growth of computer ethics during the past two decades, the field appears to have a very robust and significant future. How can it be, then, that two important thinkers – Krystyna Górniak-Kocikowska and Deborah Johnson – have recently argued that computer ethics will disappear as a branch of applied ethics?

The Górniak Hypothesis – In her 1995 ETHICOMP paper, Górniak predicted that computer ethics, which is currently considered just a branch of applied ethics, will eventually evolve into something much more. It will evolve into a system of global ethics applicable in every culture on earth:

Just as the major ethical theories of Bentham and Kant were developed in response to the printing press revolution, so a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)

The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire Globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)

Computers do not know borders. Computer networks… have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)

…the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. This means that in the future, the rules of computer ethics should be respected by the majority (or all) of the human inhabitants of the Earth…. In other words, computer ethics will become universal, it will be a global ethic. (p. 187)

According to the Górniak hypothesis, “local” ethical theories like Europe’s Benthamite and Kantian systems and the ethical systems of other cultures in Asia, Africa, the Pacific Islands, etc., will eventually be superseded by a global ethics evolving from today’s computer ethics. “Computer” ethics, then, will become the “ordinary” ethics of the information age.

The Johnson Hypothesis – In her 1999 ETHICOMP paper, Deborah Johnson expressed a view which, upon first sight, may seem to be the same as Górniak’s:

I offer you a picture of computer ethics in which computer ethics as such disappears…. We will be able to say both that computer ethics has become ordinary ethics and that ordinary ethics has become computer ethics. (pp. 17–18)

But a closer look at the Johnson hypothesis reveals that it is very different from Górniak’s. On Górniak’s view, the computer revolution will eventually lead to a new ethical system, global and cross-cultural in nature. The new “ethics for the information age,” according to Górniak, will supplant parochial theories like Bentham’s and Kant’s – theories based on relatively isolated cultures in Europe, Asia, Africa, and other “local” regions of the globe.

Johnson’s hypothesis, in reality, is essentially the opposite of Górniak’s. It is another way of stating Johnson’s often-defended view that computer ethics concerns “new species of generic moral problems.” It assumes that computer ethics, rather than replacing theories like Bentham’s and Kant’s, will continue to presuppose them. Current ethical theories and principles, according to Johnson, will remain the bedrock foundation of ethical thinking and analysis, and the computer revolution will not lead to a revolution in ethics.

At the dawn of the 21st century, then, computer ethics thinkers have offered the world two very different views of the likely ethical relevance of computer technology. The Wiener-Maner-Górniak point of view sees computer technology as ethically revolutionary, requiring human beings to reexamine the foundations of ethics and the very definition of a human life. The more conservative Johnson perspective is that fundamental ethical theories will remain unaffected – that computer ethics issues are simply the same old ethics questions with a new twist – and consequently computer ethics as a distinct branch of applied philosophy will ultimately disappear.

References

  • Terrell Ward Bynum, ed. (1985), Computers and Ethics, Basil Blackwell (published as the October 1985 issue of Metaphilosophy).
  • Terrell Ward Bynum (1999), “The Foundation of Computer Ethics,” a keynote address at the AICEC99 Conference, Melbourne, Australia, July 1999.
  • Krystyna Górniak-Kocikowska (1996), “The Computer Revolution and the Problem of Global Ethics,” in Terrell Ward Bynum and Simon Rogerson, eds., Global Information Ethics, Opragen Publications, 1996, pp. 177–190 (the April 1996 issue of Science and Engineering Ethics).
  • Deborah G. Johnson (1985), Computer Ethics, Prentice-Hall. (Second Edition 1994.)
  • Deborah G. Johnson (1999), “Computer Ethics in the 21st Century,” a keynote address at ETHICOMP99, Rome, Italy, October 1999.
  • Deborah G. Johnson and John W. Snapper, eds. (1985), Ethical Issues in the Use of Computers, Wadsworth.
  • Walter Maner (1978), Starter Kit on Teaching Computer Ethics (self-published in 1978; republished in 1980 by Helvetia Press in cooperation with the National Information and Resource Center for Teaching Philosophy).
  • Walter Maner (1996), “Unique Ethical Problems in Information Technology,” in Terrell Ward Bynum and Simon Rogerson, eds., Global Information Ethics, Opragen Publications, 1996, pp. 137–152 (the April 1996 issue of Science and Engineering Ethics).
  • James H. Moor (1985), “What Is Computer Ethics?” in Terrell Ward Bynum, ed., Computers and Ethics, Basil Blackwell, pp. 266–275.
  • Joseph Weizenbaum (1976), Computer Power and Human Reason: From Judgment to Calculation, Freeman.
  • Norbert Wiener (1948), Cybernetics: or Control and Communication in the Animal and the Machine, Technology Press.
  • Norbert Wiener (1950/1954), The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin, 1950. (Second Edition Revised, Doubleday Anchor, 1954. This later edition is better and more complete from a computer ethics point of view.)

Norbert Wiener’s Foundation of Computer Ethics

In the late 1940s and early 1950s, visionary mathematician/philosopher Norbert Wiener founded computer ethics as a field of academic research. In his groundbreaking book, The Human Use of Human Beings (1950, 1954), Wiener developed a powerful method for identifying and analyzing the enormous impacts of information and communication technology (ICT) upon human values like life, health, happiness, security, knowledge and creativity. Even today, in this era of “global information ethics” and the Internet, concepts and procedures that Wiener developed in the 1950s can be used to identify, analyze and resolve social and ethical problems associated with ICT of all kinds. Wiener based his foundation for computer ethics upon a “cybernetic” view of human nature that leads readily to an ethically suggestive account of the purpose of a human life. From this, he derived “principles of justice” upon which every society should be based, and then he followed a practical strategy for identifying and resolving computer ethics issues wherever they might arise.

Wiener’s cybernetic view of human nature emphasized the physical structure of the human body and the tremendous potential for learning and creative action that human physiology makes possible. To underscore this fact, he often compared human physiology with that of less intelligent creatures like insects:

Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it. The fact that the mechanical rigidity of the insect is such as to limit its intelligence while the mechanical fluidity of the human being provides for his almost indefinite intellectual expansion is highly relevant to the point of view of this book… man’s advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible. (Wiener 1954, pp. 57-58, italics in the original [see endnote*])

On the basis of his “cybernetic” analysis of human nature, Wiener concluded that the purpose of a human life is to flourish as the kind of information-processing being that humans naturally are:

I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium – and are indeed the key to man’s most noble flights – because variety and possibility belong to the very structure of the human organism. (Wiener 1954, p. 51)

A good human life, according to Wiener, is one in which “great human values” are realized – one in which the creative and flexible information-processing potential of “the human sensorium” enables humans to reach their full promise in variety and possibility of action. Different people, of course, have various levels of talent and possibility, so one person’s achievements will differ from another’s. It is possible to lead a good human life in an indefinitely large number of ways: as a public servant or statesman, a teacher or scholar, a scientist or engineer, a musician, an artist, a tradesman, an artisan, and so on.

Wiener’s view of the purpose of a human life leads him to adopt what he calls “great principles of justice” upon which a society should be built – principles that, in his view, would maximize a person’s ability to flourish through variety and flexibility in human action. To highlight Wiener’s “great principles of justice”, let us call them “The Principle of Freedom”, “The Principle of Equality” and “The Principle of Benevolence”. (Wiener himself does not assign names but merely states them.) Using Wiener’s own definitions for these key ethical principles, we get the following list (1954, pp. 105-106):

The Principle of Freedom – Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”

The Principle of Equality – Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”

The Principle of Benevolence – Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”

Wiener’s cybernetic account of human nature leads to the view that people are fundamentally social beings who can reach their full potential only by actively participating in communities of similar beings. Society, therefore, is essential to a good human life. But society can be despotic and oppressive, and thereby limit, or even stifle, freedom; so Wiener introduced a principle to limit, as much as possible, society’s negative impact upon freedom. (Let us name it “The Principle of Minimum Infringement of Freedom.”)

The Principle of Minimum Infringement of Freedom – “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom”. (1954, p. 106)

If one accepts Wiener’s account of human nature and the good society, it follows that many different cultures, with a wide diversity of customs, religions, languages and practices, can provide an appropriate context for human fulfillment and a good life. Indeed, given Wiener’s view that “variety and possibility belong to the very structure of the human organism”, he presumably would expect and encourage the existence of a broad diversity of cultures in the world to maximize the possibilities for choice and creative action. The primary restriction that Wiener would impose on any society would be that it should provide the kind of context in which humans can realize their full potential as sophisticated information-processing agents; and he believed this to be possible only where significant freedom, equality and human compassion hold sway.

So-called “ethical relativists” often point to the wide diversity of cultures in the world – with various religions, laws, codes, values and practices – as evidence that there is no “global ethics”, no underlying universal ethical foundation. Wiener, on the other hand, has a powerful and creative response to such skeptics. His account of human nature and the purpose of a human life can embrace and welcome the rich diversity of cultures and practices that relativists are fond of citing. At the same time, though, Wiener can advocate an underlying ethical foundation for all societies and cultures.

Wiener’s suggested methodology for analyzing and solving computer ethics questions is one that, essentially, assimilates new ethical judgments and new cases into the existing cluster of laws, rules, practices and principles that govern human behavior in the society in question. The key elements of this approach are the following:

Human Purpose – Ethical judgments and practices must be grounded in the overall purpose of a human life: a society and the rules which govern its members must make it possible for people to flourish – to reach their full potential in variety and possibility of action.

Principles of Justice – The Principle of Freedom, the Principle of Equality and the Principle of Benevolence should guide and inform every person’s judgments and practices; and society must neither permit nor impose unnecessary limitations upon individual freedom.

Clarity of Concepts and Rules – The meanings of ethical concepts and rules, in a given situation, should be clear and unambiguous. If they are not, one must undertake to clarify their meanings to the extent possible.

Precedent and Tradition – New ethical judgments and cases should be assimilated, where possible, into the existing body of cases, rules, laws, policies and practices.

For any given society, there will be a “cluster” of existing laws, rules, principles and practices to govern human behavior within that society. These form a complex and extremely rich set of overlapping, crisscrossing policies that constitute a “received policy cluster” (see Bynum and Schubert 1997). This received cluster of policies should be the starting point for developing an answer to any computer ethics question.

If a given case or question does not fit easily into the existing set of rules and policies in one’s society, then one must either (1) make adjustments in the old policies and rules to accommodate the new case, or else (2) introduce a totally new policy to cover the new kind of case. Presumably, if such a new case were to arise, one would have to use the overall purpose of a human life, together with the fundamental principles of justice, to create and justify new laws and policies consistent with the old ones. Such a case would be an example of James Moor’s classic “policy vacuum” for which one must formulate and justify new policies. (See Moor 1985.)

Given these elements of ethical analysis, Wiener’s methodology can be construed as including the following five steps:

Step One: Identify an ethical question or case regarding the integration of ICT into society.

Step Two: Clarify any ambiguous concepts or rules that may apply to the case in question.

Step Three: If possible, apply existing policies (principles, laws, rules, practices) that govern human behavior in the given society. Use precedent and traditional interpretation in such a way as to assimilate the new case or policy into the existing set of social policies and practices.

Step Four: If precedent and existing traditions are insufficient to settle the question or deal with the case, revise the old policies or create new ones, using “the great principles of justice” and the purpose of a human life to guide the effort.

Step Five: Answer the question or deal with the case using the revised or enriched policies.

It is important to note that this method of doing computer ethics need not involve the expertise of a trained philosopher. In any just society, a successfully functioning adult will be familiar with the laws, rules, customs, and practices that normally govern one’s behavior in that society and enable one to tell whether a proposed action or policy would be considered ethical. Thus, all those in society who must cope with the introduction of ICT – whether they be public policy makers, ICT professionals, business people, workers, teachers, parents, or others – can and should engage in computer ethics by helping to integrate ICT ethically into society. Computer ethics, understood in this very broad way, is too vast and too important to be left only to academics or to ICT professionals.

Wiener makes it clear that, in his view, the integration of ICT into society will constitute the remaking of society – “the second industrial revolution” and “the automatic age” – destined to affect every walk of life. It is bound to be a multi-faceted, on-going process, which will take decades of effort and will radically change the world. In Wiener’s words, we are “here in the presence of another social potentiality of unheard-of importance for good and for evil.” (1948, p. 27) The defining goal of computer ethics, then, is to advance and facilitate the good consequences of ICT while preventing or minimizing the harmful ones.

Today, the ethical importance of the computer revolution – stressed by Norbert Wiener more than fifty years ago – has become obvious. The “information age” is emerging, and the metaphysical and scientific foundation for computer ethics that Wiener laid down decades ago can still provide effective tools and guidance as we confront a wide diversity of challenging new ethical issues.

Endnote
*Quotations from Wiener’s The Human Use of Human Beings are all from the 1954 Second Edition Revised.

References

  • Terrell Ward Bynum (1999), “The Foundation of Computer Ethics”, Keynote Address at AICEC99 (The Australian Institute of Computer Ethics Conference 1999), Melbourne, Australia, July 1999.
  • Terrell Ward Bynum and Petra Schubert (1997), “How to Do Computer Ethics – A Case Study: The Electronic Mall Bodensee” in Jeroen van den Hoven, ed., Computer Ethics: Philosophical Enquiry, Erasmus University Press, 1997, pp. 85-95. (Proceedings of CEPE 97)
  • James H. Moor (1985), “What Is Computer Ethics?” in Terrell Ward Bynum, ed., Computers and Ethics, Blackwell, 1985, pp. 266-275. (Published as the October 1985 issue of Metaphilosophy.)
  • Norbert Wiener (1948), Cybernetics: or Control and Communication in the Animal and the Machine, John Wiley, 1948.
  • Norbert Wiener (1950, 1954), The Human Use of Human Beings, Houghton Mifflin, 1950. Second Edition Revised, Doubleday Anchor, 1954.

A Discipline in its Infancy

As Computer Use Grows, So Do Moral Issues

This article appeared in the Dallas Morning News on Tuesday, January 12, 1982.

A famous rock star has just died and millions of fans are grieving. The computer of a major novelty distributor is immediately put into action, for there is not a moment to lose if the grief is to be fully exploited. From data banks of ticket agencies, record distributors and other firms, the computer compiles names, addresses, purchasing histories and financial backgrounds of people who bought records and attended concerts of the fallen star. Within 48 hours of the tragedy, the novelty company begins computer-dialing phone numbers of thousands of grieving fans. Whenever someone answers, the computer plays excerpts of the dead star’s most emotional records along with a sales pitch for souvenir T-shirts and posters. Instantly, orders are taken and confirmation letters are printed. Within a week, more than a million fans have been reached, and factories have been notified of the number of items to produce.

Is this imagined application of computers a smart, efficient business venture? Is it unfair exploitation of people caught in a weak moment? Is the gathering of information on people and the phoning of their homes an unethical invasion of their privacy or a new and commendable business strategy? Such questions and many harder ones are being raised and debated in “computer ethics,” a new field of growing concern to business and industry as well as to all of society.

Computer Crime

One large area of problems is that of computer crime. Computers have been targets of attack – by guns, bombs, screwdrivers, magnets, even simple house keys – that have caused millions of dollars in damages. Computers have been used to embezzle fortunes; to print fraudulent coupons, tickets, deposit slips, bonds, insurance policies; to divert large quantities of merchandise; to establish phony credit ratings, job dossiers, credentials; to steal company secrets, software, even computer time itself. It is important to note, however, that the vast majority of crimes studied by computer ethics did not come into existence with computers. Embezzlement, sabotage, fraud and similar misdeeds have existed for centuries, and new technology can always be misused as well as used correctly. In many cases, though, computers have tempted the would-be criminal with powers he could only dream of in the past. With the right passwords and computer know-how, for example, it is possible to rob a bank from your own home by telephone and make off with millions of dollars. No guns, no dangerous confrontations with guards, and the evidence can often be electronically erased without a trace. Even when the culprit is caught, convictions are hard to secure, and penalties have usually been extremely light considering the staggering amounts of money or damage involved.

Another large area of concern in computer ethics is that of privacy. There are thousands of data banks in business, government, health care and education, containing all kinds of information on millions of Americans. How much of this information should a business be permitted to compile on its employees? – its potential customers? – its rivals in business? How much of the information stored in the computer of a single company should be readily available to low-level employees, middle management and top executives? Surely, a secretary using a computer terminal should not be able to gain information on her boss’ health problems, financial status or promotion prospects. But how much information should the boss be able to compile on the secretary? Should he or she have access to IQ scores, personality profile tests, reports from the company physician and psychologist? Should the company acquire credit card or bank records on its employees and be able to determine where they shop, what doctors they are seeing, what motels they stay in, how they spend their leisure time? Should a company sell names, addresses, phone numbers, salary figures and other information on its employees to other firms to be used for a sales campaign or market analysis? Some answers to questions such as these seem obvious, but others are complicated and debatable. Much work needs to be done in considering the issues.

In the mid-1960s, a major public uproar resulted from a proposal before Congress to create a central bank of information on all Americans. The National Data Center, as it was to be called, would assign an identifying number to each American and then establish a dossier of information from the IRS, Census Bureau, National Center for Health Statistics, Bureau of Labor Statistics, Office of Education and scores of other government data banks. Many people saw this as a menacing first step toward a “Big Brother” government that would meddle in the private lives and business affairs of everyone. Even if government data banks were ignored, however, business and industry taken as a whole have sufficient data on most Americans to enable an information thief to compile very full dossiers on his victims. Imagine that an unscrupulous person gains access to the files of doctors, psychiatrists, banks, credit bureaus, personnel offices, lawyers, accountants, insurance companies and so on. The result could be a powerful tool for harassment, blackmail, repression and political control, to name but a few possibilities.

In a recent conversation, however, Harold Fleisher of International Business Machines Corp.’s data systems division noted that great fears about data security and limited access may be a bit premature. As computer systems have become more complicated, he said, the “interface technology” needed to make them easily usable has led to increasingly effective safeguards, making it harder and harder for unauthorized persons to get information to which they are not entitled. Indeed, according to Walter Maner of the Institute of Applied Ethics at Old Dominion University, our ability to use “secret codes” to store and transfer information is becoming so effective that we must now confront the question of who has the right to conceal the truth and under what circumstances. Does anyone ever have the right to bury the truth so effectively that it can never be known again?

Decision Making

A third important area of computer ethics concerns responsibility and decision-making. Sometimes a business may try to excuse a billing mistake or other problem by calling it a “computer error.” But these days it is highly unlikely that sophisticated computer hardware will malfunction without the error being detected. So-called “computer errors” are caused, in most cases, by a person who has entered the wrong information, pushed the wrong button or written a flawed program. In a complex computer system involving many people with a variety of roles, programs, data banks, input terminals, processors and so on, it can become a major problem – both practical and theoretical – to decide who is responsible or liable for the proper functioning of the system.

Such problems become even more complicated when computers begin to make decisions that previously were made by people. More and more business management decisions are being automated by computer – when to order more supplies, which ones to order, when to mail overdue notices or cut off electricity or deny someone credit, and so on. If an elderly couple freezes to death in their home because an electric company’s computer has issued a cutoff order, who is responsible?

Computer crime, privacy and responsibility are only three broad areas of computer ethics. There are many others. Deborah Johnson, of the Center for the Study of the Human Dimensions of Science and Technology at Rensselaer Polytechnic Institute, is writing a book on computer ethics that also includes discussion of the ownership and copyrighting of ideas, the impact of computers on human autonomy, and a variety of additional issues. In his course “The Ethical and Social Impact of Computing” at Dartmouth, Stephen Garland, chairman of the program in computer and information science, includes such topics as the impact of computers on organizations and work patterns, and government-related issues like national security and the concentration of political power. At the Center for the Study of Ethics in the Professions at Illinois Institute of Technology, John Snapper’s course, “Moral Issues in Computer Science,” deals as well with the ethical codes of professional computing societies. These courses and centers, and a handful of others across the nation, represent the early stages of a discipline that will grow rapidly in the next few years.