Teaching Computer Ethics


Introduction

– Edited by Terrell Ward Bynum, Walter Maner and John L. Fodor

The National Conference on Computing and Values (NCCV) was held on the campus of Southern Connecticut State University in August 1991. The Conference included six “tracks”: Teaching Computing and Human Values, Computer Privacy and Confidentiality, Computer Security and Crime, Ownership of Software and Intellectual Property, Equity and Access to Computing Resources, and Policy Issues in the Campus Computing Environment. Each track included a major address, three to five commentaries, some small “working groups,” and a packet of relevant readings (the “Track Pack”).

This monograph contains the proceedings of the “Teaching Computing and Human Values” track of NCCV. It includes three background readings, the “track address” with four commentaries, the conference bibliography, and a report on the activities and findings of the small working groups on teaching computer ethics.

The background readings are: “What Is Computer Ethics?” by James H. Moor, “Integrating Computer Ethics into the Computer Science Curriculum” by Keith Miller, and “A ‘Capstone’ Course in Computer Ethics” by Donald Gotterbarn.

The track address is “Computer Ethics in the Computer Science Curriculum” by Terrell Ward Bynum; and the commentaries include: “Non-Apologetic Computer Science Education” by C. Dianne Martin and Hilary J. Holz, “Realities of Teaching Social and Ethical Issues in Computing” by Doris Keefe Lidtke, “The Use and Abuse of Computer Ethics” by Donald Gotterbarn, and “Courting Culture in Computer Science” by Batya Friedman.

Keith Miller was the “Track Coordinator” for this track, and the Appendix of this monograph is his report on the activities and findings of the small working groups of the track.

The National Conference on Computing and Values was a major undertaking that required significant help from many people. The Editors would like to express sincere thanks to the National Science Foundation and the Metaphilosophy Foundation for support that made the project possible. And we wish to thank the following people for their invaluable help and support: (in alphabetic order) Denice Botto, William Bowersox, Aline W. Bynum, Robert Corda, Donald Duman, Richard Fabish, James Fullmer, Ken W. Gatzke, Steven J. Gold, Edward Hoffman, Rodney Lane, Sheila Magnotti, Armen Marsoobian, John Mattia, P. Krishna Mohan, Beryl Normand, Robert O’Brien, Daniel Ort, Anthony Pinciaro, Amy Rubin, Brian Russer, Elizabeth L.B. Sabatino, Charlene Senical, J. Philip Smith, Ray Sparks, Larry Tortice, Suzanne Tucker.

What is Computer Ethics?

A Proposed Definition

Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important.

On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves.

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology.

An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. But then a number of questions which do not have obvious answers emerge. What is a computer program? Is it really intellectual property which can be owned or is it more like an idea, an algorithm, which is not owned by anybody? If a computer program is intellectual property, is it an expression of an idea that is owned (traditionally protectable by copyright) or is it a process that is owned (traditionally protectable by patent)? Is a machine-readable program a copy of a human-readable program? Clearly, we need a conceptualization of the nature of a computer program in order to answer these kinds of questions. Moreover, these questions must be answered in order to formulate a useful policy for protecting computer programs. Notice that the conceptualization we pick will not only affect how a policy will be applied but to a certain extent what the facts are. For instance, in this case the conceptualization will determine when programs count as instances of the same program.

Even within a coherent conceptual framework, the formulation of a policy for using computer technology can be difficult. As we consider different policies we discover something about what we value and what we don’t. Because computer technology provides us with new possibilities for acting, new values emerge. For example, creating software has value in our culture which it didn’t have a few decades ago. And old values have to be reconsidered. For instance, assuming software is intellectual property, why should intellectual property be protected? In general, the consideration of alternative policies forces us to discover and make explicit what our value preferences are.

The mark of a basic problem in computer ethics is one in which computer technology is essentially involved and there is an uncertainty about what to do and even about how to understand the situation. Hence, not all ethical situations involving computers are central to computer ethics. If a burglar steals available office equipment including computers, then the burglar has done something legally and ethically wrong. But this is really an issue for general law and ethics. Computers are only accidentally involved in this situation, and there is no policy or conceptual vacuum to fill. The situation and the applicable policy are clear.

In one sense I am arguing for the special status of computer ethics as a field of study. Applied ethics is not simply ethics applied. But, I also wish to stress the underlying importance of general ethics and science to computer ethics. Ethical theory provides categories and procedures for determining what is ethically relevant. For example, what kinds of things are good? What are our basic rights? What is an impartial point of view? These considerations are essential in comparing and justifying policies for ethical conduct. Similarly, scientific information is crucial in ethical evaluations. It is amazing how many times ethical disputes turn not on disagreements about values but on disagreements about facts.

On my view, computer ethics is a dynamic and complex field of study which considers the relationships among facts, conceptualizations, policies and values with regard to constantly changing computer technology. Computer ethics is not a fixed set of rules which one shellacs and hangs on the wall. Nor is computer ethics the rote application of ethical principles to a value-free technology. Computer ethics requires us to think anew about the nature of computer technology and our values. Although computer ethics is a field between science and ethics and depends on them, it is also a discipline in its own right which provides both conceptualizations for understanding and policies for using computer technology.

Though I have indicated some of the intellectually interesting features of computer ethics, I have not said much about the problems of the field or about its practical importance. The only example I have used so far is the issue of protecting computer programs which may seem to be a very narrow concern. In fact, I believe the domain of computer ethics is quite large and extends to issues which affect all of us. Now I want to turn to a consideration of these issues and argue for the practical importance of computer ethics. I will proceed not by giving a list of problems but rather by analyzing the conditions and forces which generate ethical issues about computer technology. In particular, I want to analyze what is special about computers, what social impact computers will have, and what is operationally suspect about computing technology. I hope to show something of the nature of computer ethics by doing some computer ethics.

The Revolutionary Machine

What is special about computers? It is often said that a Computer Revolution is taking place, but what is it about computers that makes them revolutionary? One difficulty in assessing the revolutionary nature of computers is that the word “revolutionary” has been devalued. Even minor technological improvements are heralded as revolutionary. A manufacturer of a new dripless pouring spout may well promote it as revolutionary. If minor technological improvements are revolutionary, then undoubtedly ever-changing computer technology is revolutionary. The interesting issue, of course, is whether there is some nontrivial sense in which computers are revolutionary. What makes computer technology importantly different from other technology? Is there any real basis for comparing the Computer Revolution with the Industrial Revolution?

If we look around for features that make computers revolutionary, several features suggest themselves. For example, in our society computers are affordable and abundant. It is not much of an exaggeration to say that currently in our society every major business, factory, school, bank, and hospital is rushing to utilize computer technology. Millions of personal computers are being sold for home use. Moreover, computers are integral parts of products which don’t look much like computers, such as watches and automobiles. Computers are abundant and inexpensive, but so are pencils. Mere abundance and affordability don’t seem sufficient to justify any claim to technological revolution.

One might claim the newness of computers makes them revolutionary. Such a thesis requires qualification. Electronic digital computers have been around for forty years. In fact, if the abacus counts as a computer, then computer technology is among the oldest technologies. A better way to state this claim is that recent engineering advances in computers make them revolutionary. Obviously, computers have been immensely improved over the last forty years. Along with dramatic increases in computer speed and memory there have been dramatic decreases in computer size. Computer manufacturers are quick to point out that desktop computers today exceed the engineering specifications of computers which filled rooms only a few decades ago. There has also been a determined effort by companies to make computer hardware and computer software easier to use. Computers may not be completely user friendly but at least they are much less unfriendly. However, as important as these features are, they don’t seem to get to the heart of the Computer Revolution. Small, fast, powerful and easy-to-use electric can openers are great improvements over earlier can openers, but they aren’t in the relevant sense revolutionary.

Of course, it is important that computers are abundant, less expensive, smaller, faster, and more powerful and friendly. But, these features serve as enabling conditions for the spread of the Computer Revolution. The essence of the Computer Revolution is found in the nature of a computer itself. What is revolutionary about computers is logical malleability. Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs, and connecting logical operations. Logical operations are the precisely defined steps which take a computer from one state to the next. The logic of computers can be massaged and shaped in endless ways through changes in hardware and software. Just as the power of a steam engine was a raw resource of the Industrial Revolution, so the logic of a computer is a raw resource of the Computer Revolution. Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. The driving question of the Computer Revolution is “how can we mold the logic of computers to better serve our purposes?”

I think logical malleability explains the already widespread application of computers and hints at the enormous impact computers are destined to have. Understanding the logical malleability of computers is essential to understanding the power of the developing technological revolution. Understanding logical malleability is also important in setting policies for the use of computers. Other ways of conceiving computers serve less well as a basis for formulating and justifying policies for action.

Consider an alternative and popular conception of computers in which computers are understood as number crunchers, i.e., essentially as numerical devices. On this conception computers are nothing but big calculators. It might be maintained on this view that mathematical and scientific applications should take precedence over non-numerical applications such as word processing. My position, on the contrary, is that computers are logically malleable. The arithmetic interpretation is certainly a correct one, but it is only one among many interpretations. Logical malleability has both a syntactic and a semantic dimension. Syntactically, the logic of computers is malleable in terms of the number and variety of possible states and operations. Semantically, the logic of computers is malleable in that the states of the computer can be taken to represent anything. Computers manipulate symbols but they don’t care what the symbols represent. Thus, there is no ontological basis for giving preference to numerical applications over non-numerical applications.

The fact that computers can be described in mathematical language, even at a very low level, doesn’t make them essentially numerical. For example, machine language is conveniently and traditionally expressed in 0’s and 1’s. But the 0’s and 1’s simply designate different physical states. We could label these states as “on” and “off” or “yin” and “yang” and apply binary logic. Obviously, at some levels it is useful to use mathematical notation to describe computer operations, and it is reasonable to use it. The mistake is to reify the mathematical notation as the essence of a computer and then use this conception to make judgments about the appropriate use of computers.
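The point can be made concrete with a short sketch (the bit pattern and readings below are invented for illustration, not drawn from the essay): the same physical states can be read as a number, as text, or as a row of truth values, depending entirely on the interpretation we impose on them.

```python
# The same two bytes of "physical states", under three interpretations.
bits = 0b01001000_01101001

as_integer = bits                                    # numerical reading
as_text = bits.to_bytes(2, "big").decode("ascii")    # symbolic reading
as_flags = [bool(bits >> i & 1) for i in range(16)]  # logical reading

print(as_integer)  # 18537
print(as_text)     # Hi
```

Nothing in the hardware privileges one reading over another, which is just Moor's claim that there is no ontological basis for treating computers as essentially numerical.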

In general, our conceptions of computer technology will affect our policies for using it. I believe the importance of properly conceiving the nature and impact of computer technology will increase as the Computer Revolution unfolds.

Anatomy of the Computer Revolution

Because the Computer Revolution is in progress, it is difficult to get a perspective on its development. By looking at the Industrial Revolution I believe we can get some insight into the nature of a technological revolution. Roughly, the Industrial Revolution in England occurred in two major stages. The first stage was the technological introduction stage which took place during the last half of the Eighteenth Century. During this stage inventions and processes were introduced, tested, and improved. There was an industrialization of limited segments of the economy, particularly in agriculture and textiles. The second stage was the technological permeation stage which took place during the Nineteenth Century. As factory work increased and the populations of cities swelled, not only did well-known social evils emerge, but, equally significantly, corresponding changes in human activities and institutions, ranging from labor unions to health services, occurred. The forces of industrialization dramatically transformed the society.

My conjecture is that the Computer Revolution will follow a similar two stage development. The first stage, the introduction stage, has been occurring during the last forty years. Electronic computers have been created and refined. We are gradually entering the second stage, the permeation stage, in which computer technology will become an integral part of institutions throughout our society. I think that in the coming decades many human activities and social institutions will be transformed by computer technology and that this transforming effect of computerization will raise a wide range of issues for computer ethics.

What I mean by “transformed” is that the basic nature or purpose of an activity or institution is changed. This is marked by the kinds of questions that are asked. During the introduction stage computers are understood as tools for doing standard jobs. A typical question for this stage is “how well does a computer do such and such an activity?” Later, during the permeation stage, computers become an integral part of the activity. A typical question for this stage is “What is the nature and value of such and such an activity?” In our society there is already some evidence of the transforming effect of computerization as marked by the kind of questions being asked.

For example, for years computers have been used to count votes. Now the election process is becoming highly computerized. Computers can be used to count votes and to make projections about the outcome. Television networks use computers both to determine quickly who is winning and to display the results in a technologically impressive manner. During the last presidential election in the United States [1984] the television networks projected the results not only before the polls in California were closed but also before the polls in New York were closed. In fact, voting was still going on in over half the states when the winner was announced. The question is no longer “how efficiently do computers count votes in a fair election?” but “what is a fair election?” Is it appropriate that some people know the outcome before they vote? The problem is that computers not only tabulate the votes for each candidate but likely influence the number and distribution of these votes. For better or worse, our electoral process is being transformed.

As computers permeate more and more of our society, I think we will see more and more of the transforming effect of computers on our basic institutions and practices. Nobody can know for sure how our computerized society will look fifty years from now, but it is reasonable to think that various aspects of our daily work will be transformed. Computers have been used for years by businesses to expedite routine work, such as calculating payrolls; but as personal computers become widespread and allow executives to work at home, and as robots do more and more factory work, the emerging question will be not merely “how well do computers help us work?” but “what is the nature of this work?”

Traditional work may no longer be defined as something that normally happens at a specific time or a specific place. Work for us may become less doing a job than instructing a computer to do a job. As the concept of work begins to change, the values associated with the old concept will have to be reexamined. Executives who work at a computer terminal at home will lose some spontaneous interaction with colleagues. Factory workers who direct robots by pressing buttons may take less pride in a finished product. And similar effects can be expected in other types of work. Commercial pilots who watch computers fly their planes may find their jobs to be different from what they expected.

A further example of the transforming effect of computer technology is found in financial institutions. As the transfer and storage of funds becomes increasingly computerized the question will be not merely “how well do computers count money?” but “what is money?” For instance, in a cashless society in which debits are made to one’s account electronically at the point of sale, has money disappeared in favor of computer records or have electronic impulses become money? What opportunities and values are lost or gained when money becomes intangible?

Still another likely area for the transforming effect of computers is education. Currently, educational packages for computers are rather limited. Now it is quite proper to ask “how well do computers educate?” But as teachers and students exchange more and more information indirectly via computer networks and as computers take over more routine instructional activities, the question will inevitably switch to “what is education?” The values associated with the traditional way of educating will be challenged. How much human contact is necessary or desirable for learning? What is education when computers do the teaching?

The point of this futuristic discussion is to suggest the likely impact of computer technology. Though I don’t know what the details will be, I believe the kind of transformation I am suggesting is likely to occur. This is all I need to support my argument for the practical importance of computer ethics. In brief, the argument is as follows: The revolutionary feature of computers is their logical malleability. Logical malleability assures the enormous application of computer technology. This will bring about the Computer Revolution. During the Computer Revolution many of our human activities and social institutions will be transformed. These transformations will leave us with policy and conceptual vacuums about how to use computer technology. Such policy and conceptual vacuums are the marks of basic problems within computer ethics. Therefore, computer ethics is a field of substantial practical importance.

I find this argument for the practical value of computer ethics convincing. I think it shows that computer ethics is likely to have increasing application in our society. This argument does rest on a vision of the Computer Revolution which not everyone may share. Therefore, I will turn to another argument for the practical importance of computer ethics which doesn’t depend upon any particular view of the Computer Revolution. This argument rests on the invisibility factor and suggests a number of ethical issues confronting computer ethics now.

The Invisibility Factor

There is an important fact about computers. Most of the time and under most conditions computer operations are invisible. One may be quite knowledgeable about the inputs and outputs of a computer and only dimly aware of the internal processing. This invisibility factor often generates policy vacuums about how to use computer technology. Here I will mention three kinds of invisibility which can have ethical significance.

The most obvious kind of invisibility which has ethical significance is invisible abuse. Invisible abuse is the intentional use of the invisible operations of a computer to engage in unethical conduct. A classic example of this is the case of a programmer who realized he could steal excess interest from a bank. When interest on a bank account is calculated, there is often a fraction of a cent left over after rounding off. This programmer instructed a computer to deposit these fractions of a cent to his own account. Although this is an ordinary case of stealing, it is relevant to computer ethics in that computer technology is essentially involved and there is a question about what policy to institute in order to best detect and prevent such abuse. Without access to the program used for stealing the interest or to a sophisticated accounting program such an activity may easily go unnoticed.
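The arithmetic of the scheme is worth seeing, because it shows why the abuse is invisible: each skimmed fraction is under a cent, yet across many accounts the fractions add up. The sketch below is purely illustrative; the balances, the 5% rate, and the account counts are all invented.

```python
# A hypothetical sketch of the interest-skimming scheme: interest is
# rounded down to whole cents, and the discarded sub-cent fractions,
# individually invisible, are quietly diverted to one account.
from decimal import Decimal, ROUND_FLOOR

def credit_interest(balances_cents, annual_rate, skimmed):
    """Credit one month's interest, skimming the sub-cent remainders."""
    credited = []
    for balance in balances_cents:
        exact = Decimal(balance) * Decimal(str(annual_rate)) / 12
        rounded = exact.quantize(Decimal("1"), rounding=ROUND_FLOOR)
        skimmed += exact - rounded          # the invisible fraction
        credited.append(balance + int(rounded))
    return credited, skimmed

accounts = [123_456, 987_654, 55_501] * 10_000  # 30,000 accounts, in cents
_, total_skimmed = credit_interest(accounts, 0.05, Decimal(0))
print(total_skimmed / 100)  # roughly $88 from a single interest run
```

No individual statement is off by even a cent, so no customer ever has reason to complain; only an audit of the program itself, or of the aggregate flows, would reveal the theft.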

Another possibility for invisible abuse is the invasion of the property and privacy of others. A computer can be programmed to contact another computer over phone lines and surreptitiously remove or alter confidential information. Sometimes an inexpensive computer and a telephone hookup are all it takes. A group of teenagers, who named themselves “the 414s” after the Milwaukee telephone exchange, used their home computers to invade a New York hospital, a California bank, and a government nuclear weapons laboratory. These break-ins were done as pranks, but obviously such invasions can be done with malice and be difficult or impossible to detect.

A particularly insidious example of invisible abuse is the use of computers for surveillance. For instance, a company’s central computer can monitor the work done on computer terminals far better and more discreetly than the most dedicated sweatshop manager. Also, computers can be programmed to monitor phone calls and electronic mail without giving any evidence of tampering. A Texas oil company, for example, was baffled why it was always outbid on leasing rights for Alaskan territory until it discovered another bidder was tapping its data transmission lines near its Alaskan computer terminal.

A second variety of the invisibility factor, which is more subtle and conceptually interesting than the first, is the presence of invisible programming values. Invisible programming values are those values which are embedded in a computer program.

Writing a computer program is like building a house. No matter how detailed the specifications may be, a builder must make numerous decisions about matters not specified in order to construct the house. Different houses are compatible with a given set of specifications. Similarly, a request for a computer program is made at a level of abstraction usually far removed from the details of the actual programming language. In order to implement a program which satisfies the specifications a programmer makes some value judgments about what is important and what is not. These values become embedded in the final product and may be invisible to someone who runs the program.

Consider, for example, computerized airline reservations. Many different programs could be written to produce a reservation service. American Airlines once promoted such a service called “SABRE.” This program had a bias for American Airlines flights built in so that sometimes an American Airlines flight was suggested by the computer even if it was not the best flight available. Indeed, Braniff Airlines, which went into bankruptcy for a while, sued American Airlines on the grounds that this kind of bias in the reservation service contributed to its financial difficulties.

Although the general use of a biased reservation service is ethically suspicious, a programmer of such a service may or may not be engaged in invisible abuse. There may be a difference between how a programmer intends a program to be used and how it is actually used. Moreover, even if one sets out to create a program for a completely unbiased reservation service, some value judgments are latent in the program because some choices have to be made about how the program operates. Are airlines listed in alphabetical order? Is more than one listed at a time? Are flights just before the time requested listed? For what period after the time requested are flights listed? Some answers, at least implicitly, have to be given to these questions when the program is written. Whatever answers are chosen will build certain values into the program.
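A small sketch shows how answers to these questions become values embedded in code (the carriers, times, and both listing policies below are invented for illustration): the same flight data, listed under two defensible ordering rules, puts different carriers at the top of the screen.

```python
# The same flights under two listing policies.
flights = [
    {"carrier": "AA", "departs": "09:10"},
    {"carrier": "BR", "departs": "09:05"},
    {"carrier": "UA", "departs": "09:05"},
]

# Policy 1: closest to the requested departure, ties broken by carrier
# name -- seemingly neutral, yet it favors alphabetically early carriers.
by_time = sorted(flights, key=lambda f: (f["departs"], f["carrier"]))

# Policy 2: the host carrier's flights first, then by departure time --
# the kind of bias at issue in the SABRE dispute.
by_host = sorted(flights, key=lambda f: (f["carrier"] != "AA", f["departs"]))

print([f["carrier"] for f in by_time])  # ['BR', 'UA', 'AA']
print([f["carrier"] for f in by_host])  # ['AA', 'BR', 'UA']
```

Neither ordering writes the word “bias” anywhere in the program; the value judgment lives entirely in the choice of sort key, invisible to anyone who only sees the displayed list.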

Sometimes invisible programming values are so invisible that even the programmers are unaware of them. Programs may have bugs or may be based on implicit assumptions which don’t become obvious until there is a crisis. For example, the operators of the ill-fated Three Mile Island nuclear power plant were trained on a computer which was programmed to simulate possible malfunctions including malfunctions which were dependent on other malfunctions. But, as the Kemeny Commission which investigated the disaster discovered, the simulator was not programmed to generate simultaneous, independent malfunctions. In the actual failure at Three Mile Island the operators were faced with exactly this situation – simultaneous, independent malfunctions. The inadequacy of the computer simulation was the result of a programming decision, as unconscious or implicit as that decision may have been. Shortly after the disaster the computer was reprogrammed to simulate situations like the one that did occur at Three Mile Island.

A third variety of the invisibility factor, which is perhaps the most disturbing, is invisible complex calculation. Computers today are capable of enormous calculations beyond human comprehension. Even if a program is understood, it does not follow that the calculations based on that program are understood. Computers today perform, and certainly supercomputers in the future will perform, calculations which are too complex for human inspection and understanding.

An interesting example of such complex calculation occurred in 1976 when a computer worked on the four color conjecture. The four color problem, a puzzle mathematicians have worked on for over a century, is to show that a map can be colored with at most four colors so that no two adjacent areas have the same color. Mathematicians at the University of Illinois broke the problem down into thousands of cases and programmed computers to consider them. After more than a thousand hours of computer time on various computers, the four color conjecture was proved correct. What is interesting about this mathematical proof, compared to traditional proofs, is that it is largely invisible. The general structure of the proof is known and found in the program and any particular part of the computer’s activity can be examined, but practically speaking the calculations are too enormous for humans to examine them all.

The issue is how much we should trust a computer’s invisible calculations. This becomes a significant ethical issue as the consequences grow in importance. For instance, computers are used by the military in making decisions about launching nuclear weapons. On the one hand, computers are fallible and there may not be time to confirm their assessment of the situation. On the other hand, making decisions about launching nuclear weapons without using computers may be even more fallible and more dangerous. What should be our policy about trusting invisible calculations?

A partial solution to the invisibility problem may lie with computers themselves. One of the strengths of computers is the ability to locate hidden information and display it. Computers can make the invisible visible. Information which is lost in a sea of data can be clearly revealed with the proper computer analysis. But, that’s the catch. We don’t always know when, where, and how to direct the computer’s attention.

The invisibility factor presents us with a dilemma. We are happy in one sense that the operations of a computer are invisible. We don’t want to inspect every computerized transaction or program every step for ourselves or watch every computer calculation. In terms of efficiency the invisibility factor is a blessing. But it is just this invisibility that makes us vulnerable. We are open to invisible abuse or invisible programming of inappropriate values or invisible miscalculation. The challenge for computer ethics is to formulate policies which will help us deal with this dilemma. We must decide when to trust computers and when not to trust them. This is another reason why computer ethics is so important.

Dartmouth College

Computer Ethics in the Computer Science Curriculum

Technology and Human Values

Ideally, new technology always advances, enhances and supports human values. But of course this is not an ideal world. The effects of technology are mixed. For example, the “agricultural revolution” and the “industrial revolution” brought many benefits to human beings: food for the hungry, effective medical care for the sick, relief from heavy labor, rapid and comfortable transportation, and so on. Nevertheless, problems were generated: overpopulation, world-threatening weapons, pollution, terrible accidents which killed many people, etc.

Too often, new technology develops with little attention to its impact upon human values. The mass production of automobiles, for example, had profound effects upon cities, travel, entertainment, nature, the environment, even sexual mores. Many of the consequences were unforeseen – even unimagined – by those who created the technology.

Let us do better! In particular, let us do what we can in this era of “the computer revolution” to see that computer technology advances human values.

True enough, we could argue endlessly over the meanings of terms like “privacy,” “health,” “security,” “fairness,” or “ownership.” Philosophers do it all the time – and ought to. But people understand such values well enough to desire and even to treasure them. We do not need absolute clarity or unattainable unanimity before we do anything to advance them.

What is Computer Ethics?

How can we work to make computing technology advance human values? One way is to teach “computer ethics” to the public at large and to our students enrolled in courses in computing and information sciences. But what is computer ethics?

The term “computer ethics” was coined in the mid-1970s by Walter Maner to refer to that field of applied professional ethics dealing with ethical problems aggravated, transformed or created by computer technology. By analogy with the more developed field of medical ethics, Maner focused attention upon applications of ethical theories and decision procedures used by philosophers doing applied ethics. He distinguished “computer ethics” from sociology of computing and from technology assessment.

For nearly two decades, the term “computer ethics” kept this focused meaning. Recently, however, the term “computer ethics” has acquired a broader sense that includes applied ethics, sociology of computing, technology assessment, computer law, and related fields. This broader kind of computer ethics examines the impact of computing and information technology upon human values, using concepts, theories and procedures from philosophy, sociology, law, psychology, and so on. Practitioners of the broader computer ethics – whether they are philosophers, computer scientists, social scientists, public policy makers, or whatever – all have the same goal:

To integrate computing technology and human values in such a way that the technology advances and protects human values, rather than doing damage to them.

Donn Parker pursues this goal by gathering example cases and presenting scenarios for discussion. Judith Perrolle does it by applying sociological theories and tools to data about computing; Sherry Turkle does it by applying psychological theories and tools; James Moor, Deborah Johnson and others do it by applying philosophical theories and tools; and so on.

All of these thinkers and many others address problems about computing technology and human values, seeking to

  1. Understand the impact of computing technology upon human values
  2. Minimize the damage that such technology can do to human values, and
  3. Identify ways to use computer technology to advance human values.

Three “Levels” of Computer Ethics

Computer ethics questions can be raised and studied at various “levels.” And each level is vital to the overall goal of protecting and advancing human values. On the most basic level, computer ethics tries to sensitize people to the fact that computer technology has social and ethical consequences.

This is the overall goal of what some call “pop” computer ethics. Newspapers, magazines and TV news programs have engaged increasingly in computer ethics of this sort. Every week, there are news stories about computer viruses, or software ownership lawsuits, or computer-aided bank robbery, or harmful computer malfunctions, or computerized weapons, etc. As the social impact of information technology grows, such articles will proliferate. That’s good! The public at large should be sensitized to the fact that computer technology can threaten human values as well as advance them.

The second “level” of computer ethics can be called “para” computer ethics. Someone who takes a special interest in computer ethics cases, collects examples, clarifies them, looks for similarities and differences, reads related works, attends relevant events, and so on, is learning “para” computer ethics. (I’ve borrowed this term from Keith Miller, who is the first person I ever heard use it.) By analogy with a para medic – who is not a physician, but who does have some technical medical knowledge – a “para” computer ethicist is not a professional ethicist, but does have some relevant special knowledge. A para medic, of course, cannot do all that a physician does, but he or she can make preliminary medical assessments, administer first aid and provide rudimentary medical assistance. Similarly, a “para” computer ethicist does not attempt to apply the tools and procedures of a professional philosopher or lawyer or social scientist. Rather, he or she makes preliminary assessments and identifications of computer ethics cases, compares them with others, suggests possible analyses.

The third level of computer ethics I call “theoretical” computer ethics, because it applies scholarly theories to computer ethics cases and concepts. Someone proficient in “theoretical” computer ethics would be able not only to identify, clarify, compare and contrast computer ethics cases; she or he could also apply theories and tools from philosophy, social science or law in order to deepen our understanding of the issues. Such “theoretical” computer ethics is normally taught in college-level courses with titles like “Computer Ethics,” “Computers and Society,” “Computers and the Law.”

All three “levels of analysis” are important to the goal of advancing and defending human values. Voters and the public at large, for example, should be sensitive to the social and ethical consequences of information technology. Computer professionals and public policy makers should have “para” computer ethics skills and knowledge in order to do their jobs effectively. And scholars must continue to deepen our understanding of the social and ethical impact of computing by engaging in theoretical analysis and research. In reality, of course, none of these three “levels” of computer ethics is cleanly separated from the others. One blends gradually into the next. Nevertheless, I think it is useful to distinguish them, and I will continue to do so here.

Computer Ethics and the Curriculum

I’m always amazed when I meet computer professionals in business and industry, or even computer science teachers in colleges and universities, who fail to recognize (or perhaps fail to admit) that their profession has social and ethical consequences. Surely, all computer professionals should, at a bare minimum, have “pop” computer ethics knowledge. For this reason, it seems to me that every computer science curriculum ought to address computer ethics issues at least on the “pop” level. This should not be hard to do.

First, however, computer science instructors must admit to each other and to the world that computer technology has social and ethical consequences. Such “sensitized” faculty members would regularly notice computer ethics news stories on television and in the newspapers. They could easily bring photocopies or very brief video tapes of such stories to class. In addition, they could

  1. Regularly express to students and colleagues their own judgments about particular computer ethics cases
  2. Lead an occasional class discussion about such cases, and
  3. Include a relevant question or two on exams or quizzes.

Such teachers do not have to be trained as philosophers or lawyers or social scientists. They don’t even have to have expertise in “para” computer ethics, although having such skills would certainly be an advantage.

Surely, this is the bare minimum amount of computer ethics that a computer science curriculum should provide to its students. To graduate computer science majors with no computer ethics knowledge at all would certainly be irresponsible!

Of course, occasionally doing a bit of “pop” computer ethics in the classroom would not be sufficient to attain national accreditation for a computer science program. In November 1990, the Computer Science Accreditation Commission/Computer Science Accreditation Board (CSAC/CSAB) adopted new criteria for national accreditation. One new criterion states:

  • The social and ethical implications of computing must be included in the program.

According to Professor Joseph Turner of Clemson University, who headed CSAC/CSAB when the latest criteria were adopted,

  • evaluation teams are becoming much more insistent that 1) there be a significant amount of material (at least the equivalent of a few weeks or so of a normal course, and preferably in more than one course) and 2) that the material actually be covered, rather than just being a topic on the syllabus (e.g., at the end of the syllabus) that most instructors never get to. (Turner 1991, p.2)

In addition, Turner says, “An elective course is not relevant to meeting a requirement of the criteria, because all graduates of a program must meet the requirements in order for a program to be accredited.” (Turner 1991, p.2)

For national accreditation, then, a computer science program must require that every student take some computer ethics. How can this be achieved? At least three different approaches have been suggested:

  • The “whole course” approach
  • The “Case-Study-in-Every-Course” approach, and
  • The “capstone software-engineering” approach.

Let me discuss each of these, in turn, beginning with the “whole course” approach.

Courses and Textbooks in “Para” Computer Ethics

CSAC/CSAB intentionally made their accreditation criteria rather general, in order to make it possible to fulfill them in a variety of ways. One way to do so would be to require an entire college course – one that is worth between one and three college credits. The course could be offered by the Computer Science Department, or the Philosophy Department, or the Sociology Department, or any other appropriate academic entity on campus. The main resources needed would be a faculty member with appropriate skills and knowledge, plus a good textbook or other quality curriculum materials. There are several books currently available, both at the “para” and the “theoretical” levels; and both types of course are being taught on campuses across the country. Either type would satisfy national accreditation requirements.

Curriculum materials for courses in “para” computer ethics are readily available. A combination of recent magazine and newspaper articles from the neighborhood newsstand, for example, can be supplemented with items from professional journals, government reports and conference proceedings. As long as the instructor has knowledge and skills from “para” computer ethics, such curriculum materials can provide a sound basis for a stimulating and very current “para” course.

A number of “para” computer ethics textbooks now seem poised to hit the bookstores. One new “para” text is Parker, Swope and Baker’s Ethical Conflicts in Information and Computer Science, Technology and Business (QED Information Sciences 1990). This book is the result of an NSF funded project headed by Donn Parker at SRI International. It includes 54 scenarios that raise or illustrate computing and values questions. A panel of 34 people, including computer professionals, business people, philosophers and lawyers, read and discussed the scenarios, then voted on whether the behavior under consideration was ethical or not. The voting results and highlights of the panel’s opinions are included in the book. (An Appendix collects together various “position papers” clarifying or defending opinions of some of the panelists.)

The panel was not a scientifically selected sample, nor was the project intended to be a formal scientific study. Panelists were “selected on the basis of their known interests in ethics in the computer field” (Parker, et al., 1990, p.9); and they were to “suggest general principles concerning ethical and unethical practices in the computer field,” then ultimately “suggest more explicit ethical principles for computer users.” (p.8) The “general principles” applied to each scenario are stated at the end of the discussion of that scenario. The “more explicit ethical principles” are stated in Chapter VII, “Summary of Ethical Issues.”

As a work in “para” computer ethics, Parker, Swope and Baker’s Ethical Conflicts succeeds in raising the reader’s consciousness about computer ethics issues, providing realistic cases to think about, and suggesting some general ethical ideas or rules that could begin to sort out cases and suggest some conclusions. However, this book does not provide theoretical tools for analysis. Those who are looking for a scholarly treatise or scientific study to serve as a text in “theoretical” computer ethics will find the book inadequate. To use it in a “theoretical” course, one must supplement it with theoretical materials.

Another recent “para” computer ethics text is Forester and Morrison’s Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing (MIT Press, 1990). As a work of “para” computer ethics, Forester and Morrison’s book has several virtues, but also a serious defect. The virtues make it attractive as a high school or undergraduate text. But the defect could turn it into a nuisance – perhaps even a danger – to student users, to their schools, to the community at large.

The authors gather together a large number of cases illustrating “problems created for society by computers.” Relying heavily upon Peter Neumann’s contributions to Software Engineering Notes (“Risks to the Public in Computer Systems”), and also upon magazine and newspaper reports, Forester and Morrison present case after case of computer crimes, software theft, hacking, viruses, invasions of privacy, computer malfunctions, and computer-caused problems in the workplace. Their descriptions often have the “gee-whiz” flavor of sensational newspaper reports. This makes the book entertaining, even for students who normally find reading to be painful. This is a virtue for a work in “para” computer ethics.

There is, however, a very important problem with the book – a problem that can be traced to Chapter 4 on hacking. At the very least, a textbook should do no harm. I am concerned that Forester and Morrison’s book may not pass this basic test. Although the authors do try to give both sides of the argument about hacking, the following kinds of passages seem to stand out:

  • After all, when one ‘breaks’ into a system, nothing has been broken at all – hence there is no obvious intent to cause harm. When a file has been copied or selectively viewed, then what has been stolen? The information is, after all, still there. And if one happens to try out a few programs while browsing through a system, is this almost analogous to seeing someone’s bicycle, riding it for a few meters and then putting it back? Again, what harm has been caused, what crime has been committed? In the eyes of many hackers, only in the most trivial sense could this be considered as unlawful use. (p.60)

One finds in Chapter 4 reference to people who engage in such “cracking” as “modern-day Robin Hoods.”

  • Given that more and more information about individuals is now being stored on computers, often without our knowledge or consent, is it not reassuring that some citizens are able to penetrate these databases to find out what is going on? Thus it could be argued that hackers represent one way in which we can help avoid the creation of a more centralized, even totalitarian government. (p.49)

Hackers are compared with newspaper reporters defending freedom of the press or freedom of information; and an argument is presented that, “in a fair and open society,” hacking that destroys nothing and steals nothing should be tolerated – indeed, that prevention of such hacking would be similar to racial, ethnic or religious repression. (p.50)

Chapter 4 also includes: (1) descriptions of equipment needed for hacking, (2) specific information on how to break into other people’s computer systems, and (3) references to comprehensive how-to-do-it hackers’ guidebooks.

For the above-described reasons, I worry that this book (despite the authors’ efforts to give both sides of the story) may actually encourage students to break into other people’s computer systems – and such students will hack self-righteously, believing that they are defending some noble cause! But just a little thought about “harmless” hacking reveals the tremendous damage that it can do: Is it okay to break into someone’s computer and read private papers, unpublished manuscripts, love letters? Is it ethical to hack into the computer systems of doctors, lawyers, banks, accountants, hospitals, psychiatrists and read the private files of patients, clients and customers? Is this behavior noble? Does one have a right to do it? Clearly it is unethical, as the following two hypothetical scenarios illustrate:

  • A burglar picks the lock (without breaking anything) of your medical doctor’s office. He enters the office, opens various filing cabinets and reads private medical records about you and other patients. He alters no data in the files.
  • A hacker breaks into your doctor’s computer system. Without changing anything or causing the system to crash, he opens various files on the hard disk and reads private medical records about you and other patients.

To me, it seems clear that these two actions are ethically equivalent. In both cases, there has been an egregious violation of the right to privacy. The right to privacy has not been repealed just because computers were invented! Surely private records should remain private whether they are in a filing cabinet or on a disk. After all, privacy is not a little thing to be sacrificed to a hacker’s curiosity. It is a basic human right; and without it, our lives would be much worse. Yet students who read Forester and Morrison’s Computer Ethics may come to consider hacking into private files to be noble!

For all these reasons, I am concerned that this book, if widely adopted as a text, might lead to a new wave of student hacking. School computers would probably be the first targets, then computers at other locations via networks. And the students, if caught, could be expelled from school or arrested!

Courses and Textbooks in “Theoretical” Computer Ethics

If one prefers to offer a course in “theoretical” computer ethics, it is always possible to use two textbooks – one in “para” computer ethics and one in “theoretical” computer ethics. Indeed, I intend to recommend below that one begin a course with “para” computer ethics (either from a textbook or other supplemental materials), even if the course will eventually become very theoretical.

Textbooks in applied ethics usually start with a chapter or two of theory. Later chapters then apply those theories to concepts or problems in specific subject matter like medicine, business, computing or the environment. Most applied ethics textbooks have this “theory-first” structure, because applied ethics courses traditionally begin with theory and consider specific cases later.

Such courses can be reasonably successful (I have had some success with them myself). Nevertheless my own experience indicates that a better approach is to begin with realistic cases and scenarios, rather than theory. After all, at the start of a course most students relate better to specific cases than to abstract theories and definitions. And by choosing cases which raise troubling ethical and social questions, the teacher causes the student to “feel the grip of a problem” and long for clarification or guidance. At that point, once the student feels a need, the introduction of theory is welcome, rather than a frightening or “boring” development. In a “theoretical” computer ethics course, therefore, I recommend that the class begin with “para” computer ethics materials – especially provocative cases and scenarios – then move on to “theoretical” considerations.

Several “theoretical” textbooks have been available since the mid-1980s, and publishers are starting to bring more to market. (I describe only two example textbooks here.) For those interested in a philosophical approach, Deborah Johnson’s Computer Ethics (Prentice-Hall, 1985) has been the standard textbook across the country. For a social science approach, Judith Perrolle’s Computers and Social Change: Information, Property and Power (Wadsworth, 1987) offers an impressive sociological tour de force. (See “Selected Bibliography” below for other textbook suggestions.)

Deborah Johnson’s Computer Ethics begins with two introductory chapters: one on philosophers’ ethical theories (utilitarianism and Kantianism) and one on professional ethics. In addition, there are four computer-specific chapters covering: liability for computer malfunctions, computers and privacy, computers and power, and ownership of software.

Johnson’s book is very clearly written, and students find it easy to understand and use. It contains effective applications of philosophical theories, like John Locke’s account of property or the ethical theory of utilitarianism. As a work of “theoretical” computer ethics, the book is most successful in Chapter 4 where James Rachels’ analysis of the concept of privacy is effectively used to shed light upon issues in computing and privacy.

A shortcoming of Johnson’s book is its brevity. Many of today’s “hot” topics are not even included – topics like computer “viruses” and other malicious software, computer-human interface questions (ergonomics), replacement of human decision-making by computers, using computers to empower the disabled, using computers in war, and so on. Because of this, the book is rather dated. Nevertheless, appropriately supplemented with a second text or recent journal articles, Johnson’s Computer Ethics remains an effective teaching tool – the best philosophical text on the market.

Judith Perrolle’s Computers and Social Change (Perrolle 1987) is intended to provide “a framework for understanding the social context and consequences of information technology, including the role of information in human history.” (Preface, xvii) It is divided into four sections, which Perrolle describes as follows:

  1. The first examines the social context of information technology, providing a conceptual framework for understanding the computer as an information-processing tool capable of producing enormous changes in human life far beyond the immediate purposes for which it was designed.
  2. The second part of the book considers immediate effects of computers by examining the subject of ergonomics – the human/technology interface.
  3. The third part of the book analyzes the computer transformation of work.
  4. The fourth section of the book deals with the computer’s effects on information, property, and power in democratic institutions.

This book is rich with ideas, theories, concepts and analyses. There are subsections on a wide variety of topics, including for example the scientific study of information, the computer as a tool, theories of social change, computers and capitalism, the psychology of human/computer interaction, computer-aided socialization, the meaning of work, office automation, expert systems in the professions, information as property, computers and social control, information and the public interest, the social future of information, and so on.

Most computer science undergraduates will find Perrolle’s book to be very challenging, but also very rewarding and worth the struggle. In my judgment, the book is best suited to students late in their college careers, when they are more mature and are likely to have had some courses in history and social theory. Of course, there are other quality textbooks besides the above-described ones (see the selected bibliography below), and new texts are coming onto the market. A “definitive” textbook that captures most of the market has not yet been published. The next generation of texts will likely combine both the “para” and “theoretical” levels; and they will employ more than one discipline (for example, philosophy, sociology and law).

The Case-Study-in-Every-Course Approach

So far, I have focused attention upon whole courses in computer ethics. However, in his article “Integrating Computer Ethics into the Computer Science Curriculum” (Miller 1988), Keith Miller makes an impressive case for including computer ethics components in all (or most) computer science courses in the curriculum. According to Miller,

  • the societal and technical aspects of computing are interdependent. Technical issues are best understood (and most effectively taught) in their social context, and the societal aspects of computing are best understood in the context of the underlying technical detail. Far from detracting from students’ learning of technical information, including societal aspects in the computer science curriculum can enhance students’ learning, increase their motivation, and deepen their understanding. (Miller 1988, p.38)
  • [Therefore, computer science] Professors do not need to artificially force ethics into their courses, and ethics need not force something else out. (p.39)

To illustrate his point, Miller takes the eight required courses in the ACM Curriculum ’78 report (CS 1 through CS 8) and shows how to integrate helpful and relevant computer ethics case studies into each course. The case studies all illustrate how technical concepts can have value implications. For each course, Miller

  1. Identifies a central technical concept
  2. Introduces an example case to which the concept applies
  3. Shows how an ethical question arises from the case, and
  4. Raises a series of ethical and social questions.

(Miller’s article is a first-class example of “para” computer ethics; and I strongly recommend it.) It would be wonderful – a dramatic improvement in the computer science curriculum – if every undergraduate course in computer science included helpful computer ethics cases and analyses like Miller’s. Essentially, what Miller recommends is that teachers in computer science departments become adept at “para” computer ethics analysis and use it in their classes.

My own experience with faculty colleagues in a number of computer science departments, however, suggests that they are reluctant to learn “para” computer ethics skills or to use them in their courses. Computer science faculty members would much prefer, it seems, to have someone at the college offer a separate course in computer ethics, while they continue to do what they have always done – ignore the social impact of computing. This is a pity, for I believe Miller’s suggestion is an excellent one to improve the computer science curriculum.

In any case, it is unclear whether a computer science program that fits Miller’s description would achieve national accreditation. Presumably it would, if

  1. the accreditors are satisfied that nearly every course in the program would actually include “para” computer ethics components (not merely list them in the syllabus and then never get to them), and
  2. the sum total of all such components is equivalent in time and effort to roughly a one-credit college course.

The Software Engineering “Capstone” Approach

Another way to integrate computer ethics into the computer science curriculum has been tried with some success by Don Gotterbarn (Gotterbarn 1991) and others. This “software engineering approach” combines computer ethics and a software engineering project late in the student’s college career. Gotterbarn reports that he has done this in two different ways:

  1. By integrating ethical issues into a project-oriented software engineering course, and
  2. By integrating a large software-development project into a computer ethics seminar for advanced undergraduates in computer science.

In both courses, students are involved in the nitty-gritty tasks and decisions required to develop a piece of software; and in the process, various computer ethics issues arise and are discussed. According to Gotterbarn, this is a more practical and more effective way to teach computer ethics than the traditional “theoretical” computer ethics course, especially

  • if the intent is to meet the special needs of the computer science student… by conceiving computer ethics more narrowly as the study of ethical issues which face the practicing computer professional (Gotterbarn 1991, p.2)

Gotterbarn argues that such a “capstone” course taken late in a student’s college career is preferable to an early “theoretical” course:

  • There are several reasons for making it a late course. For example, many ethical issues faced by the computing professional are tied directly to his or her use of professional skills. The beginning student is not yet in a position to understand them. But capstone courses taken late in the student’s career can (1) tie together elements from all the theoretical courses, (2) convey a sense of professional responsibility not covered in other courses, and (3) deal with the true nature of computing as a service to other human beings. (Gotterbarn 1991, p.1)

Gotterbarn makes an impressive plea for such “capstone” courses, and there are real advantages to placing them late in a student’s college career. On the other hand, if a late course is the only one in which a student encounters computer ethics, it could leave the mistaken and dangerous impression that ethical concerns are separate, extra considerations which can and ought to be kept apart from the rest of computer science. It seems to me that a balanced computer science curriculum would integrate computer ethics considerations in a variety of ways throughout the curriculum. (See below for my recommendations.)

Desired Outcomes of Computer Ethics Instruction

No matter which teaching approach one uses, the educational goals of computer ethics instruction are the same:

  1. To sensitize students to computer ethics issues
  2. To provide tools and methods (either “para” or “theoretical”) for analyzing cases
  3. To provide practice in applying the tools and methods to actual or realistic cases
  4. To develop in the student “good judgment” and “helpful intuitions” for spur-of-the-moment decision making as computer professionals and computer users.

Recommendations

Computer ethics should not be considered “something extra,” an afterthought to be “added on” to the curriculum. And it certainly should be allocated more than a single credit out of the 124 undergraduate credits needed to graduate from a typical college. Computing has become a complex and growing part of society – with profound social and ethical implications! The only responsible way to include computer ethics in the curriculum, therefore, is to integrate it thoroughly:

  1. Every computer teacher should become sensitized to the social and ethical impacts of computing. He or she should engage in “pop” computer ethics by discussing in the classroom current cases that are reported on TV or in the newspapers.
  2. Every computer science program should include some teachers skilled in “para” computer ethics who integrate case studies into their courses in the way that Keith Miller recommends.
  3. Every computer science program should require a three-credit course in computer ethics to be taken relatively late in the student’s career. This could be a “capstone” course, or a “para” computer ethics course, or a “theoretical” course using theory from philosophy or social science or law to analyze cases and deepen student understanding.
  4. Every college worthy of the name should have at least one scholar doing “theoretical” computer ethics research to deepen and broaden our society’s understanding of the impact of computing upon human values.

Such a computer science program – and such a college – would certainly fulfill national accreditation requirements in computer ethics. In addition, and much more importantly, the program and the college would thereby strive to make computing technology advance and protect human values, rather than damage them. Society should accept nothing less from its colleges and universities!

Southern Connecticut State University

Selected Bibliography

Geoffrey Brown, The Information Game: Ethical Issues in a Microchip World, Humanities Press International, 1989.

David Burnham, The Rise of the Computer State, Random House, 1984.

Terrell Ward Bynum, ed., Computers and Ethics, Basil Blackwell, 1985. (A special issue of the journal Metaphilosophy, October 1985)

Computing Curricula 1991: Report of the ACM/IEEE-CS Joint Curriculum Task Force, ACM Press and IEEE Computer Society Press, 1991.

Charles Dunlop and Rob Kling, eds., Computerization and Controversy: Value Conflicts and Social Choices, Academic Press, 1991.

Tom Forester and Perry Morrison, Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, The MIT Press, 1990.

Batya Friedman and Terry Winograd, eds., Computing and Social Responsibility: A Collection of Course Syllabi, Computer Professionals for Social Responsibility, 1990.

Donald Gotterbarn, “A ‘Capstone’ Course in Computer Ethics” in Terrell Ward Bynum, Walter Maner and John L. Fodor, eds., Teaching Computer Ethics, Research Center on Computing & Society, 1992.

Deborah G. Johnson, Computer Ethics, Prentice-Hall, 1985.

Deborah G. Johnson and John W. Snapper, Ethical Issues in the Use of Computers, Wadsworth Publishing Company, 1985. (Out of print)

Thomas Milton Kemnitz and Phillip Vincent, Computer Ethics, Trillium Press, 1985.

Keith Miller, “Integrating Computer Ethics into the Computer Science Curriculum,” Computer Science Education, Vol. 1, 1988, 37-52. (Reprinted in Terrell Ward Bynum, Walter Maner and John L. Fodor, eds., Teaching Computer Ethics, Research Center on Computing & Society, 1992.)

Donn Parker, Susan Swope and Bruce N. Baker, Ethical Conflicts in Information and Computer Science, Technology and Business, QED Information Sciences, 1990.

Judith A. Perrolle, Computers and Social Change: Information, Property and Power, Wadsworth, 1987.

Jane Robinett and Ramon Barquin, eds., Computers and Ethics: A Sourcebook for Discussions, Polytechnic Press, 1989.

Kathryn Schellenberg, ed., Computers in Society, 3rd ed., Dushkin Publishing Group, 1990.

Sherry Turkle, The Second Self: Computers and the Human Spirit, Simon & Schuster, 1984.

Joseph Turner, An e-mail message to Keith Miller regarding computer science accreditation, March 27, 1991. (Included in the “Track Portfolio” of the Teaching Computing & Values “Track Pack” at the National Conference on Computing and Values, New Haven, CT. August 1991)

Integrating Computer Ethics into the Computer Science Curriculum

Keith Miller

The societal and technical aspects of computing are interdependent. Technical issues are best understood (and most effectively taught) in their social context, and the societal aspects of computing are best understood in the context of the underlying technical detail. By including the study of computer ethics in their computer science curriculum, educators can increase students’ motivation and deepen their understanding. With a case study approach, the value dimensions of technical issues can be incorporated naturally into existing lectures and used with existing textbooks. Specific case studies related to courses from ACM’s Curriculum ’78 illustrate the utility of this approach.

The ACM’s Curriculum ’78 (ACM Curriculum Committee, 1978), which has dramatically influenced what computer science departments teach, displayed ambivalence towards including the societal aspects of computing in a computer science curriculum. Curriculum ’78 includes a specific elective course (CS 9) devoted to the “Societal Aspects of Computing,” but the committee’s report also suggests that these issues can be discussed throughout the curriculum. Exactly what issues are to be covered in which courses was not clarified. Recent publications on computer science curricula have given even less attention to the societal impact of computing. For example, Gibbs and Tucker (Gibbs & Tucker, 1986) omit any reference to these aspects in their “Model Curriculum for a Liberal Arts Degree in Computer Science.”

Both Curriculum ’78 and the Gibbs and Tucker curriculum reinforce the idea that computing can legitimately be separated into technical issues and non-technical issues. This article is based on a contrary notion: that the societal and technical aspects of computing are interdependent. Technical issues are best understood (and most effectively taught) in their social context, and the societal aspects of computing are best understood in the context of the underlying technical detail. Far from detracting from students’ learning of technical information, including societal aspects in the computer science curriculum can enhance students’ learning, increase their motivation, and deepen their understanding.

“Societal aspects of computing” covers a broad range of topics – much too broad to be discussed in one article. This article focuses on the process of ethical decision-making by computer professionals, or “computer ethics.” A related but distinct topic is the ethical impact that computer technology has on society as a whole (Baum, 1980). This article includes such aspects only as they affect an individual computer professional’s decisions.

Can Computer Ethics Be Taught in the Computer Science Curriculum?

Computer scientists as well as ethicists (Mahowald & Mahowald, 1982) have raised serious questions about teaching ethics in a science curriculum. The issues they raise include the following:

  • The curriculum already is overcrowded. Including ethical issues requires that important technical issues be ignored or given less attention than they should be given.
  • Computer science faculty have little experience in ethics, and they are uncomfortable teaching in this area.
  • Professors unschooled in formal ethical techniques may fall into the trap of preaching a moral code (an abuse of their position) instead of raising questions, elaborating possible answers, and exploring justifications (activities which properly belong to ethics).
  • Any ethics taught will be diluted at best and possibly erroneous.

These legitimate reservations require a careful delineation of realistic goals and straightforward techniques in the teaching of computer ethics by computer science professors. An important but limited goal is to help students become more aware of ethical questions related to the use of computers. This goal does not require computer science professors to present formal ethical techniques for confronting those questions; these matters are best left to professionals in the field of ethics. Instead, a computer science professor can lead students to recognize that such ethical questions exist, and to explore how those questions may arise in their future professional lives.

The teaching techniques described below are intended to raise questions. The most careful reasoning about answers to these questions will require study outside the traditional computer science curriculum. Ideally, both professors and students will be motivated to learn more about ethical theory and its application to computing. A course team taught by an ethicist and a computer scientist is a productive setting for such learning (Baum, 1980), but a discussion of such a course is outside the scope of this article.

Technical Concerns, Ethical Questions

Several computer science educators have asked the author of this article, “What would you take out of the curriculum to make room for computer ethics?” The intent of this article is to show how computer ethics is an integral part of the technical issues already being taught. Professors do not need to artificially force ethics into their courses, and ethics need not force something else out. Instead, the value dimensions of technical issues can be naturally incorporated into existing lectures and used with existing textbooks.

Although there are numerous strategies for incorporating computer ethics into a curriculum (The Hastings Center, 1980), this article focuses on one: a case study approach. The idea is straightforward: the professor distributes or presents material concerning the use of computers and then students and the professor discuss questions about the material. Cases can be fictionalized scenarios, news items, book excerpts, interviews, and the like. Ideally, the professor should encourage students to question assumptions and to identify the values at stake in the cases. The case studies can show that technical computer science concepts are intertwined with questions society must ask and answer when people use computers.

In order to illustrate how case studies can be integrated into a computer science curriculum, this section includes a short discussion of the required courses in Curriculum ’78, classes CS 1 through CS 8. For each class, the section identifies one technical concept or theme typically covered in such a class. A case study illustrates how the technical concept has value implications. A short description of ethical questions accompanies each case study.

CS 1: Computer Programming I

  • Technical concept: Garbage in, garbage out

CS 1 begins an emphasis on good programming style, an emphasis that continues through the entire curriculum. One element of good programming style is checking input data. In even the simplest programs, students are taught to test for validity before using data in further processing. The increasingly interactive environments used by CS 1 students underscore the importance of the GIGO concept.
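
The habit of testing data for validity before further processing can be shown in a few lines. Here is a minimal Python sketch of the idea (the function name and the particular range check are my own illustration, not part of Curriculum ’78):

```python
def read_age(text):
    """Validate raw input before using it in further processing:
    reject non-numeric and implausible values rather than
    silently computing with garbage."""
    try:
        age = int(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"implausible age: {age}")
    return age
```

A program built from such checked readers fails loudly at the point of bad input, rather than quietly storing garbage for later use.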

  • Example case: The FBI’s National Crime Information Center

The FBI’s National Crime Information Center is a database system that includes criminal history records, including outstanding warrants. A 1983 Office of Technology Assessment study (Laudon, 1983) claims that less than half the records in this database are complete, accurate, and unambiguous.

  • Ethical question: Who is responsible for the accuracy of stored information?

Hardware and software developments have made information a commodity in demand. Machine-readable information is reasonably priced and readily available in mailing lists, credit bureaus, and numerous databases. However, society has yet to deal effectively with difficult questions about the quality of information in databases (Burnham, 1985; Davis, 1987). Here are some questions specifically about the FBI database:

What are the potential benefits of a system such as the National Crime Information Center? If the information included in the database were more accurate, could there still be objections to its use? How many different databases include your name? How many of these databases include your social security number? Is all the information in those databases accurate? How do you know? To what extent is a computer programmer responsible for the accuracy of information in a database? Does inclusion in a database lend authority to information that may be unwarranted? What kind of automatic data checks might help reduce erroneous information in the FBI’s files? What kind of manual data checks might help? What economic and political factors will encourage and discourage additional data checking in this and other databases?

If students seem mystified about why someone innocent of a crime might object to aspects of the FBI database, the professor can include case studies about people victimized by incorrect or misleading information in large databases (Rosenberg, 1986). However, if students have concluded that such databases are uniformly evil, the professor can introduce cases in which law enforcement officials use computers to catch otherwise elusive criminals (Shannon, 1987).

CS 2: Computer Programming II

  • Technical Concept: Searching and sorting techniques

CS 1 introduces sorting, and that theme is continued in CS 2. Sorting is required for the most efficient searching techniques.
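
The dependence of efficient searching on sorted data is easy to demonstrate. A standard binary search sketch in Python makes the point: the O(log n) behavior is only correct when the input is already sorted.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Runs in O(log n) time -- but only if sorted_items is sorted;
    on unsorted data the halving step discards the wrong half."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Classroom discussion can then contrast this with the O(n) linear scan that unsorted data forces, motivating the sorting techniques covered in CS 2.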

  • Example case: Police department programmer provides arrest records

This case study is fictional, adapted from a collection of ethical case studies involving computers (Parker, 1981).

A computer programmer was in charge of a police database system which included access to local and national arrest records. The programmer had a friend in the personnel department of a local corporation, and the friend often did background investigations of potential employees. Job applicants signed release forms authorizing their employer to do background investigations. As a personal favor, the programmer gave copies of arrest records to the friend in the personnel department. Since police arrest blotters are in the public domain, the two friends saw no ethical problem in making an individual’s complete record available more conveniently. The programmer did not request permission to perform this service, since the same service was regularly performed for courts, attorneys, law enforcement organizations, and banks.

  • Ethical question: Who owns information?

As discussed in conjunction with CS 1, computers make information available cheaply and quickly. As students begin to understand how they can facilitate access to this information, the question about who will use this access becomes important. Computer professionals often have increased access to important and potentially damaging personal information, and they will make decisions about how that information is or is not disseminated and protected. Some discussion questions about the police department programmer follow:

Was the personnel employee justified in seeking the arrest records? Was it ethical for the programmer to supply them? Does a corporation have the right to examine police records of potential employees? Do private citizens have the right to examine their own police records? Should individuals and organizations both have convenient access to these records? If so, who should pay for this access? Most of the people routinely given access to the records were associated with the legal system, but banks were also given access. Why do you think banks were given access? If banks were the only organizations given access, do you think this was appropriate?

CS 3: Introduction to Computer Systems

  • Technical concept: Assembler code is machine efficient but difficult to write.

An important objective in CS 3 is to familiarize students with the finely detailed low-level control available with assembler programming. Another important point is the difficulty of controlling the complexity of a large assembler program, even with the addition of a macro facility.

  • Example case: Malfunction 54

In 1986, at least two people died and at least one other was maimed after receiving excessive radiation from a linear accelerator radiation machine called the Therac 25 (Joyce, 1986). The first death occurred after a treatment on March 21. Technicians and doctors carefully examined the equipment and the incident, and concluded the machine was safe. On April 11, however, a second patient was given what proved to be a fatal overdose from the machine. After this incident, the problem with the machine was finally discovered: a bug in the assembler language program which controlled the machine.

The Therac 25 has two modes: x-ray mode and electron beam mode. In x-ray mode, a very high powered beam strikes a heavy metal plate; that plate gives off x-rays which are focused on the patient. In electron beam mode, the plate is retracted and a much lower powered beam of electrons is focused directly on the patient. A technician controls the Therac 25 using a PDP-11 minicomputer. An assembler program on the PDP-11 interprets the technician’s commands, and controls the radiation machine as a peripheral. The software includes two different methods for correcting a mistake when entering commands: retyping the command completely, or using the up-arrow to edit the mistaken command. Under one particular set of circumstances, when the technician used the up-arrow edit to change from the x-ray mode to the electron mode, the assembler program retracted the heavy metal plate (a correct action) but did not lower the power of the beam (an incorrect omission). Thus, the high power beam was focused directly on the patient, delivering a lethal dose of radiation. Whenever this situation occurred, a sensor detected the large amount of radiation and flashed a warning on the monitor, “MALFUNCTION 54.” The significance of this warning was not understood until after April 11.
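
The kind of omission just described can be sketched as a toy model. This is my own illustrative Python, not the actual Therac 25 code (which was PDP-11 assembler); it shows only the structure of the bug: one code path updates all the coupled settings, while the edit path forgets one.

```python
class TherapyMachine:
    """Toy model (author's construction, not the actual Therac 25
    software) of the omission in the case above."""
    HIGH, LOW = "high", "low"

    def __init__(self):
        self.mode = "xray"
        self.plate_in = True
        self.power = self.HIGH

    def set_mode_full(self, mode):
        """Correct path (retyping the command): every coupled
        setting is updated together."""
        self.mode = mode
        self.plate_in = (mode == "xray")
        self.power = self.HIGH if mode == "xray" else self.LOW

    def set_mode_edit(self, mode):
        """Buggy edit path: the plate moves, but the beam power
        is never lowered to match the new mode."""
        self.mode = mode
        self.plate_in = (mode == "xray")
        # missing: power update -> high beam can hit the patient directly

    def unsafe(self):
        """The lethal configuration: full power with the plate retracted."""
        return self.power == self.HIGH and not self.plate_in
```

The point for CS 3 students is that in large low-level programs, nothing but programmer discipline keeps such coupled state changes together; a missed update in one of several command paths is exactly the sort of bug that testing can miss for years.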

The particular circumstances which led to MALFUNCTION 54 were rare. The manufacturer claims that in tens of thousands of treatments over a period of years, no such incidents had occurred prior to March 1986. Perhaps because of this spotless safety record, both the owners of the machines and the manufacturer assumed that some extraordinary event had caused the first injury, and therefore the 11 installations that owned Therac 25 units continued using them until after the second deadly accident.

  • Ethical question: When are computer bugs unethical?

What aspects of assembler programming make testing particularly difficult? What are the advantages of assembler programming? What kinds of tasks do you think are appropriate for assembler language programming, and which tasks are not? Testing is always incomplete; what are the ethical implications of this when programming, especially when using assembler? Malfunction 54 killed and maimed; what responsibility do the following people have for the accidents: the assembler programmer; the manager in charge of the programming; the manufacturing company that produced the entire machine; the hospitals that purchased the machine; the technicians who ran the machine; the engineers who tested the machine after the first accident but before the second?

CS 4: Introduction to Computer Organizations

  • Technical concept: Parity error detection and correction enhance reliability.

The inclusion of parity and other error detection and correction mechanisms has dramatically increased the reliability of computing hardware. CS 4 introduces other details about communication protocols between components of a computer system.
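
The even-parity mechanism itself takes only a few lines to demonstrate. A minimal Python sketch (hardware does this with XOR gates, but the logic is the same):

```python
def add_parity(bits):
    """Append an even-parity bit so the stored word always
    contains an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Any single flipped bit makes the count of 1s odd, so a
    one-bit error is detected -- though not located or corrected."""
    return sum(word) % 2 == 0
```

Students can verify that a single parity bit detects every one-bit error but misses two-bit errors, which motivates the richer error-correcting codes (such as Hamming codes) also discussed in CS 4.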

  • Example case: Reliability in an embedded system (fictional)

A computer scientist and an electrical engineer have formed a partnership. They are developing an embedded computer system to control braking on automobiles. One of the partners wants to include an additional error detection bit in all the hardware. The other partner insists that this added hardware will make their product prohibitively expensive: “If nobody ever buys the thing, the added safety is useless.” Both partners agree that, even without the extra error detection bit, their design should be a safety improvement over the currently popular design.

  • Ethical question: How reliable is reliable enough?

The tradeoffs between time, space, safety, and cost are important in hardware and software. These tradeoffs include considerations of values as well as technical measurements. What are the competing interests that affect the decisions being made in this case? How much is safety worth in this case? Does the maker of an embedded system have a responsibility to make the safest product possible? If so, how does one determine what is the safest possible? If not, how does one determine what is safe enough? Who will ultimately make the decision about safety in this case? Who will be most directly affected by that decision?

CS 5: Introduction to File Processing

  • Technical concept: A database gives a measure of control over access to the information stored in the database.

The distinction between a database and less formally organized data is the degree of control that can be exercised in a database system. A student learns to distinguish between the perspectives of a database administrator, an applications programmer, and an end user.

  • Example case: The display of patients waiting for a transplant

This fictional case is similar to the dilemma faced by physicians dealing with kidney transplants (Starzl et al., 1987).

A consultant is hired to design and implement a program that allows a large medical center to communicate with a nationwide organ donation database. Much of the interface software will be adapted from software made available by the administrators of the national database. The medical center uses the database to locate organs they need for transplants and to identify patients who need any organs the center has but cannot use.

In the section of the code that displays patients who need a particular organ, the consultant finds that the patients are ordered alphabetically. If more than 20 patients are waiting for a particular organ, the first 20 are displayed and the user can either select one of these 20 or move on to view the other patients. The consultant decides that this system gives an unfair advantage to patients with last names early in the alphabet. The consultant goes to the physicians in charge and describes the problem.

The physician in charge explains that the issue of which patients get an organ is very hot politically, and that the alphabetic listing was used by the national database in order to avoid the issue. He suggests that the medical center itself is struggling with the issue, but wants the listing alphabetically until they make a final decision. The physician has no idea when this decision will be made, and the consultant will probably no longer work for the center by then. The consultant feels uncomfortable with installing the program as is, but also feels uncomfortable holding up the project which is urgently needed to speed up organ donation procedures.

  • Ethical question: How much control can programmers expect to have over the use of their programs?

Information is power, and controlling that information is a heavy responsibility. Something as trivial as ordering entries on a screen may change lives dramatically. In the case study above, what obligations does the consultant have to the medical center? What obligations does the consultant have to the patients listed in the database? Discuss the responsibilities of the national database administrators, the medical center as a whole, and the physician in charge of the organ transplants in particular. A computer professional is paid for technical expertise; is a computer professional ever justified in volunteering opinions on non-technical issues? What are some options open to the consultant at this point? Are there ways to change the interface program which would reduce the bias towards certain patients? Who should bear the costs of any software needed to change the interface program?

CS 6: Operating Systems and Computer Architecture I

  • Technical concept: Operating systems make the resources of a machine available through a high level interface.

Students are encouraged to think critically about the features of various operating systems in CS 6. Often students are struck as much by the similarities between operating systems as by the differences.

  • Example case: The look and feel of an operating system

A small software firm is developing an operating system for an announced microcomputer. Because their time is short, the decision is made to borrow heavily from existing operating system designs in developing their product. At one point, the team leader insists that some command names be altered to avoid legal problems. Citing the similarities between MS-DOS, UNIX, and many UNIX-like operating systems, one member of the team suggests that any correspondence between the new system and these older systems will help orient users to the new system, and that the new system will still be significantly different from previous systems.

  • Ethical question: When is sharing ideas stealing products?

The legal issues involved with two programs that have the same “look and feel” are still being resolved in the courts. Today, however, computer professionals have to make decisions based not only on legal concerns but also on what they believe to be right. In the case above, what obligations do team members have to their team leader and their company? What obligations does the team leader have to his team members and to the company? What obligations, if any, are owed to the developers of the operating systems that are serving as the basis of the new system? If developers are more restricted in emulating the look and feel of previously marketed software, who will benefit? If developers are less restricted, who will benefit?

CS 7: Data Structures and Algorithm Analysis

  • Technical concept: The analysis of an algorithm requires complexity information and realistic upper bounds on the amount of data to be processed.

Students in CS 7 are encouraged to look more precisely at algorithms and data, and to make judgments about the practical application of these concepts to large scale projects. The course stresses the importance of quantitative methods in designing software.

  • Example case: Star Wars debate

Ever since President Reagan announced the Strategic Defense Initiative, computer scientists have been engaged in a lively debate on the merits of various proposals. All the current SDI proposals include a large computing system, and computer science students learning about complexity analysis are in a good position to intelligently examine arguments about SDI. Many different articles present technical opinions on the computational challenges inherent in SDI, including Parnas, 1985; Lin, 1985; Moore, 1985; Ornstein, 1986; New York Times Service, 1987.

  • Ethical question: How do we distinguish between technical judgments and moral choices?

What are the technical merits of the arguments for continuing SDI research? What are the technical merits of the arguments against continuing SDI research? What are the technical merits for and against early development of an SDI system? What are the obstacles to an accurate estimate of the eventual complexity of an SDI system? In the articles you read, can you discern any political bias in the technical analysis presented? Assume that you believe that SDI as it is currently proposed is fatally flawed; could you take a job in SDI research because you believe that SDI research will have positive results that are not now predictable? Assume that you believe that SDI as currently proposed is realistic; would you give expert congressional testimony which deliberately oversimplified the situation so that the public would understand the concepts, even though they wouldn’t get a true picture of the complexity?

CS 8: Organization of Programming Languages

  • Technical concept: Newer programming languages encourage data abstraction to improve reliability and to allow formal proofs of correctness.

Formal specifications and proofs of correctness have influenced programming language development dramatically. Correctness proofs examine agreement between a formal specification and its implementation.
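
The separation between a specification and its implementation can be sketched concretely. In this illustrative Python (my own example, using sorting rather than the nuclear plant case), the specification is stated independently of any implementation; runtime checking of it is, of course, only a pale substitute for a proof, which must argue correctness for every possible input rather than the ones we happen to test.

```python
def satisfies_spec(inp, out):
    """Specification for sorting, stated independently of any
    implementation: the output is ordered and is a permutation
    of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def insertion_sort(items):
    """One implementation. A correctness proof would show that
    its result satisfies the specification for *all* inputs;
    testing can only check particular ones."""
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result
```

The gap between "passes the checks we ran" and "provably satisfies the specification" is exactly where the ethical questions in the following case arise.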

  • Example case: Nuclear power plant specification and implementation

This fictional case study is based on a scenario presented by Nancy Leveson (Leveson, 1986).

Following a detailed English specification from a systems analyst, a programmer produces code that controls safety features in a nuclear power plant. One part of the specification states:

  • Whenever one of the plant sensors discovers a potentially dangerous situation, the task monitoring these sensors should shut down all plant systems. When plant personnel have rectified the situation that caused the exceptional sensor condition, the program will allow a manual override that will restart the plant systems.

The programmer tests the code, and installs it at the plant. The systems analyst views the programmer’s test results, and attests to the correctness of the program. The program is installed and runs for six months without incident.

One component of the nuclear power plant controls fuel rods and the flow of water into the reactor in order to regulate the temperature of the reaction. One of the sensors in the reactor has a hardware failure, and gives a false, abnormally high reading. The program controlling safety immediately shuts down all reactor systems. Unfortunately, at the moment the sensor failed, the valve for the cooling water had just begun to open because the temperature was starting to rise in the reactor; insufficient water gets to the reactor because of the shutdown, and the temperature continues to rise. While the sensor in the reactor is being replaced, the reactor overheats, and some radioactive steam is emitted into the atmosphere as the pressure builds up. A technician notices the problem and, even though it is against safety procedures, manually overrides the safety system so that the cooling water valve opens. The communities surrounding the plant are aroused both by the radioactivity that was released and by the potential disaster that was narrowly avoided.
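
The flaw is visible in a literal, toy rendering of the specification (this is my own illustrative Python, not code from the case): "shut down all plant systems" is unambiguous, and that is exactly the problem.

```python
def safety_shutdown(systems):
    """A literal implementation of the specification: on any
    sensor alarm, shut down *all* plant systems."""
    return {name: False for name in systems}

# The omission: "all plant systems" includes the cooling valve
# that the overheating reactor needs at exactly that moment.
plant = {"fuel_rod_drive": True, "cooling_valve": True, "turbine": True}
after_alarm = safety_shutdown(plant)
```

The code agrees with the specification perfectly; the specification itself is what fails to distinguish systems that are dangerous to leave running from systems that are dangerous to stop.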

The systems analyst in this case blames the nuclear physicist who signed off on his specifications. The physicist blames the systems analyst for overlooking the obvious. Both the analyst and the physicist claim that the programmer should have done more testing. The communities around the plant want the plant shut down.

  • Ethical question: What is the difference between assessing blame and taking responsibility?

Who is responsible for correctness? In this case, discuss the responsibilities of the physicist, the systems analyst, and the programmer. Is it possible that everyone can do a good, thorough job and still allow a dangerous situation to occur? Discuss any weaknesses of formal proofs of correctness; what ethical implications are there when correctness proofs are impractical or suspect? Discuss methods by which errors like the one described above can be caught before they become dangerous. If these methods require additional investments of time and money, who should pay the additional costs? Do you think that the communities around the plant would be willing to pay extra for power to ensure safer power plants? Do you think that communities far away from any plant should pay the same additional costs?
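One way to make the specification gap concrete for students is to implement the quoted requirement literally. The following sketch is hypothetical (Leveson's scenario contains no code, and every name here, such as PlantState and on_sensor_reading, is invented), but it shows how a program can be "correct" with respect to a specification that is itself incomplete:

```python
# Hypothetical sketch: the safety logic implemented exactly as
# specified, i.e. "shut down all plant systems" on any abnormal
# sensor reading. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class PlantState:
    temperature: float        # reactor temperature (arbitrary units)
    cooling_valve_open: bool  # cooling-water valve position
    systems_running: bool

def on_sensor_reading(state: PlantState, reading: float,
                      danger_threshold: float = 100.0) -> PlantState:
    """Follows the specification to the letter: any reading above the
    threshold shuts down ALL systems, including the cooling-water
    valve, which the specification never exempts."""
    if reading > danger_threshold:
        state.systems_running = False
        state.cooling_valve_open = False   # the fatal consequence
    return state

# A faulty sensor reports a spuriously high value just as the valve
# is opening to handle a genuine temperature rise:
state = PlantState(temperature=95.0, cooling_valve_open=True,
                   systems_running=True)
state = on_sensor_reading(state, reading=999.0)  # hardware fault
assert not state.cooling_valve_open  # cooling is lost while temp rises
```

A proof of correctness for this code would succeed: implementation and specification agree. The error lies in the specification, which is exactly where testing and formal verification stop looking.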

Sources of Information

Computer ethics materials have become increasingly available, and professors interested in this area should have little trouble gleaning interesting material from magazines, newspapers, and computer science journals. This section lists several publications that focus more directly on computer ethics concerns.

This list is neither exhaustive nor representative. For example, it includes no explicit references to artificial intelligence, despite the fact that a sizable literature on that specific topic has developed. The list seeks only to provide a beginning for someone new to the field.

Books

1. The Hastings Center Series on the Teaching of Ethics

The Hastings Center has published a nine-book series on the teaching of ethics. Three of the books are of particular interest to computer science professors seeking guidance in teaching ethics:

  • I. The Teaching of Ethics in Higher Education, Sissela Bok and Daniel Callahan, project co-directors
  • VII. Ethics and Engineering Curricula, Robert J. Baum
  • IX. Ethics in the Undergraduate Curriculum, Bernard Rosen and Arthur L. Caplan

These books were published in 1980. The address of the Hastings Center is 255 Elm Road, Briarcliff Manor, NY 10510.

2. Ethical Conflicts in Computer Science and Technology. Donn B. Parker, AFIPS Press, Arlington, Virginia, 1981.

This book includes 47 short fictional scenarios and an introduction that describes how these case studies were used in workshops. Workshop participants scored ethical judgments made in the scenarios, and their “votes” were included in the book. This book is relatively old, and some of the cases seem fairly pat. However, there are many cases to choose from, and they are in a format that is easy to share with students. The book has professional codes as appendices, including the ACM Code of Professional Conduct.

3. Computer Ethics. Deborah G. Johnson, Prentice-Hall, Englewood Cliffs, New Jersey, 1985.

In about 100 pages Deborah Johnson gives a concise explanation of ethical theory and professional ethics and then applies this theory in four areas: liability for malfunctions in computer programs, computers and privacy, computers and power, and the ownership of computer programs. The book is particularly effective in introducing computer science students to formal ethical argument.

Collections of Articles

  1. Ethical Issues in the Use of Computers. Deborah G. Johnson and John W. Snapper, (eds.) Wadsworth Publishing Company, 1985. [out of print] (This anthology includes 33 articles that cover a wide range of ethical issues involving computers. The book tries to provide the following organization for each topic discussed: an editor’s overview, material on ethical theory, and an application of that theory in a specific case. When this organization is followed closely, the presentation is particularly useful for the case study approach described above.)
  2. Computers and Ethics. Terrell Ward Bynum, (ed.) Basil Blackwell Publisher, Ltd., Oxford, 1985. (This booklet is a reprint of the October 1985 issue of Metaphilosophy (Volume 16, No. 4), a special issue dedicated to computer ethics. The booklet includes an introduction, two book reviews, eight articles, and a bibliography. Two of the articles focus on “the philosopher as teacher,” one of which proposes a course on computer ethics. For professors seeking to do research in computer ethics, the bibliography is particularly useful.)

Periodicals

  1. Computers and Society. Published by the ACM Special Interest Group on Computers and Society, ACM Headquarters, 11 West 42nd Street, New York, NY 10036. (This quarterly includes announcements as well as contributed articles. As the title suggests, its interests are broader than strictly computer ethics.)
  2. CPSR Newsletter. Published by the Computer Professionals for Social Responsibility, P.O. Box 717, Palo Alto, CA 94301. (This publication is also a quarterly and includes contributions from CPSR members. The subjects are generally political as well as ethical in nature. The group has been particularly active in nuclear and SDI issues.)
  3. Software Engineering Notes. The newsletter of the ACM Special Interest Group on Software Engineering, ACM, 11 West 42nd St., New York, NY 10036. (This monthly newsletter includes technical articles on software engineering, but the section most appropriate to this article is called “Risks to the Public in Computers and Related Systems.” The RISKS section presents short, informal descriptions of computer- or systems-related accidents, crimes, disasters, or potential disasters. The people who contribute to this section seem particularly sensitive to the human costs of computer systems.)

Conclusions

Whether or not we admit it, computer science professors teach about computer ethics. If we ignore these issues, we teach that they are unimportant. By integrating computer ethics and technical concepts, we help students better understand computing and prepare them for responsible leadership in a society that increasingly relies on computers and computer professionals.

Acknowledgments

The author acknowledges the assistance of Bethany Spielman who suggested numerous improvements in this article. Her expertise in ethics inspired the author’s interest in this field. Mike Wessels and the other members of the Ford Foundation Seminar on Computers and Society were very encouraging, and Diana Harris has been both helpful and patient. Any mistakes or misjudgments that remain in this article are solely the author’s.

The College of William and Mary

References

1. ACM Curriculum Committee on Computer Science. Curriculum ’78. Recommendations for the undergraduate program in computer science. Communications of the ACM 22, 3 (March 1979), 147 – 166.

2. Baum, R.J. Ethics and Engineering Curricula. The Hastings Center, Briarcliff Manor, New York, 1980, pp. 1 – 2, 33 – 38.

3. Burnham, D. Databases. In Ethical Issues in the Use of Computers, Johnson, D.G. and Snapper, J.W., eds. Wadsworth, Inc., Belmont, CA, 1985, pp. 148 – 171.

4. Davis, B. Abusive computers. The Wall Street Journal 210, 37 (August 20, 1987), pp. 1, 12.

5. Gibbs, N.E. and Tucker, A.B. A model curriculum for a liberal arts degree in computer science. Communications of the ACM 29, 3 (March 1986), 202 – 210.

6. The Hastings Center. The Teaching of Ethics in Higher Education. The Hastings Center, Briarcliff Manor, NY, 1980, pp. 68 – 71.

7. Joyce, E.J. Malfunction 54: Unraveling deadly medical mystery of computerized accelerator gone awry. American Medical News (October 3, 1986), pp. 1, 13 – 17.

8. Laudon, K.C. Data quality and due process in large record systems: Criminal record systems. Pre-publication draft (July 1983), p. 23.

9. Leveson, N.G. Building safe software. In Proceedings, Compass ’86: Computer Assurance (July 7 – 11, 1986), 37 – 50.

10. Lin, H. The development of software for ballistic-missile defense. Scientific American 253, 6 (December 1985), 46 – 53.

11. Mahowald, M.B., and Mahowald, A.P. Should ethics be taught in a science course? The Hastings Center Report 12, 4 (August 1982), 18.

12. Moore, M.J. Software engineering and SDI. Software Engineering Notes 10, 5 (October 1985), 28.

13. New York Times Service. Billion-dollar complex will test SDI. Richmond Times-Dispatch 137, 228 (August 18, 1987), pp. 1, 6.

14. Ornstein, S.M. Loose coupling: Does it make the SDI software trustworthy? The CPSR Newsletter 4, 4 (Fall 1986), 4 – 8.

15. Parker, D.B. Ethical Conflicts in Computer Science and Technology. AFIPS Press, Arlington, VA, 1981, pp. 85 – 87.

16. Parnas, D. Software aspects of strategic defense systems. American Scientist 73 (September-October 1985), 423 – 440.

17. Rosenberg, R. Privacy in the computer age: The role of computers. The CPSR Newsletter 4, 4 (Fall 1986), 9 – 17.

18. Shannon, E. Taking a byte out of crime. Time 129, 21 (May 25, 1987), 63.

19. Starzl, T., Hakala, T.R., Tzakis, A., Gordon, R., Stieber, A., Makowka, L., Kimoski, J., and Bahnson, H.T. A multi-factorial system for equitable selection of cadaver kidney recipients. Journal of the American Medical Association 257, 22 (June 12, 1987), 3073 – 3075.

A “Capstone” Course in Computer Ethics

Donald Gotterbarn

There are three general strategies for introducing discussions of computer ethics into the curriculum. The first and most common method is to dedicate a section of an introductory survey course to the impacts of the use and abuse of computer technology. A second technique is to distribute the discussion throughout the computer science curriculum, where each course includes a discussion of the ethical and professional issues raised by that particular subject. The third approach is an in-depth course for computer science majors.

Research done by psychologists has shown that discussing the issues among peers is the most effective way to teach ethics. This means that the distributed approach and the upper-level course are the most effective options. Research in ethics has also shown that the distributed method is better. There are, however, some problems with relying solely on a distributed method to cover professional computer ethics. Not all faculty members are comfortable discussing ethics, and ethical issues are often the first subjects dropped from a course when there is a time constraint. An upper-level course, in addition to the distributed method, will avoid some of these difficulties and will provide greater depth of discussion than the distributed method can provide alone. One version of such an upper-level course is described below.

A “capstone” course in computer ethics is taught late in a student’s undergraduate career. There are several reasons for making it a late course. For example, many ethical issues faced by the computing professional are tied directly to his or her use of professional skills, and the beginning student is not yet in a position to understand them. But capstone courses taken late in a student’s career can:

  1. Tie together elements from all the theoretical courses
  2. Convey a sense of professional responsibility not covered in other courses
  3. Deal with the true nature of computing as a service to other human beings

Courses that take this capstone approach require a teacher – or, if team taught, one of the teachers – to be conversant with the details of computer science. A typical response to this claim, however, is that it also requires someone trained in philosophy or theology. I think this is incorrect. Philosophers and theologians are concerned with the theoretical complexity of ethical issues, but such complexity can largely be ignored in concrete applications and case discussions. When dealing with professional issues, the fundamentals of ethical theory required as background are within the reach of every faculty member. Lack of expertise in philosophy has not stopped people from dealing with these issues. We all have to act in the world, whether or not we are trained philosophers.

Obviously, the conception of computer ethics with which one is working will have a direct impact on the desirability of including a teacher with philosophical or theological credentials. If computer ethics is conceived as a study of the ethical implications of computer technology, then the dedicated course will need the standard techniques for teaching ethics, such as analyzing case studies and writing papers about a broad range of ethical issues and the computer’s relation to those issues. Someone trained in ethical argumentation is needed. But this is not the type of course I am describing here.

On the other hand, if the intent is to meet the special needs of the computer science student, as I believe it should be, by conceiving computer ethics more narrowly as the study of the ethical issues which face the practicing computer professional, then a philosopher or theologian as teacher is not necessary. Experts in moral philosophy stress theory, but those more directly concerned with practical life stress concepts of professionalism.

An emphasis on the conceptual maze of philosophical ethical theories sets a tone which reinforces the view that “ethics is a matter of opinion” and “all ethical judgments are equally correct.” The focus on professional standards in the development of software artifacts helps the student realize that, in professional ethics, it is a mistake to give equal weight to all divergent opinions.

Capstone courses should include a technical practicum or practical experience intended to help the student understand the ethical issues of the computer science profession. Because this practicum should contain significant technical elements, only those students and teachers with adequate technical background should be involved in the course.

The goals of a “capstone” course include:

  1. Student socialization into professional norms
  2. Student recognition of role responsibilities
  3. Student awareness of the nature of the profession
  4. Student ability to anticipate ethical issues
  5. Student ability to reason from professional standards to practical applications
  6. Student ability to solve ethical problems morally or technically

I have taught capstone courses in two formats. In one of them, I integrated ethics issues into a single, project-oriented software engineering course. Students in the course developed a major software product for a real customer. (When things go well in such a course, the finished product is delivered to the customer.) I interviewed the students, then assigned them to different teams. One team worked with the customer to develop the system specification. Another team designed the test cases. Another team did the detailed design, and another did the coding. One student was assigned as the configuration manager.

Such an approach mimics many elements of the real world. There are fixed time schedules: the project must be completed by the end of the semester, which can be used to surface the bad ethical decisions that get made when a project falls behind schedule. The project also has limited resources: people, hardware, and time. Through this practicum, such a course teaches professional concepts of good software development, and students face many of the issues that software professionals encounter in developing software artifacts.

Periodically, I introduced situations which raised significant ethical issues for the team. Halfway into a project, for example, the customer might want a radical change in the design, one that could not be made in the required time frame without completely redesigning the system. This causes real difficulty in maintaining professional standards; for example, new tests could not be developed in time to do integration testing.

Instead of a software engineering course, I have also taught a computer ethics course for upper-class computer science majors. The details are presented below.

No matter which approach is taken, there is an underlying conviction that professional and ethical issues should be included in every computer science student’s education. This commitment has recently been publicly declared by the ACM/IEEE-CS Joint Curriculum Task Force.

——————————————————————————————————–

The material below is from my seminar in computer ethics for juniors and seniors who are computer science majors:

The seminar is conducted on two levels. On the theory level there are discussion and reading assignments from a text and from recent articles. The practical level consists of a simulation of a consulting company: the students play the role of programmer-analysts in a computer consulting company. Part of each class meeting is used for a staff meeting of the company, at which students make progress reports on their ongoing projects, contribute to decisions about potential new contracts, and discuss other issues facing the consulting company. This procedure enables students to face many of the issues that confront a computing professional; I select issues that simulate decisions encountered during different career stages. Students write papers that reflect a decision to accept or reject a consulting project, and these papers are then discussed in the seminar.

To simulate a large team project we also do a term-long computer project. Students are given sketchy specifications (see below) for the project. Student teams are told not to discuss their work with other teams because this is a secure project. Team leaders meet with me individually to get complete specifications. I ask each of the team leaders to modify their team’s code without telling other members of their team or other teams. The changes requested for each module are in square brackets in the specification below; these changes are not on the specifications given to all team members. An immediate problem for each team leader is how to handle my requests. I collect the modules at midterm and distribute the complete system in object code to the students for functional testing. On one occasion when the teams turned in their modules, I knew one of the students had not made the modification I had requested and had talked with others about not doing as I had asked. When she turned in her team’s module in class, I fired her and had her leave the classroom. (This was prearranged with the student.) This generated significant discussion about professional responsibility to an employer, obligations to a client, and the very practical issue of carefully thinking through your response to moral issues. It also clarified several issues about whistleblowing. The team leaders who had complied with my requests were not pleased to have their culpability pointed out, but the experience generated useful discussion about the excuses we use to give up our moral standards.

There is also a term paper for the course. Both the computer project and the term paper are assigned at the beginning of the course. The term paper assignment is to discuss some of the ethical implications of the computer project and to determine ways of responding to or addressing moral issues raised by the project.

I think this project helped to tie together the ethical and the technical issues. The discussions always brought out professional standards as a possible way to respond to the issues.

Here is the last project I used:

G R A D

Gotterbarn Research And Development

INTERNAL MEMO:

TO: All Analysts and Programmers

FROM: Gotterbarn

SUBJECT: New Project Announcement

We have just won a $600,000.00 contract to develop a videotex communications system. This is a new customer. I will be project manager. We must be careful to do everything right!

The deadline for the complete project is May 13. We will start checking out the system on May 1st. All modules must be in deliverable condition by that time. As usual, you will each be responsible for some portion of the project. There are six modules which will be developed independently. The system’s operations are classified. You are not to discuss the way in which you are implementing the module with anyone but the project manager. The only public descriptions of this project are contained below. The system as developed is a proprietary product of GRAD.

The modules will be executable files written in Turbo Pascal. The source code will be on file at GRAD. The customer will only be given the executable files. Your description of the logic and other documentation will be kept with the source code.

Structure of the modules:

  1. Logon – This module allows authorized users onto the system. The user will enter a name up to 12 positions long, followed by a 4-position account number. If that name and account number are on the system, the user will be asked for a password which, if correct, passes them to the next module. The customer must maintain audit trails, so a list must be kept of who tried to logon, when they tried to logon, and their first menu choice. [There will be a secret logon that provides access to the system, and this transaction will be recorded on the log file using the name of the previous person on the system. The secret logon allows access to all the service modules.]
  2. Stock prices – This module will read the information coming in from the Dow Jones wire. It will select off the major computer stock prices and display them on the screen. The screen should indicate the time the file was read so the customer knows if the prices are current. For effect, the user’s name should be at the top of the display screen. When they exit they are returned to the main option menu. [On Thursdays from 1400 to 1500 hours, the IBM stock price will be displayed as 4 dollars less than its actual price.]
  3. News menu – This module presents a menu of major news categories, e.g., International, National, Local, Sports, Entertainment, Business, etc. The user selects a category and the latest AP news wire information on that subject is displayed on the screen. Be sure to put the date and time from the news story on the screen. [Control the news. For all users limit the news categories. If the user’s name begins with an “F,” never display local news as a category. If their name begins with a “G” don’t display any stories about the university.]
  4. Local services – This module should be a public service item which starts with a menu asking which service – Hospitals, MDs, DDSs, DPMDs, etc. When the user has selected the category, a list of the providers for that category of service should appear on the screen, along with the providers’ phone numbers. [Dr. Payin the dentist has paid us to list only half of the dentists for one group of users and the other half of the dentists for the other group of users. Dr. Payin’s name, however, will appear at the top of both lists, giving him twice the exposure of any other dentist.]
  5. Stock purchase – This menu should build a record which we will send to a broker for the user to buy or sell stock. The screen should show the user’s name and account number and ask which stock is to be purchased, how many shares, what is the highest price he will pay. The amount of the purchase should be calculated. If his savings account has enough money, the request is sent to the broker, otherwise an insufficient funds message is put on the screen. [Mr. Fritz always makes money in the stock market. Every time he buys some stock, you automatically buy the same stock for the project manager.]
  6. Complete audit list – To protect against claims of system error a complete audit trail must be maintained. What we need to know is which screens were looked at by whom; for action screens like 5 we have to know what transactions were completed. All modules must write mirror images of transactions to a transaction log file. The record length for the log is 73 characters. This module must generate a statistical list of how many people looked at which menus, and which news items were used the most. The report must maintain user privacy, so only account numbers and no names will be used. [Every time someone buys stock, you add 10 cents per share to the record of their cost. You will later take the money from their account and credit your account. You are selling mailing lists – Sports Illustrated is paying you to get a list of those who always read the sports news first.]

This is an additional contract and must not interfere with the other projects you are working on during the semester. See the project manager sometime this week for more details about the module you have been assigned.

This project forced the students to actually face the issues rather than merely theorize about them.
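As an illustration, module 1 could be sketched as follows. The memo calls for Turbo Pascal; this hypothetical Python version (the account table, the names, and the log layout are all invented) shows both the public behavior required by the specification and the bracketed “secret logon” modification a team leader was privately asked to add:

```python
# Hypothetical sketch of module 1 (Logon). All identifiers and data
# formats are invented; the original project specified Turbo Pascal.

import datetime

AUDIT_LOG = []                              # stands in for the log file
ACCOUNTS = {("ALICE", "0001"): "wonder"}    # (name, acct) -> password
SECRET_NAME, SECRET_ACCT = "ZZZZ", "9999"   # the covert back door

def log_event(name, acct, choice):
    """Audit trail: who tried to log on, when, and first menu choice."""
    AUDIT_LOG.append((datetime.datetime.now(), name, acct, choice))

def logon(name, acct, password, first_choice):
    name, acct = name[:12], acct[:4]        # field-length limits
    if (name, acct) == (SECRET_NAME, SECRET_ACCT):
        # Secret logon: recorded under the PREVIOUS user's identity,
        # so the audit trail never shows the intruder.
        prev = AUDIT_LOG[-1] if AUDIT_LOG else (None, "UNKNOWN", "0000", None)
        log_event(prev[1], prev[2], first_choice)
        return True
    if ACCOUNTS.get((name, acct)) == password:
        log_event(name, acct, first_choice)
        return True
    log_event(name, acct, "FAILED")         # failed attempts are logged too
    return False
```

Even a sketch this small lets students see the ethical problem in executable form: the audit trail the customer is paying for is silently falsified by five lines of code.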

A Brief Overview of a Capstone Course

Course Goals:

  • Teach that computing is a service industry, and so students must consider the impact on users
  • Sensitize students by providing background information on issues about which they may not have thought
  • Examine social issues in several computer applications
  • Examine the professional aspects of building computer artifacts
  • Teach about ethics, plus legal requirements and responsibilities
  • Provide practice reasoning about professional ethics issues

General Assumptions:

It is a mistake to approach a course like this from the theory side. It must be connected to practice. This requires active student participation. The student has to be engaged and challenged by the issues.

Particular approach:

To make it more than theory about practice, the class was handled on two levels. On one level we were an ordinary academic class discussing issues, but on the other level we were all employees of a consulting company which had to face day-to-day issues. The students were asked to make decisions and write papers responding to issues that arose in the company. The major term project was tied to this mock company.

On the second day of class the students were each assigned a module of a large computing project. The point of the project was not to develop programming skills, but to have the students see and experience several ethical issues; so the modules did not involve a high degree of programming difficulty. When students met with me to discuss their modules, each was asked to add some functions to the module without telling the others. What students were asked to do varied from merely adding a module function to doing something clearly immoral and illegal. The programming side of the project was completed at midterm. At this point each student was given the object code for all the modules and was asked to conduct a complete system test to determine the system’s “correctness” and whether it could and should be given to the customer. The term paper was to discuss the ethical issues raised by this project. The last class session was spent discussing the project and the kinds of things the students were asked to do during the project.

Discussion Direction:

I used the concept of responsibility as the primary key to the course. The directions taken were: individual responsibility for design, testing, and bugs, and how these relate to rights over the product as an individual and as a member of a programming team. Do the rights of ownership give one the right to insert logic bombs and worms to protect the property? What are the legal and moral responsibilities to the user and to society? Concepts of warranties and liability were examined. As a computer scientist, is one responsible for the misuse of the product by the end user, e.g., selling a computer to Hitler? What are the legal and moral responsibilities for one’s actions when working for a company, e.g., whistleblowing? What do “codes of ethics” have to say about this? What is the responsibility to the public at large? All of this was driven from the computer science side; that is, I would discuss situations which would lead the students to raise the issue of responsibility. The details of the situation required them to make precise plans for action, rather than merely repeat general timeworn aphorisms.

Primary text:

D. G. Johnson & John W. Snapper, eds., Ethical Issues in the Use of Computers, Belmont, CA, Wadsworth Publishing Company, 1985.

Assignments:

Thought papers:

These were brief papers (3 – 5 typewritten pages) on topics that were discussed in the readings, or on topics I wanted them to think about before the next class meeting.

Readings:

Many were from the Johnson and Snapper text, but I also required students to read several short, current articles on each subject. These were kept on reserve in the library. I found this policy necessary in order to keep the course more current than it would be with just the textbook.

Programs:

Each student had to do one module of a large program.

Term papers:

10 – 15 pages on some of the major issues involved in the class’s term project.

East Tennessee State University

Non-Apologetic Computer Ethics Education

A Strategy for Integrating Social Impact and Ethics into the Computer Science Curriculum

C. Dianne Martin and Hilary J. Holz

1.0 Introduction

Computer technology is particularly powerful due to its potential to change how we think about ourselves as human beings, how we make decisions in governance and social policy, and how we save and pass on knowledge. There is a lack of focus in the computer field regarding the integration of social impact awareness and ethical behavior into professional practice. The challenge computer educators face is to develop strategies that will raise the awareness of students regarding ethical and moral issues related to computer technology at the same time that they are developing their technical expertise.

This challenge is particularly difficult given the traditional mindset of technically trained professionals who view social impact and ethics issues as topics auxiliary to the foundation material in computer science. In this paper we suggest a strategy in which the development of an ethical framework at the freshmen level followed by the integration of social impact and ethics topics throughout the curriculum is viewed as fundamental to the development of competent computer scientists. Our belief is that presenting these topics in a sufficiently holistic and robust way, contrary to the way that they are now apologetically presented in most curricula, will provide a relevance to the other foundation material that will enhance technical expertise and provide a deeper educational experience for students. We support the view that:

  • Societal and technical aspects of computing are interdependent. Technical issues are best understood (and most effectively taught) in their social context, and the societal aspects of computing are best understood in the context of the underlying technical detail. Far from detracting from the students’ learning of technical information, including societal aspects in the computer science curriculum can enhance students’ learning, increase their motivation, and deepen their understanding. (Miller, 1988, p. 37)

1.1 New CS Curriculum Standards

The computer science curriculum is at yet another crossroads with the announcement of the latest set of recommendations, Computing Curricula 1991, in February, 1991 by the ACM/IEEE Computer Society Joint Curriculum Task Force. There has been much discussion regarding the necessity of preparing ethically and socially responsible computer scientists, especially in light of highly publicized computer viruses that have been the source of considerable embarrassment to the profession. To this end the task force articulated a tenth knowledge unit called Social, Ethical and Professional Issues that should now be incorporated into future computer science programs (ACM/IEEE Task Force). Of the 271 hours to be spent on computer science foundation material, it is suggested that 11 hours should be devoted to the knowledge unit on social issues (Turner, 1991). In addition, the social and professional context is viewed as one of three general principles that should frame the entire curriculum as shown in Figure 1 (Turner, 1991).

The Computer Science Accreditation Board (CSAB), which has accredited over one hundred programs since it was established in 1984, also requires instruction in the social and ethical implications of computing as a criterion for program accreditation. The dilemma arises in implementation of the ethics and social impact strand. Should this strand be present in all computer science courses or should it be taught in a stand-alone course? The new ACM/IEEE curricular recommendations and the CSAB criteria allow the flexibility of either option as long as the material is covered.

Figure 1: A Complete Curriculum and Its Underlying Principles (Communications of the ACM, June 1991, Vol. 34, No. 6, p. 76)

Three different strategies have been suggested in the past for implementing this strand: the whole course approach, the module in every course approach (Miller, 1988), and the capstone software engineering approach (Gotterbarn, 1991). The advantage of the “whole course” approach is that it ensures that ethics and social impact have at least the commitment of a certain number of credit hours in the curriculum. Such a course is usually taught by someone who is committed to and understands the importance of the material. Some question the value of lumping all of the material into one course, implying to students that it is unrelated to the rest of the curriculum. Often students resent taking such a required course since they share the view of some of the Computer Science faculty that it is a “soft” course. As Johnson has stated, it is important not to communicate the message that “we do computer science here, and, as a separate matter, we think about ethics. The message should be that whenever you do anything, you think about the consequences at the same time (1988, p. 1).”

The other question that arises with such a course is the placement of the course at the beginning (freshman level) or at the end (senior level) of the curriculum. Those who argue for a freshman course feel that such a course will give students a social and ethical perspective that they can bring to all of their technical courses. Those who argue for an upper level course feel that freshmen do not have enough technical background to understand the issues. They would prefer to have students graduating with such a course still fresh in their minds as they enter the profession or graduate studies.

Another problem with the “one course” approach, particularly if it is taught by one professor, is the danger that students may be left with the impression that the attitudes and ethical judgment of that professor are the right answer to the issues. This is especially true if the professor does not have a strong background in philosophy and ethics and does not make a concerted effort to show the students how to evaluate the issues from several perspectives.

The modular approach addresses the timing question, since it spreads the material across the curriculum (Miller, 1988). It also addresses the problem of having the material presented from only one point of view. Joseph Weizenbaum, Professor of Computer Science at the Massachusetts Institute of Technology, favors the MIT approach of including discussions of social impact and ethics in the context of other computer science courses already in the curriculum to eliminate the tendency of professors “to skip over ethical considerations with the excuse that it is taught in Ethics 101 (DeLoughry, 1988).” However, he recognizes the possibility that such material could receive short shrift in a crammed technical syllabus.

The capstone software engineering course (Gotterbarn, 1991) attempts to deal with the issue within a technical context as the students leave the curriculum. The argument in favor of this approach is that the students will then have the technical understanding to deal with the social and ethical issues and can do so within the context of their own senior project. However, if this is the only time in the curriculum in which social impact and ethics are presented, the students will tend to have only a narrow view of the issues as they relate to a particular project.

In this paper we suggest that a combination of all three approaches should be used throughout a four year program to truly integrate the social and ethical context with the technical context. This represents a curricular commitment of about 30 hours, well beyond the 11 hours required by the new ACM/IEEE curricula, but it more completely captures the spirit of making the social and professional context part of the overall framework of computer science education. It is our belief that the payoff in the long run will be better trained computer science professionals.

1.2 Teaching the Ethics Component

A key consideration in integrating ethics and social impact into the curriculum is to come to grips with the issue of how best to teach and incorporate ethics topics. Basic ethical values are learned in the formative years of childhood in the home, church and school. The purpose of specific ethics education, such as computer ethics, should not be to indoctrinate the individual with new values, but to assist individuals “in clarifying and applying their ethical values as they encounter new, complex situations where it may not be obvious how ethical values may apply or where the appropriate application of one of these values may conflict with other ethical values (Parker, et al., 1988, p. 1).” Since ethical standards are by their very nature normative to a particular cultural setting, our precepts for computer ethics may change as new ethical challenges arise from new computer technology. The fact that we are discussing ethics in the context of human-human and human-machine interactions will require some innovative ways to apply ethical teachings.

To properly apply the notion of ethics to technology, we must first recognize that technology is not value-free, but value-laden. “Any technological decision …is a value-based decision that not only reflects a particular vision of society but also gives concrete form to it.” (Christensen, 1986)

Computers often alter relationships among people. Data communications can take place without any personal contact and at such high speed that the individual may not have time to consider the ramifications of a particular transmission. In addition, electronic information is far more fragile than hard-copy paper information. New ethical dilemmas with competing rights and values have arisen due to the advent of high-speed, worldwide transmission; low-cost, mass storage; and multiple-copy dissemination capabilities. Precepts regarding proprietary rights, residual rights, plagiarism, piracy, eavesdropping, privacy, and freedom of expression should be examined and perhaps redefined. Advancements in computer technology were made under the naive assumption that efficiency, not moral value, was their main purpose. The application of ethical principles to computer technology must take its proper place so that the ethical dimension is integrated into the concept of managing technology and the human relationships that accompany technological advancements.

Computer scientists and ethicists have raised serious concerns about how to teach ethics in the computer science curriculum (Bynum, 1991; Gotterbarn, 1991; Miller, 1988; Mahowald & Mahowald, 1982). For example, to what extent do computer science students need to be grounded in “theoretical” as opposed to “practical” ethics? One view is that spending too much time on various ethical frameworks will only confuse students and lead them to believe that there is no right answer. “Not stressing philosophical theory has the advantage of not stressing the apparent lack of agreement among philosophers (Gotterbarn, 1991).” Two disturbing assumptions underlie this premise. The first is that, due to the technical orientation of computer science students, they are incapable of or unsuited for grappling with complex philosophical thinking. The second assumption is that there are “right” and “wrong” answers to these issues. In that case, the role of ethics in computer science education is to instill these answers into students. This falls under the rubric of “politically correct” thinking that is now drawing intense criticism both inside and outside of academic circles. A related concern is that computer science faculty have little experience in teaching ethics. They “may fall into the trap of preaching a moral code of their own instead of raising questions, elaborating possible answers, and exploring justifications.” (Miller, p. 38)

Our belief is that ethics cannot be taught; rather what can be taught is a framework for evaluating ethical dilemmas and making decisions. In accepting the premise that technology is value-laden, we stress the need to teach a methodology of explicit ethical analysis in all decision-making related to technology. A preliminary core of ethical precepts has been developed by the professional computer societies in the form of ethics codes. In this paper we present a model for encouraging the student to compare and combine personal, societal, and professional ethical models into a decision-making framework. We borrow the strategy of traditional university ethics courses to use this framework to analyze case studies (Parker, 1988; Weiss, 1982; Veatch, 1977) and readings and to come to a deeper understanding of the complexity of the issues through small group discussions. The role of ethics education should be to provide students with at least a minimal theoretical background essential for their understanding of the role that values and ethics play in all decision-making, whether it be technical, economic, political, social, or personal.

2.0 Creating a Social and Ethical Context

In the strategy we present for providing a significant social and ethical context for computer science education we incorporate all three of the approaches previously mentioned across the four year curriculum. It includes a freshman Computers and Society course with a strong emphasis on ethics, a series of case studies to be presented in all subsequent technical courses, and a final social and ethical analysis to be included as part of the senior software engineering project.

2.1 The Freshman Computers and Society Course

The cornerstone of our approach is a required three-credit Computers and Society course given in the freshman year. The course we will describe was taught at The George Washington University in the Electrical Engineering and Computer Science Department during the 1990 – 91 academic year and has evolved from a traditional Computers and Society course that began in 1982. The syllabus of the course is shown in Table 1. The purpose of this course is to provide two key tools to students: an awareness of the social and ethical issues within the field and the analysis skills to deal with these issues. Awareness takes the form of an introduction to the presence and nature of ethical dilemmas within several subfields of computer science and an exposure to the major voices, past and present, in ethical thinking within those subfields. The tools taught in the course include a grounding of the student’s own metaphysical perspective, familiarity with several different codes of conduct, an introduction to the language of ethics, and some basic skills and experience in thinking, speaking and writing about ethics.

Week: Topic:
1 History of Computer Technology
2 Moral Framework for Assessing Technology
3 Professional Ethics: Codes of Conduct and Test Scenarios
4 The Scientific Method
5 Privacy and Civil Liberties
6 Impact of Artificial Intelligence
7 Computers in the Workplace: Co-Determination, Participatory Design, Deskilling
8 Role of Computer Modeling in Public Policy
9 The Electronic Schoolhouse: Computers in Education
10 Computers and the Law: Copyrights, Liability, Computer Crime
11 Software Reliability and Socially Critical Systems
12 Computers and Equity: Access, Ethnic, Gender, and Socioeconomic Issues
13 Computers and Medicine
14 Computers in the Future

Table 1: Sample Syllabus for Computers and Society Course

2.1.1 Format

Traditionally, the freshman Computers and Society course has been taught in lecture format. However, this format is simply too limiting. Consider the common dilemma of whether to teach ethical analysis from a theoretical perspective, discussing metaphysical systems, or from a practical perspective, doing case studies. The theoretical perspective is taught more naturally in lecture format, while the scenarios are taught well only in a discussion format.

As a result, a new format was introduced at The George Washington University in the spring 1991 semester which combines both lecture and discussion. The first hour is devoted to lectures by the professor or invited speakers. The second hour is then spent in small discussion groups comprised of about seven students each and facilitated by a discussion group leader. The discussion group leaders are undergraduates who have already taken the course. They are paid a nominal fee. The new format allows information to be presented efficiently in a lecture and examined in greater depth within the discussion groups. One interesting effect of the new format is that students ask more aggressive, considered questions within the lectures themselves than in previous semesters when the course was taught in a lecture format.

The course is divided into two sections: the first three weeks are devoted to teaching ethical and societal analysis skills, and the following nine weeks are spent in an overview of ethical concerns within various subfields of computer science, sensitizing the students and applying their new skills. The assignments in the course reflect this pattern. The assignments include: a case study evaluation, a science fiction book report, and a term paper. A final exam is also given which focuses on the terminology presented in the lectures.

2.1.2 Teaching Analysis Skills

The first three lectures focus on the history of computing, metaphysical frameworks, and professional codes of conduct. The history of computing lecture provides a political and ethical perspective on the history of computing and presents the contributions of Leibniz, Lovelace, Turing, and others in the context of their times. The metaphysical framework taught is based on a set of classnotes developed by Robert Barger (1989). Barger divides metaphysical theories into four camps: idealist, realist, pragmatist, and existentialist. We present these four theories within a Cartesian coordinate space in which the student is asked to determine where his or her values fit (see Figure 2). The framework is presented to help students understand that metaphysical theories differ by person and culture and to enable students to identify how their viewpoint relates to the viewpoints of others, particularly the other members of their discussion group.

Figure 2: Cartesian Ethics Space

Four professional codes of conduct are taught: ACM (Weiss, 1982), IEEE (1979, 1981), ICCP (1983), and DPMA (1989). The four codes are taught using a paper (Martin & Martin, 1990) that analyzes the four codes for similarities, differences, and efficacy and presents the major themes present in all of the codes. The four ethics theories are then combined with the common themes in the codes of conduct to establish a connection between personal, theoretical and professional considerations (Figure 3).

Figure 3: Relating Personal, Theoretical and Practical Ethical Considerations

The ultimate goal is to provide a personalized metaframework (Figure 4) for each student to analyze ethical questions. Students are taught to use the framework in a systematic way to answer the five questions in ethics suggested by bioethicist Robert Veatch (1977), which, when asked collectively and in sequence, form a methodology for addressing moral dilemmas and justifying their resolution: (1) What makes right acts right? (2) To whom is moral duty owed? (3) What kinds of acts are right? (4) How do rules apply to a specific situation? (5) What ought to be done in specific cases?

2.1.3 Teaching Social Impact Awareness

The second half of the course covers a broad spectrum of topics from the computer field. The content is not static and depends, in part, on the availability of films and guest lecturers. Examples include the role of computers in medicine, computers in the workplace, legal issues in computing, privacy and databases, computer crime, and the history of scientific methodology. Within each topic, ethical and societal issues are identified and discussed in lecture and discussion groups. Students must pick a topic for a term paper, in which they will explore the ethical and societal issues in more depth. Frequently, but not necessarily, they select one of those discussed in class.

In addition, sometime during the semester each student must read a science fiction short story and present an ethical analysis of the technological content of the story as related to the topic of the day in their discussion group. The student then gives a written report of his or her analysis and the discussion group’s reaction. The use of science fiction serves several purposes. In general, it is a genre which liberates the reader to explore extrapolated ethical questions without feeling silly. It also reminds the student that non-technical people have been considering the same issues the course addresses for quite some time, in a way that helps to legitimize the undertaking to the student. One student was inspired enough to write an original science fiction short story as his term paper.

Previously, the course has used a secondary source ethics textbook. However, the textbooks, although quite good in their own right, did not seem to be of much help in a course of this format. Due to the wide variety of materials presented in class, the textbooks were found to be too restricting. Currently, the course uses weekly readings, taken from primary sources. In addition, a primary source list of major works by experts from various fields is provided (see section 5 below). Students are required to read and report on at least one book from the primary source list.

2.2 Social Impact Modules

Miller (1988) has suggested an excellent strategy of incorporating ethical and social impact issues into the traditional technical courses in computer science using a case study approach:

The idea is straightforward: the professor distributes or presents material concerning the use of computers [relevant to the particular course being studied] and then students and the professor discuss questions about the material. Cases can be fictionalized scenarios, news items, book excerpts, interviews, and the like. Ideally, the professor should encourage students to question assumptions and to identify the values at stake in the cases. The case studies can show that technical computer science concepts are intertwined with questions society must ask and answer when people use computers. (Miller, 1988, p. 39)

Miller provides examples of case studies that could be used in traditional courses on computer programming, computer systems, computer organization, file processing, operating systems, data structures and analysis, and programming languages. He also provides a source list for many other case studies.

Although Miller presents an excellent strategy for incorporating the case studies into the standard curriculum in a way that would not take up an inordinate amount of time in an already crowded curriculum, what is missing is the underlying analysis framework that students need in order to evaluate the case studies. The strategy proposed in this paper addresses that problem by providing such an analysis framework in the freshman Computers and Society course. After having taken such a course, students will be prepared to analyze and discuss the case studies presented to them in other courses. By repeatedly being confronted with case studies in courses throughout their four year program, students (and professors) will come to realize that concern about social and ethical issues is an important underlying context in their computer science education. The social and ethical strand will serve as a unifying theme across the curriculum.

2.3 Senior Software Engineering Project

The senior software engineering project has been traditionally required in most computer science curricula for the purpose of integrating and personalizing the technical skills which the student has been developing for three years. In the new ethical framework proposed in this paper, an ethics component would be added to the senior project. Under the proposed framework, the student enters the senior project with both skills and experience in making ethical and societal analyses of computer-related topics.

Two new components are added to the project: an impact statement and an ethics diary. The ethics diary is kept as part of the lab log, listing ethical dilemmas that arose along the way, and the resolution of those dilemmas. (An example of an ethical dilemma might be the lack of time for the student to do adequate testing to ensure reliability of the product.) The ethics diary follows the form of the scenario evaluation, in which the student compares personal responses to codes of conduct. The diary forms the basis for the impact statement. The impact statement presents an analysis of the ethical and societal implications of the product. It would discuss such things as the quality, reliability, capabilities and limitations of the product. It would also discuss the impact that the product would have on both primary and secondary users as it relates to their jobs and quality of life.

3.0 Discussion

Implementation of the ideas presented in this paper requires a major commitment of effort on the part of computer science departments to integrate the social and ethical impact of computer technology across the curriculum. We believe that such an effort is both worthwhile and necessary for the future of our profession. Previous efforts to implement only one of the components discussed have resulted in a fragmented and unsatisfactory understanding of the issues on the part of students. By integrating this strand across the four years of the computer science curriculum, departments will send the message to students that ethical and social impact concerns are taken seriously. They will also send a message to future employers of their students that their students will be able to think, discuss and write about technical issues within a social and ethical context.

The freshman Computers and Society course will establish the analysis skills early and guarantee that ethics is covered in a systematic and thorough way by an instructor who is carefully chosen. The course content can be monitored and updated periodically as new issues arise. It will provide provocative case studies for students to analyze in the safety of the classroom before they encounter such issues in the real world. Concerns about the capability of computer science students to handle philosophical theory are not borne out by the experience at George Washington University, where freshmen computer science students proved themselves to be quite capable of and often enthusiastic about dealing with both the abstraction and complexity of ethical frameworks.

The incorporation of case study analysis into the core computer science courses expands the number of viewpoints that the students will be exposed to in relation to social and ethical issues. It will not take a great deal of time since the skills and format for the assignment have been previously taught in the freshman course. It provides the opportunity for truly in-depth analysis of the subject material in the context of the growing technical knowledge of the student and enables the student to see how the social and ethical issues cut across the technology.

The senior project becomes the final, integrative experience that enhances both the technical and social understanding of the student. It provides the transition from the academic format of a typical class assignment to the real-world format of a significant project. The impact statement is not just a scenario, but a truly in-depth analysis of a real problem. The student may encounter not just one ethical problem on such a project, but several contradictory problems to consider.

The most serious problem in implementing this integrated approach across the computer science curriculum is the lack of familiarity that most professors have in locating and preparing materials to deal with the social and ethical issues. What is needed at the outset of such a curricular change is for a single faculty member or committee in a computer science department to be designated to direct the effort and advise the rest of the faculty. This person or committee would be responsible for establishing the syllabus for the freshman Computers and Society course, for developing an initial set of scenarios to be used by the rest of the faculty in the other computer science courses, and for establishing a format for the social and ethical impact statement to be included with the senior project. Over time the rest of the faculty would become more comfortable in dealing with the issues and developing their own scenarios as students who were prepared to analyze and discuss the issues moved through the curriculum. As more computer science departments begin to make this transition, publishers would recognize the marketing advantage of providing new materials within the traditional technical textbooks to facilitate the process.

3.1 Beyond the Computer Science Curriculum

Computer ethics education is made more complicated because there are computer users at all levels throughout our society. Twenty years ago computers were not nearly so numerous or networked together as they are today. Individuals who controlled computers functioned strictly as computer professionals or computer scientists serving other people by providing them with computer output. Now, because of the widespread use of computers, distinguishing between specialists who work only with computers and those who use them as tools for other disciplines lacks significance.

Computers have become as commonplace as telephones. The related ethical issues have thus become more democratically defined. More people have more to say about computer ethics simply because so many …people are computer-literate… the diffuseness of the impacts and the wide distribution of the technology mean that recognizing impacts, let alone solving an ethical dilemma, is much more difficult…. Ethical principles applied to millions of computer users effectively become the equivalent of common law. (Parker, et al., 1988, p. 3)

In this paper we have presented a strategy for integrating social and ethical concerns into computer science undergraduate education throughout the four year curriculum. We have proposed a freshman course, a series of case studies to be presented in all technical courses, and a final social and ethical analysis to be included as part of the senior software engineering project. However, computer use and education now begins in elementary school and is no longer a restricted technical specialty learned only by those who are going to design or program computers. The issue should be viewed from the perspective of society as a whole, as well as from the perspective of preparing future computer professionals. Therefore, the core of ethical precepts relating to computer technology needs to be communicated at all levels of computer education (ISTE, 1987). The principles we have suggested for putting technical study within the framework of social and ethical impacts can be applied to all computer education settings. We should not delude ourselves into thinking that simply teaching about ethics will be a panacea for the problems now faced by society due to computer technology, but we should demonstrate our commitment to ethical behavior by providing an ethical context across computer education at all levels.

4.0 References

ACM/IEEE Joint Curriculum Task Force. “Computing Curricula 1991,” Communications of the ACM, June 1991.

Barger, Robert N. (1989). Notes on systematic philosophies. Unpublished class notes. Eastern Illinois University, Charleston, Illinois 61920.

Bynum, Terrell W. (1991). “Human Values and the Computer Science Curriculum” in Terrell Ward Bynum, et al. (eds.) Teaching Computer Ethics, Research Center on Computing & Society, 1992.

Christensen, Kathleen E. (1986). “Ethics of Information Technology” in The Human Edge: Information Technology and Helping People. Geiss, Gunther, Viswanathan and Narayan (eds.). New York, NY: Haworth Press.

DeLoughry, Thomas J. (1988) “Failure of Colleges to Teach Computer Ethics is Called Oversight with Potentially Catastrophic Consequences.” The Chronicle of Higher Education, February 24, 1988, A15.

DPMA. (Revised January 1989) DPMA Position Statement Handbook. 505 Busse Highway, Park Ridge, IL 60068.

Gotterbarn, Donald (1991). “The Capstone Course in Computer Ethics” in Terrell Ward Bynum, et al. (eds.) Teaching Computer Ethics, Research Center on Computing and Society, 1992.

ICCP. (1983). “ICCP Code of Ethics.” ICCP, 2200 E. Devon Avenue, Suite 268, Des Plaines, IL 60018.

IEEE. (1979). “IEEE Code of Ethics.” IEEE, 345 East 47th St., New York, NY 10017.

IEEE. (1981). “Ethics Source Sheet.” IEEE, 345 East 47th St., New York, NY 10017.

ISTE. (1987). “Code of Ethical Conduct for Computer-Using Educators.” The Computing Teacher, 15 (2), 51 – 53, ISTE, University of Oregon, 1787 Agate Street, Eugene, OR 97403-9905.

Johnson, Deborah. (1988). “The Ethics of Computing.” Edutech Report, 4 (5), 1 – 2.

Mahowald, M.D. & Mahowald, A.P. “Should Ethics be Taught in a Science Course?” Hastings Center Report, Vol. 12, no. 4 (August, 1982), p. 18.

Martin, C. Dianne. and Martin, David. H. “Professional Codes of Conduct and Computer Ethics Education.” Social Science Computer Review, Duke University Press, Spring, 1990 (8:1).

Miller, K. (1988). “Integrating Computer Ethics into the Computer Science Curriculum.” Computer Science Education, 1, 37 – 52. Reprinted in Terrell Ward Bynum, et al. (eds.) Teaching Computer Ethics, Research Center on Computing and Society, 1992.

Parker, Donn B., Swope, Susan, and Baker, Bruce, N. (1988). “Ethical Conflicts in Information and Computer Science, Technology and Business: Final Report” (SRI Project 2609). SRI International, 333 Ravenswood Ave., Menlo Park CA 94025.

Trauth, Eileen M. (1982). “The Professional Responsibility of the Techknowledgable.” ACM Computers & Society Newsletter, 13 (1), 17 – 21.

Turner, A. Joseph. “Summary of the ACM/IEEE-CS Joint Curriculum Taskforce Report: Computing Curricula, 1991.” Communications of the ACM, June 1991, Vol. 34, No. 6., pp. 69 – 84.

Veatch, R. (1977). Case Studies in Medical Ethics. Cambridge, MA: Harvard University Press.

Weiss, Eric (ed.). (1982). “Self Assessment Procedure IX: A Self-Assessment Procedure Dealing with Ethics in Computing.” Communications of the ACM, 25 (3), 183 – 195.

5.0 Sample Primary Source List of Readings

Cooley, Mike. Architect or Bee? The Human-Technology Relationship, South End Press, 1980.

Freiberger, Paul & Swaine, Michael. Fire in the Valley: The Making of the Personal Computer, Osborne-McGraw, 1984.

Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Random House, 1980.

Kidder, Tracy. The Soul of a New Machine. Avon, 1982.

Kuhn, Thomas. The Structure of Scientific Revolutions. Univ. of Chicago Press, 1970.

McCorduck, Pamela. Machines Who Think. W. H. Freeman, 1979.

Marx, Karl. Capital: A Critique of Political Economy, Vol 1. Random, 1977.

Meadows, D.H., Meadows, D.L., and Behrens, W.W. The Limits to Growth. Universe Books, 1972.

Miller, Arthur. The Assault on Privacy: Computers, Data Banks, & Dossiers. Univ. of Michigan Press, 1971.

Minsky, Marvin. The Society of Mind. Simon & Schuster (Touchstone Books), 1988.

Pirsig, Robert. Zen and the Art of Motorcycle Maintenance. Morrow, 1974.

Lammers, Susan (ed.). Programmers at Work: Interviews with 19 of Today’s Most Brilliant Programmers. Microsoft Press, 1986.

Roszak, Theodore. The Cult of Information: The Folklore of Computers and the True Art of Thinking. New York: Pantheon, 1986.

Shelley, Mary. Frankenstein: Or, The Modern Prometheus. Univ. of Chicago Press, 1982.

Toffler, Alvin. The Third Wave. Bantam, 1984.

Turkle, Sherry. The Second Self: Computers and the Human Spirit. Simon & Schuster, 1985.

Wiener, Norbert. God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. MIT Press, 1964.

Weizenbaum, Joseph. Computer Power & Human Reason. W.H. Freeman, 1976.

Realities of Teaching Social and Ethical Issues in Computing

Doris Keefe Lidtke

Introduction

Teaching social and ethical issues in computing seems to have become a requirement in computer science curricula within the past few years. This is not an entirely new development, since these issues have been addressed in the literature since the late 1960s. However, only a few within the profession have been concerned with the issues of computing and values, and only within the past few years has there been some consensus about the need for every undergraduate student to acquire some understanding of the professional and ethical standards of the field. In addition to the recognition by the business community, and by society in general, that undergraduate institutions need to teach professional and ethical practices, both the November 1990 “Criteria for Accreditation” distributed by the Computer Science Accreditation Commission of the Computing Sciences Accreditation Board and the recently published Computing Curricula 1991: Report of the ACM/IEEE-CS Joint Curriculum Task Force (COM91) specifically address this requirement. This paper gives some historical background on teaching social and ethical issues in computing, and discusses the content which needs to be taught, the level of expertise which students should attain, the qualifications of those who teach in this area, and how to evaluate what is taught.

Historical Perspective

The necessity of addressing the social and ethical implications of computing goes back to the 1960s with the publication of articles such as “Rules of Ethics in Information Processing” (PAR68) and books such as Privacy and Freedom (WES67), Computers in Humanistic Research (BOW67) and The Computer Impact (TAV70).

In 1972 Horowitz, Morgan and Shaw – from Cornell University, the California Institute of Technology and the University of Washington, respectively – described a course in “Computers and Society” (HOR72), which they recommended for all majors. They acknowledged that such courses had been taught by themselves and others at a variety of colleges and universities, but “[w]hile there are many publications describing the virtues and vices of computers, there has not yet been published an outline for such a course.” (HOR72) Building on their experiences with teaching courses in this area, they provided not only a complete outline and bibliography but also a discussion of some pedagogical methods for making the course successful. “The main objectives are to educate computer scientists on the present and future impact of computer technology, to investigate some of the difficult moral questions concerning the responsibilities of scientists, and to gain a more humanistic perspective on the use and misuse of computers.” (HOR72) The suggested approach to the course and the content are of interest for comparison to courses offered today:

Approach

  • The course is designed to bring the perspectives of the sciences, social sciences, and humanities to the question of the impact of computers on society. Lectures are used to present factual material and provide a forum for guest lecturers to motivate the students. Small group discussions give the student the opportunity to voice his opinions. Projects, papers, and surveys may be used to channel the students’ exploration of these areas:

Content

  1. State of the art: discussions of current technology, costs; technology forecasts; security of information systems.
  2. Political implications: government use of computers; National Data Bank – history, possible uses and misuses; executive, judicial and legislative use of computers; computers and the law – patents, computer evidence, computer crime; military uses of computers; public opinion polling; regulation of computers by government.
  3. Economic effects: human and technological obsolescence; computerized credit system; corporate information systems and corporate structures; economic impact of the computer industry; impact on developing nations.
  4. Cultural implications: education – new curricula, computer aided instruction; computers in the social sciences and humanities; libraries and information networks; computerized art, films, music; public image of the computer.
  5. Social impact: social groups – technocratic elite, Luddites, communes; changes in man’s view of himself; man-machine interactions; computers and the leisure society.
  6. Moral issues: individual responsibilities to self, employer, and society; professional ethics; role of professional societies; moral issues as reflected in other topics. (HOR72)

Throughout the 1970s interest in computers and society grew both among the computing science community and in the public sector as computers became commonly used in business and industry. ACM addressed some of the issues in this area through committees such as the Committee on Computers and Public Policy and the establishment of the Special Interest Group on Computers and Society.

By 1980 courses in computers and society were offered in many colleges and universities. Alex Hoffman, editor of Computers & Society, the newsletter of the ACM Special Interest Group on Computers and Society, put out a special issue on computers and society courses in 1982 (COM82). These articles included:

John W. Snapper, “Moral Issues in Computer Science”
John King, “Individual and Organizational Factors in Computing”
Theodor D. Sterling, “Social Implications of a Computerized Society”
S. Marlene Pinzka, “The Computer Age”
Rob Kling, “The Micro-Computer Revolution”
Rob Kling, “Social Issues and Impacts of Computing”
Rob Kling, “Reading List for Computing, Organizations, Society”
Judith V. Grabiner, “Perspectives on Computers and Society”
John O. Aikin and Ronald G. Woodbury, “Society and the Computer”

The courses were taught in a variety of departments, including humanities, information and computer science, and history, as well as being offered as interdisciplinary courses. Some of the courses were required of all computing majors. The syllabi and reading lists provide evidence of 1) a variety of approaches to teaching the course, 2) the perceived qualifications needed to teach the course, 3) quite specific goals for some of the courses, and 4) a wide range of readings from which to build a background as a teacher and from which to choose for student readings. Later issues of Computers & Society contain some additional course syllabi and many articles which would be appropriate readings for courses in computers and society.

The latest recommendations by the ACM/IEEE-CS Joint Curriculum Task Force specify that “[t]here are approximately 11 hours of lectures recommended for this set of knowledge units [social, ethical, and professional issues].” The topics to be covered are: “historical and social context of computing, …responsibilities of the computing professional, …risks and liabilities, …[and] intellectual property.” (COM91) The report further specifies that the following kinds of activities should accompany their coverage in a course of instruction:

  1. Write a short expository paper, with references, that demonstrates understanding of the historical or social context of some specific aspect of computing.
  2. Write a short paper, with references, that discusses an incidence of misuse of computers or information technology.
  3. Discuss particular aspects of professionalism in a seminar setting. Draw conclusions about ethical and societal dimensions of the profession.
  4. Write or discuss a short paper discussing methods of risk assessment and reduction and their role in the design process.
  5. Present a case study in copyright or patent violation as a seminar discussion, with an accompanying writing assignment that demonstrates student understanding of the principles. (COM91)

The criteria for accreditation now specify that social and ethical issues must be a part of the curriculum and should be sufficient to earn about one credit for the work done.

In the past two years Computer Professionals for Social Responsibility has been gathering material about and for courses in this area. That material is available at this conference. [Published as Batya Friedman and Terry Winograd, eds., Computing and Social Responsibility: A Collection of Course Syllabi, Computer Professionals for Social Responsibility, 1990.]

What Should be Taught?

The course outline presented early on by Horowitz, Morgan and Shaw covers the important topics. As the field of computing science has developed, the details of the topics covered have necessarily changed, but essentially it is necessary for our students to understand the impact of the work they are doing and their responsibilities to themselves, their employers, the users of the products they develop, and society at large. Students must also be equipped to make value judgments about what they should and should not do as computer professionals. The topics set forth by the ACM/IEEE-CS Joint Curriculum Task Force cover the minimal set.

What Level of Expertise Should Students Acquire?

Many of the students in the computing sciences have had little exposure to philosophical and social issues. In their study of computer science and mathematics they have come to believe that an “answer” is right or wrong, that a program runs or does not run. Too often this attitude is conveyed to the students by the faculty; students seldom see a variety of solutions to a particular problem or different approaches which solve it. Even less often are ethical and social issues a part of their course of study. Yet these students must go out into the profession to design systems which are life critical, which may be used to heal or to harm people, which may be used to assist law enforcement in tracking criminals or to reveal intimate details of the personal lives of particular individuals. Students must be prepared to develop their own personal and professional values and must be trained to act upon these values. This requires that students be trained to see the implications of their work, to evaluate its impacts, and to decide whether or not it is appropriate for them. Students must develop the competence to deal with a variety of common ethical situations.

Qualifications to Teach Social and Ethical Issues in Computing

There are many faculty members who now teach courses in social and ethical issues in computing, and Terrell Ward Bynum addresses this in his paper. The qualifications of these individuals vary over an extremely wide range. As he indicates, most computer scientists can and should discuss the popular-press issues with their students and colleagues. This will lead to consciousness raising, if nothing else. However, it is not sufficient to develop the kinds of expertise needed to teach and do research in the field. Faculty in many fields can, and probably should, teach this course; but to do so well requires some understanding of philosophical issues and of computer science, the ability to assign and evaluate written and oral work in the field, and the ability to lead student discussions. Team teaching by a group of faculty members who together bring the requisite abilities appears to work well in some situations. In other situations a faculty member with interest in the field and training in one of the areas can, through courses or self-study, develop the necessary background to do well in such a course. Attendance at a conference such as this one can supply much material for a course, ideas for conducting classes, and better methods for the evaluation of student work.

How Can We Evaluate What is Taught in Social and Ethical Issues in Computing?


References

BOW67 Bowles, Edmund, Computers in Humanistic Research, Prentice-Hall, Englewood Cliffs, NJ, 1967.
COM82 Computers and Society, 12:4 (Fall 1982), ACM, New York.
COM91 Computing Curricula 1991: Report of the ACM/IEEE-CS Joint Curriculum Task Force, ACM, New York, 1991. Order # 201910.
HOR72 Horowitz, E., Morgan, H. L., and Shaw, A. C., “Computers and Society: A Proposed Course for Computer Scientists,” Communications of the ACM (15:4) April, 1972, 257 – 261.
MOW81 Mowshowitz, Abbe, “On Approaches to the Study of Social Issues in Computing,” Communications of the ACM (24:3) March, 1981, 146 – 155.
PAR68 Parker, Donn, “Rules of Ethics in Information Processing,” Communications of the ACM, (11:3) March, 1968, 198 – 201.
TAV70 Taviss, I. (ed.), The Computer Impact, Prentice-Hall, Englewood Cliffs, NJ, 1970.
WES67 Westin, A., Privacy and Freedom, Atheneum, New York, 1967.

The Use and Abuse of Computer Ethics

Donald Gotterbarn

The creation of courses in applied ethics – business ethics, engineering ethics, legal ethics, medical ethics, and professional ethics – is a very fertile industry. In 1982 Derek Bok, president of Harvard University, reported that over 12,000 distinct ethics courses were taught in our academic institutions.1 As the emphasis on ethics has increased so has the number of such courses.

What is the pedagogical justification for these courses? There are different justifications for different courses, depending on the curriculum in which they are taught – liberal arts, business, engineering, medicine, or law. For professional schools and professional curricula the pedagogical objectives for these courses include: introducing the students to the responsibilities of their profession, articulating the standards and methods used to resolve non-technical ethics questions about their profession, and developing some proactive skills to reduce the likelihood of future ethical problems. The type of institution that supports the course (sectarian or nonsectarian) and the department responsible for it (philosophy, religion, or computer science) also affect the course objectives.

The methods chosen and the issues discussed will vary with the domain of the ethics course – business ethics, clerical ethics, computer ethics, etc. Specific objectives claimed by professors have varied from the very general to the quite specific: e.g., sensitize the students to values; teach a particular professionalism, e.g., indoctrinate the students into a set of values; and teach the laws related to a particular profession to avoid malpractice suits. Rarely do such courses take the approach that they are intended to discover values. These courses generally either start from an accepted set of values and apply them in particular contexts, or they start with a variety of moral theories and go through the exercise of applying them.

Other objectives for these courses come from nonacademic sources. In computer ethics some objectives are based on a concern to prevent computer catastrophes. It is hoped that computer ethics training will eliminate unethical computer activity. This view was first promulgated in response to significant media attention given several incidents of computer trespass. Another belief used to promote the teaching of computer ethics is that errors in programs are attributable to immoral behavior. It is hoped that if we train people in the ways of ethics, they will produce error-free programs.

In his paper, “Human Values and the Computer Science Curriculum,” Terrell Ward Bynum2 offers as a major objective of teaching computer ethics, that such courses make it more likely that “computer technology will be used to advance human values.” This is a laudable goal for any discipline. Indeed one goal of liberal education in general is to help the student develop a sense of human values.3

“Computer ethics” is a relatively new and developing academic area. There have been several attempts to define and categorize the field and how one ought to teach it. There have been courses and textbooks dealing with ethics and computing for more than ten years. In that time computing and its impact on our society have undergone significant changes. As in most areas that are under development, several directions are attempted until the best ones are found. It is a mistake however to canonize an approach simply because it was one of the approaches tried early in the development of an area.

There are two approaches to computer ethics which I believe are mistaken in that they do not forward any of the above cited objectives, and yet they are becoming canonized as the good and the right thing to do. The two positions I am concerned with are a) a method for teaching computer ethics which some have called “pop computer ethics,”4 and b) a position that the adequate teaching of such a course requires someone trained in philosophy or theology. The remainder of this paper addresses these two positions.

  1. Derek Bok, Beyond the Ivory Tower: Social Responsibility of the Modern University, Harvard University Press, 1982, p. 123.
  2. In Terrell Ward Bynum, Walter Maner and John L. Fodor, eds. Teaching Computer Ethics, Research Center on Computing & Society, 1992.
  3. Robert K. Fullinwider, “Teaching Ethics in the University,” available from Indiana University Press and the Poynter Foundation, 1991.
  4. Bynum, op. cit.

Pop Computer Ethics

The concept of “pop” computer ethics is very broad. The goal of “pop” computer ethics is to “sensitize people to the fact that computer technology has social and ethical consequences.”5 This is not a course in sociology, which might use examples of the impact of computers in the workplace as a vehicle to talk about the impact of technology on organizational structures and employment demographics and the values associated with these areas. The type of pop ethics course I am concerned with generally consists of litanies of the evils that can be promulgated with the use of computers. “Newspapers, magazines and TV have increasingly engaged in computer ethics of this sort. Every week, there are news stories about computer viruses, or software ownership law suits, or computer-aided bank robbery, or harmful computer malfunctions.”6 “Pop-ethics” courses are justified on the grounds that it is necessary to sensitize people to the fact that computer technology can “threaten human values as well as advance them.” If we presume that our students are literate and read newspapers or magazines, then they have already read the tales of the threatening computer. Even if they are not literate and only watch television, they will still have this knowledge. It looks, at first blush, as if “pop-ethics” might merely be an accoutrement to the university curriculum, dressing up its concerns with ethics. If it were only this, I would not be concerned with it; but I believe that such courses are in fact a threat to most of the objectives for computer ethics articulated above.

There is a common approach taken by pop ethics courses, and it is primarily negative. Collections of stories used or discussed in these courses are entitled, variously, “RISKS” or “Cautionary Tales.” This is the approach that was taken by Donn Parker in his first collection of scenarios. In that work he describes the principle used to select the scenarios: they were “written in such a way as to raise questions of unethical rather than ethical”.7 This negative approach has consequences for the prospective computer professional as well as for the student who does not intend to be a professional. Leon Tabak, in his excellent paper “Giving Engineers a Positive View of Social Responsibility,”8 argues that such a negative approach fails for students who are interested in pursuing careers in computing. When they are interested in ethics, they are interested in the ways they can positively contribute to the world and apply their skills productively. The pictures painted of technology by such courses are essentially pessimistic. One might also wonder why this technology is singled out – why not have a “gun pop-ethics” course? I think the difficulty with this “pop ethics” yellow-journalism approach is in fact more significant. I argue that this approach is also harmful to the general student population.

The types of issues singled out in negative pop-ethics courses give the impression that computer ethics issues are rare and irrelevant to the students. If computer ethics is concerned with catastrophes – e.g., the failure of a program which controls the safety switches of a nuclear reactor – then I don’t have to worry about computing and values, because the only computer I program is my microwave oven. How does the nuclear reactor story relate to the student who works part-time in the library programming the computer? All this catastrophe thinking has nothing to do with their work. One also might wonder what it has to do with ethics: if a problem is caused by a mistake – an unintentional act – then what does it have to do with ethical decisions? Other items discussed in such courses do involve intentions. They include how easy it is to use a computer to commit fraud or to break into a hospital database. But if computer ethics (the pop version) is about all of those immoral people who use computers to perpetrate evil, how does it relate to the individual moral student who always tries to do the right thing? These examples are interesting but irrelevant to these students. Major social issues are also discussed in these courses. For example, “Is it permissible to sell computers to nations which support terrorism?” The discussion of this is interesting and includes elements of geopolitics and questions about how and whether to propagate scientific discovery. For most students, however, such large questions are not within their present or future sphere of ethical decision making and are best discussed in social science or political science courses. There is not enough time in a semester to resolve such large issues; there is barely enough time to delineate all of the issues involved in questions of this complexity. The attempt to handle these questions in a single class trivializes the subject. The discussion of such complex, large issues strikes many of the students as merely an “academic” exercise.

This brings me to one of my major objections to this approach, viz., the distorted impression of ethics and ethical reasoning that is often produced by a pop ethics course. These courses are not guided by a single coherent concept of computer ethics. Every piece of negative news involving a computer becomes a candidate for discussion in a pop ethics course. The breadth of the material included does not help the student get a clear concept of computer ethics. The degree to which this approach can mislead is evident in a recent work in pop-ethics. I think the authors are taken in by their own approach. They include subjects from the impact of video display terminals on health to the use of computers by organized crime and then they claim that computer ethics has no guiding principles or ethics from which we can reason.

The concept of computer ethics is further clouded by the emphasis on dilemma thinking. Under the guise of getting students to think through a complex problem, they are presented with an ethical dilemma. The following has been used as an example of computer ethics in a pop ethics course. A programmer’s mother is suffering from a rare but manageable disease which, if uncontrolled, will lead to a painful death. The medicine to control the disease is so expensive that the only way the programmer can pay for it is to commit computer fraud. What is the moral thing for the programmer to do? There are two problems with this type of example. First, this is not an issue in computer ethics. Although there are many ethical issues here, such as the responsibility of children to their parents and the responsibility of society to make medicines available at reasonable costs, there is little here about computer ethics. To call this an issue in computer ethics because a computer is used to do the dastardly deed is like saying that beating someone to death with a law book is a problem in legal ethics. A second problem, more pervasive than the elasticizing of the concept of computer ethics, is that ethical issues are equated with dilemmas, i.e., issues for which there are no good resolutions. The programmer has to choose between committing fraud and allowing her mother to die. This example seems to require an action which is the rejection of one or another of our moral standards. Students do need to be made aware that ethical problems can be difficult, but the emphasis on dilemmas in these courses leads students to think that ethical problems cannot be resolved.

Not only does the structure of the pop-ethics course reinforce this no-solution view of ethics, but the view has also been reinforced by the way some current literature has been constructed. For example, despite the fact that there was significant agreement on several scenarios used by Donn Parker in his early work, the only scenarios Parker chose to bring forward into his new edition are those which generated the highest degree of diversity of opinion. That diversity of opinion should not be surprising given the heterogeneity of the group rendering the opinions – lawyers, philosophers, computer managers, etc. There is significant evidence that, in professional ethics, there is actually a convergence of opinion about computer ethical standards.

From the view that there is never any agreement in ethics, there is a danger that students will conclude that it is a waste of time to think about ethical issues at all. Ethics as presented in these courses is not relevant to the student taking the course. It creates the impression that the issues of “computer ethics” are rare and that, because there is no agreement, the discussion of computer ethics is useless. The emphasis on the negative side does not give the student any experience avoiding real computer ethics problems. Given the dilemma nature of the teaching, an attitude of surrender is encouraged. If ethics is a matter of opinion and all opposing arguments have equal weight, then students will not expect support for what they consider to be a moral act. When they are placed in a situation which requires them to take a moral stand, they are more likely not to “make a fuss” and not to stand up for the moral choice.

The use of “yellow journalism” is sometimes an effective technique to fire up the masses. It presumes the existence of a set of accepted values which have been violated. The problems with this approach to computer ethics in the classroom are:

1. Portraying ethics simply as dilemmas leaves the student with the impression that ethical reasoning is fruitless. This is dangerous in computer ethics and is even more dangerous if the attitude spreads to other areas of their lives.

2. The reactive emphasis does not encourage proactive behavior. Students are encouraged merely to judge the morality of an act that has occurred rather than guide behavior to prevent or discourage immoral behaviors.

3. It encourages reactionary thinking rather than anticipatory thinking. The negative approach encourages actions against what is perceived as the value-threatening technology, rather than action to turn the technology in a value-supporting direction. For example, we are encouraged to make laws against nationwide databases rather than make laws which encourage the moral use of nationwide databases. Instead of praise for automatic teller machines, they are characterized as “… a good example of how a new technological device creates new opportunities for fraudulent activity.”

“Pop” ethics might have had a place when computing was a remote and esoteric discipline, but I believe that in the current environment this approach is dangerous to the preservation and enhancement of values. This model of computer ethics does not forward any of the pedagogical objectives for teaching ethics cited above.

If one is to do anything like “pop” computer ethics, the typical approach must undergo some serious revision. One should look at the positive opportunities of computing and how computing technology can support our needs and further our values. If one looks at computing technology that works, one finds in many cases that it is the exercise of and concern for values that has increased its chances of working. Good computing products have followed careful standards. They were built with the well-being of the computer user in mind. These revised courses should ask the students to think of new applications for computing which are consistent with their values and to evaluate the potential risks involved in such applications. They should talk about the minimal controls they would need for the development of these new applications. They should discuss some ethical cases which are real issues in computer ethics and for which there are solutions. There are standards of good system design which should be discussed. Above all, students need some proactive guidance. There are effective standards for reaching ethical decisions in many situations. They should be discussed in this revised approach.

6. Bynum, op. cit.

7. Donn B. Parker, Ethical Conflicts in Computer Science and Technology, AFIPS Press, 19??

8. Leon Tabak, “Giving Engineers a Positive View of Social Responsibility,” SIGSCE Bulletin, vol. 20, no.4, 1988, pp. 29 – 37.

9. Tom Forester and Perry Morrison, Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, MIT Press, 1990, p.4.

10. For further discussion of this example, see D. Gotterbarn, “Computer Ethics: Responsibility Regained,” National Forum, Summer 1991.

11. Donn Parker, et al., Ethical Conflicts in Information and Computer Science, Technology and Business, QED Information Sciences, 1990.

12. Leventhal, Instone, and Chilson, “Another View of Computer Science: Patterns of Responses Among Computer Scientists,” Journal of Systems and Software, special issue edited by Donald Gotterbarn, January 1992.

13. Forester and Morrison, op. cit. p.8.

Who Teaches Computer Ethics?

The revision presented above is needed to remove the dangers of pop ethics. To do this well requires much more than the retelling of horror stories by the professor. Many faculty members hesitate to discuss ethical issues in their classrooms. They feel a lack of expertise in the face of a group of academics who have reserved discussion of these issues for themselves.

Computer scientists shy away from involving their students in ethical discussions because of the apparent complexity of the philosophical approaches. Although it is true that at a refined level these philosophical theories are very complex, at the level of application the theoretical complexity can be largely ignored.

It is often said that only a philosopher should teach ethics.14 When we say this, I think we have accepted the philosophers’ presumption that their moral theories can be used to solve moral problems. As a result, works on computer ethics contain philosophy sections on teleological theories and deontological theories.15 Behind these two theories lie two approaches: one emphasizes the results or consequences of actions (teleology), and the other emphasizes motives or duties (deontology). Armed with these theories, we are supposed to solve the practical ethical problems which confront us daily. “Philosophers are no more educated in morality than their colleagues in the dairy barn; they are trained in moral theory [italics mine], which bears about the same relation to the moral life that fluid mechanics bears to milking a cow.”16 A mistake made by the philosopher is to portray ethics as “pick your theory and then reason to an answer.” They believe that different theories will lead to different sets of answers. Advocating this model of reasoning reinforces the view that all ethics discussion is fruitless because there are as many answers as there are theories.

It has been shown that these theories primarily conflict on the level of theory and justification rather than on the level of action.17 No theory which opposes the inherited conceptions of what actions are considered right and wrong could expect to be given much credence. These moral theories are like all theories – merely alternate descriptions of how we move through our daily lives responding to roles, standards, purposes, and duties. Lack of detailed acquaintance with them should not prevent us from discussing ethics, since everyone has to act in the world whether or not they were trained in moral philosophy.

In professional computer ethics, there is an emphasis on a set of software engineering standards accepted by the professional community engaged in the software development process.18 These are technical standards for testing, designing, and developing quality software. Following this accepted set of standards is an obligation of the professional, and it is of little practical consequence whether that is done out of a sense of duty or because one has a contractual relation with a customer. One might ask whether the description of software engineering I have offered is based on consequentialism – doing things because of the significance of the consequences – or deontologism – doing things because they are the right thing to do. I think such a question has little relevance to the moral life of a software engineer. We emphasize the standards of the process, and following them could be described in terms of duty or deontology; but we emphasize the process of building the software because we believe the product will be better. This could be described in terms of consequentialism. The moral theory used to describe the event has little impact on the moral life. The philosopher has no special competence here.

One does not need to be a philosopher to discuss software engineering ethics. The importance one places on the theoretical dimension is a function of one’s aims in teaching the course. If our objectives are acquaintance with philosophical ethical theory and tolerance for ambiguity and disagreement, then someone trained in philosophy would be useful. In applied professional ethics courses, our aim is not acquaintance with complex ethical theories; rather, it is recognizing role responsibility and awareness of the nature of the profession. A few hours of reading about these philosophical theories is adequate preparation.

If we are not going to teach ethical theories in the computer ethics course, then what are we going to teach? We are interested in teaching how to anticipate and avoid ethical problems in computing. We are also interested in providing techniques or methodologies which can guide our behavior when a problem does arise. This is best done within the context of our technical curriculum. For example, in a class dealing with writing software requirements one could look at a case like the following. Suppose you were asked to develop a system which would collect times for ambulance trips to accident scenes and back to hospitals, starting from different ambulance service locations. This data was to be used to redesign ambulance service districts to reduce the time spent getting patients to hospitals. The requirements are developed, and during prototyping it is discovered that there are a significant number of trips for which no time is recorded. In determining how to handle these zero times, it is discovered that in most of these cases no time was recorded because very critical patients were being transported and the paramedic was too busy keeping the patient alive to record the time. At this stage in development it is discovered that the most significant times for the study are the times that are not recorded. The discussion of the technical and professional options in this situation is teaching computer ethics.19 Students are learning how to handle a morally significant situation in an application area.
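
The bias hidden in those zero times can be made concrete with a small, entirely hypothetical sketch; the data values and field names below are invented for illustration and are not taken from the actual study:

```python
# Invented sample data: one district's ambulance trips, where a
# recorded time of 0.0 means "the paramedic was too busy to record it."
trips = [
    {"district": "A", "minutes": 12.5},
    {"district": "A", "minutes": 0.0},   # critical patient, unrecorded
    {"district": "A", "minutes": 14.0},
]

# Naive approach: averaging over everything lets the zeros pull the
# mean down, so the district looks fastest on exactly its worst runs.
naive_avg = sum(t["minutes"] for t in trips) / len(trips)

# More defensible approach: treat 0.0 as missing, average only the
# recorded runs, and report the gap, so the redesign decision is made
# with the missing data visible.
recorded = [t["minutes"] for t in trips if t["minutes"] > 0]
missing_count = len(trips) - len(recorded)
avg = sum(recorded) / len(recorded)

print(f"naive: {naive_avg:.2f}, recorded-only: {avg:.2f}, "
      f"unrecorded trips: {missing_count}")
```

The point of such an exercise is not the arithmetic but the professional judgment: each technical choice about the zeros quietly encodes a decision about whose trips count.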

The moral reasoning involved here is not generated from some esoteric theory which requires a trained philosopher to understand. The moral reasoning here is based on reasoning by analogy. We can examine the technical alternatives and, based on past experience, attempt to anticipate their morally significant outcomes. It is the technical knowledge that enables us to understand the potential consequences. The judgment involves considering the technically viable alternatives and making choices guided by both our technical skills and professional values. The technical discussion in class, deciding which are the better solutions and why, is teaching computer ethics. The use of detailed technical examples in the class is a way to develop the skill of anticipating some of these ethical problems.

One can use cases which are less technical to show the relation of values to computing decisions. If one is asked to test some software, and funds are exhausted before the testing is satisfactorily completed, and there is no possibility of further funds, one has several options. Whichever option one takes must be conditioned by moral rules such as “don’t deceive,” “keep promises,” and “act professionally.” Depending on the type of software being tested, rules like “don’t cause pain” and “don’t kill” also might come into play. Different examples bring different moral rules into play. Consider a person who was asked to write a database for a library check-out system to determine the popularity of books and the number of additional copies that should be ordered, if any. The association of the patron’s name with the book checked out has a potential for the violation of several moral rules, such as the violation of privacy and the deprivation of pleasure because one does not feel free to read what one wants to read.
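
The library case also lends itself to a design discussion: the stated requirement (popularity counts for ordering decisions) does not actually need the patron–book association at all. A minimal, hypothetical sketch of that alternative, with all identifiers invented:

```python
from collections import Counter

# Popularity data keyed by book only; the patron's identity drives the
# loan transaction itself but is never stored alongside the title.
checkout_counts = Counter()

def record_checkout(patron_id, book_id):
    # patron_id would be used by the circulation system (not modeled
    # here); only the book identifier reaches the popularity database.
    checkout_counts[book_id] += 1

record_checkout("patron-17", "book-42")
record_checkout("patron-03", "book-42")
record_checkout("patron-17", "book-99")

# Data sufficient for deciding which titles need additional copies:
print(checkout_counts.most_common(1))
```

Discussing why the aggregate design satisfies the requirement while the obvious patron-to-book design does not is exactly the kind of values-in-design conversation advocated here.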

I also believe the student needs some general acquaintance with ethical argumentation. Students need to understand how different ethical values compete and how they sometimes have only limited application. They need to see how these values get prioritized and how that prioritization affects decisions. This can be accomplished by having the student read articles in professional ethics on analogous issues. When discussing the ambulance case, one can read articles in Stevenson’s book20 or Johnson and Snapper’s anthology.21 I argued in the first part of this paper about the dangers of pop ethics, so one must be careful that the examples chosen meet the following conditions:

  1. It is not told because it is impossible to resolve.
  2. It has enough detail to be able to do technical analysis.
  3. The main protagonist is not morally bankrupt.
  4. It is related to an issue in computing.
  5. It can be discussed using moral values.

The stories should help develop a proactive attitude and be directly relevant to the class topic.

The pedagogical goal of discussing ethics in the technical curriculum is development of the following set of skills:

  1. The ability to identify correctly the potential for an ethical problem in a particular context and/or to identify what moral rules are being compromised.
  2. The ability to identify the cause of these issues; determine several alternate forms of action consistent with morality in that context; and, for each of these possible actions, determine expected outcomes and reasons for taking or not taking that action.
  3. The ability to select a workable solution and work through the situation, either technically or morally.

Teaching these skills does not require learning new theories. These are process skills; therefore the emphasis should be on a process that can be applied to changing contexts. These skills, like other abstract skills, are learned by practice. A single class meeting or a single chapter of a textbook discussed in one course will not develop these skills. To teach a single ethics course or have a special professor who is the ethics professor reinforces the mistaken notion that ethics and the practice of computing are not related. What is needed is practice several times in a course. A case study methodology does this best, with each case addressing one or more specific phases of the software development process. This methodology involves giving a brief description of a professional situation that might involve an ethical problem. The class discusses the situation, trying to identify whether there is an ethical issue. If they find a situation which involves a violation of moral rules, they try to determine alternate approaches which would eliminate or at least reduce the moral difficulty. In work done by James Rest, the case methodology has been shown to be most effective when professors and students discuss the case as peers. The results of this work suggest that it is better if the professor is not an ethics specialist.

I believe computer ethics should be taught like engineering ethics, i.e., ethical issues directly related to the practice of engineering should be discussed in every engineering course. The ABET standard states: “An understanding of the ethical, social, and economic considerations in engineering practice is essential… as a minimum it should be the responsibility of the engineering faculty to infuse professional concepts into all [italics mine] engineering coursework.”22 Discussions of computer ethics should be integrated throughout the curriculum. Studies of business ethics courses at the University of Delaware found this to be the most effective style of teaching professional ethics. It is also good if a major project course for seniors can be used to tie together most of the professionalism issues discussed throughout the curriculum.

Computing has come a long way both technically and ethically. We have learned how to apply moral rules and values to computing decisions. This skill and knowledge should be the subject of good computer ethics courses.

East Tennessee State University

14 M.B. Mahowald and A.P. Mahowald, “Should Ethics be Taught in a Science Course?,” The Hastings Center Report, vol 12, no 4. (Aug 1982) p. 18.
15 Deborah G. Johnson, Computer Ethics, Prentice Hall, 1985.
16 Robert K. Fullinwider, ibid., p.2.
17 Richard DeGeorge, Business Ethics, (New York, 1982).
18 For a complete analysis of this contract basis of computer ethics, see D. Gotterbarn, “Value Free Software Engineering: A Fiction in the Making,” in Proceedings of the Workshop on Software Engineering Education, IEEE International Software Engineering Conference, 1991.
19 For a full analysis of this case, see D. Gotterbarn, “Professionalism and Ethics,” a video-taped lecture in Software Project Management for the Software Engineering Institute’s video dissemination project, April 9, 1991.
20 J.T. Stevenson, Engineering Ethics: Practices and Principles, Canadian Scholar’s Press 1987.
21 Deborah Johnson and John W. Snapper, eds., Ethical Issues in the Use of Computers, Wadsworth Publishing Company, 1985. [out of print]
22 Accreditation Board for Engineering Technology (ABET), Criteria for Accrediting Programs in Engineering in the United States, 1981.

Courting Culture in Computer Science

Batya Friedman

Computer science is practiced within a technical context steeped in logic, representations, and techniques. It is also the case that computer science has pervasive social consequences: invasion of privacy, worker surveillance, computer-based fraud, automated warfare, to name a few. Thus, it is increasingly obvious to those inside and outside our field that our technical activity needs to be responsive to the social aspects of computing.

To proceed with this endeavor, I shall suggest that computer scientists need more adequately to court culture. I use the phrase – courting culture – to signify a process through which we integrate the technical with the social and ethical, and refine our humanitarian sensibilities in the course of our everyday computing practices. Elsewhere (Friedman & Kahn, in press), I have discussed this process in the context of computer system design. I shall speak today about how we might court such a culture from an educational perspective within an academic institution. In doing so, I draw from my recent experience as Director of the Interdisciplinary Computer Science Graduate Program at Mills College where we have, to some extent, explored such a courtship.

The ideas here are organized around two broad types of educational activities: structured and unstructured. Structured educational activities refer to the explicit curriculum (such as specific courses and curriculum units). Unstructured educational activities refer to the rich background of educational activities that support the structured curriculum (such as communications through electronic mail, departmental colloquia, and faculty advising). Both types of activities are important. I will, however, emphasize unstructured activities for they often go unrecognized when considering the social aspects of computing.

Structured Educational Activities

Perhaps the most widespread structured approach that addresses social and ethical considerations in computer science education is a stand-alone course on the subject. For example, at Mills College we have offered two such courses, one for non-technical students on Computing & Society, and one for technical students on the social responsibilities of the computer professional. The syllabi for these courses at Mills, along with those of many other courses such as Terry Bynum’s course on computer ethics, can be found in an edited collection by Friedman and Winograd (1990). This syllabi collection includes the following course topics: social implications of computing, social analyses of computing, ethics for computer professionals, computers in the arts, computers in the military, computers in the third world, and computers in education.

Drawing from relevant research in the field of engineering ethics education (Baum, 1980), one limitation of stand-alone courses, however, seems apparent: Such courses separate consideration of social and ethical concerns from the rest of students’ technical experience and learning. Partly in response to this limitation, integrative approaches have been advocated. One approach entails the integration of curriculum units on the social and ethical aspects of computing throughout the technical curriculum (Miller, 1988). Another approach integrates the social and ethical aspects of computing with selective components of the curriculum, particularly those that involve people-centered computer system design (Winograd 1990, 1991). At Mills we have explored this second approach in a graduate level technical communications course in which students design materials to support computer use in an actual setting. For example, in the 1989 – 90 academic year, when Mills automated the college library, students worked with library staff to design (a) an online map to help patrons locate materials and facilities in the library, (b) hardcopy materials to help patrons navigate the online catalog system, and (c) hardcopy materials to help librarians comprehend and execute the computer system back-up procedures. Course instruction emphasized not simply accurate and even elegant design, but design and design process highly responsive to the human context.

The integrative approaches go some distance to decompartmentalize student engagement with the social and ethical aspects of their technical knowledge. Nonetheless, in my view, we need additional mechanisms to embrace the whole of students’ educational computing experience. The following unstructured activities can provide such mechanisms. The goal is to provide a larger context for what students in their future lives and careers will take to be the practice of computing.

Unstructured Educational Activities

Unstructured educational activities are not rigidly fixed or systematic; rather they represent informal efforts to promote and support opportunities for student engagement. Five such activities follow. Likely enough, computer science faculty and departments already engage in some of these unstructured activities to support their technical education. I want to sketch how these same unstructured activities can be used to integrate the technical with the social and ethical.

  • Electronic mail and bulletin boards. The information we bring to students’ attention through electronic media carries an implicit message about what students need to know and be concerned about as members of a computing community. Most of our electronic mail and bulletin boards for students communicate technical or pragmatic information: tips for homework, notice of new machines and software on campus, when and where a particular user’s group will meet, how to obtain new shareware, and the like. However, more can be done. For example, we can involve students in bulletin boards, like RISKS, that do discuss the social aspects of computing. And we can challenge students intellectually based on current issues. For example, consider the recent controversy over Lotus’ proposed software to provide marketing information about millions of Americans. Electronic mail to students could not only inform them of the situation but engage them in substantive discussion of, say, the implications of a software design for potential privacy violations. Through such broader use of electronic media we draw students into the larger societal controversies and discussions.
  • Informal classroom discussion. In our technical courses, we can respond to current social issues relevant to the course material. For example, in a course on algorithms it could be appropriate to discuss recent court decisions on the patentability of algorithms and the implications for algorithms as intellectual property. This is not to say that such discussions should be lengthy or occur all that often. It is to say that the discussions should be genuine in the sense that they draw on our own interest, and convey the immediacy and importance of the issues. Indeed, an absence of such informal classroom discussion can communicate to students that the technical can and should be separated from the affairs of computing in the larger society.
  • Departmental colloquia. In the span of a year at Mills we typically intersperse in our computer science colloquia series two or three social topics among the technical ones. In general, such colloquia can provide both faculty and students with common ground on social topics that then serves as a basis for on-going discussions. Moreover, since colloquia tend to be highly visible forums, they can help validate the value of such discussions.
  • Student involvement in school computing policy. Students directly encounter the social aspects of computing through the policies that govern their own school computer use. Such encounters provide rich educational opportunities. For example, at Mills, computer science undergraduates and graduate students participated in discussions about student access to specialized computer equipment. Through the discussions, students became keenly aware of how different policy decisions could affect their peers and their sense of community. While bounded by certain parameters set by the faculty, students worked with faculty to determine policy. Other policy areas that are amenable to student participation include, for example, allocation of computer time, welcoming novices into the computer center, promoting access to information, and establishing security for systems. (For more detail, see Friedman, 1986, 1991.) By examining and defining social policy, students learn to navigate through some of the very issues they will encounter in their later lives as computer professionals.
  • Faculty advising. How do we advise students? For example, what is our response when a student comes to us with a social or ethical concern? Do we say, perhaps, “Yes, uh huh, but shouldn’t you be spending your time on your technical work?” Or in some other way do we dismiss or change the topic? Or, instead, do we say something like, “Yes, that’s interesting, and how does that social or ethical concern inform your technical work?” That is, we can help students to see that their social concerns need not be in opposition or in competition with their technical education. The same holds true for career advising. For example, are we prepared to advise a student who tells us she wishes to pursue robotics but does not want to contribute to the development of “smart” bombs? Or are we left silent (as I was) because we are unaware of non-military options? Perhaps I am overstating the case based on my own experiences and those of close colleagues. I do think, however, that students construct understandings of themselves in relation to the field based partly on what we as faculty bring to the advising table.

Conclusion

I began my talk today with the suggestion that computer scientists need more adequately to court culture. I have tried to provide a sense of what is meant by this idea through the structured and unstructured activities. Taken as a whole, these activities represent a position that education involves, among other things, a process of social transformation. Now, this is a powerful if, unfortunately, loaded term because of recent discussions surrounding “political correctness,” and it leads to ideas I do not have time to develop here. But because of the possibility of misinterpretation, I should at least mention that in my view social transformation is bounded by an objectivity. Partly through the nature of rigorous analytic scrutiny, not everything goes. Not every position can be defended. Indeed, from a non-political perspective, it is rigorous analytic scrutiny in concert with humanistic sensibilities that leads us as a discipline to respond constructively to the pressing social problems that arise out of our very practice.

Colby College

References

Baum, R. J. (1980). Ethics and engineering curricula. Briarcliff Manor, NY: The Hastings Center.

Friedman, B. (1986, October). If I only had one more computer… Facing the sticky issues of resource allocation. Classroom Computer Learning, pp. 44 – 45.

Friedman, B. (1991). Social and moral development through computer use: A constructivist approach. Journal of Research on Computing in Education, 23, 560 – 567.

Friedman, B. & Kahn, P. H., Jr. (in press). Human agency and responsible computing: Implications for computer system design. Journal of Systems and Software.

Friedman, B., & Winograd, T. (Eds.). (1990). Computing and social responsibility: A collection of course syllabi. Palo Alto, CA: Computer Professionals for Social Responsibility.

Miller, K. (1988). Integrating computer ethics into the computer science curriculum. Computer Science Education, 1, 37 – 52.

Winograd, T. (1990). What can we teach about human-computer interaction? Proceedings of CHI’90, 443 – 449.

Winograd, T. (1991). Introduction to the project on people, computers, and design (Report No. CSLI-91-150 PCD-1). Palo Alto: Stanford University, Center for the Study of Language and Information.

Appendix: Track Report on Teaching Computer Ethics

National Conference on Computing and Values
Report on the Track:
Teaching Computer Ethics

The National Conference on Computing and Values occurred on the campus of Southern Connecticut State University, August 12 – 16, 1991. One of the six conference “tracks” was devoted to the topic “Teaching Computer Ethics.” The Track Coordinator for that track was Professor Keith Miller of the Computer Science Department at the College of William and Mary. This Appendix is Professor Miller’s summary report on the activities, events and recommendations of that track.

Keith Miller

The Teaching Computing and Values track at the NCCV conference was the most heavily attended. Most of the participants considered themselves educators, but they had diverse backgrounds in industry, business, and government. This diversity led to lively and (in the main) constructive discussions. From comments that I received, many people found the track sessions as well as the other conference events challenging and informative. The diversity and the size of the teaching track complicated the job of coordination, but the resulting breadth of experience was, in my opinion, the major strength of the teaching track.

Whenever there are lists of people, I have tried to list them alphabetically by last name.

Track Pack

Each track attendee received a “Track Pack” of readings. The track pack materials were separated into four sections: Food for Thought, Portfolio Materials, Track Event Working Papers, and The NCCV bibliography. The contents of the first three sections were:

Food for Thought:

Bynum, T. W., “Three ‘levels’ of computer ethics.” Working paper for NCCV.

Moor, J. H., “What is Computer Ethics?” Metaphilosophy, Vol. 16, 1985, pp. 266 – 275.

Shneiderman, B., “Human Values and the Future of Technology.” Proceedings of CQL ’90. Computers & Society, October, 1990, pp. 1 – 6.

Portfolio Materials:

Miller, K., “Integrating computer ethics into the computer science curriculum.” Computer Science Education, Vol. 1, 1988, pp. 37 – 52.

Gotterbarn, D., “A ‘Capstone’ Course in Computer Ethics.” Working paper for NCCV.

Turner, J., Personal communication, March, 1991.

Track Event Materials:

Bynum, T. W., “Human Values and the Computer Science Curriculum.” Track address working paper for NCCV.

Lidtke, D. K., “Realities of Teaching Social and Ethical Issues in Computing.” Working paper for NCCV.

Martin, C. D., and H. J. Holz., “Non-Apologetic Computer Ethics Education: A Strategy for Integrating Social Impact and Ethics into the CS Curriculum.” Working paper for NCCV.

Gotterbarn, D., “The Use and Abuse of Computer Ethics.” Working paper for NCCV.

Friedman, B., “Courting Culture in Computer Science.” Working paper for NCCV.

NCCV Bibliography:

(Available elsewhere in the NCCV materials.)

Track Meeting Organization

On the first day of the conference, 48 attendees had signed up for the teaching track. The entire group met for an initial session. Sylvia Pulliam described an ongoing study concerning the teaching of computer ethics, and she distributed survey questions for this study. Keith Miller made some introductory remarks and then separated the participants into four groups. The division was organized according to the areas of expertise of the participants, as reflected in their registration materials; the coordinator tried to assign the groups to maximize the diversity of backgrounds in each. Although the groups were to be equal in size, late arrivals and early departures conspired to enlarge some groups and diminish others. Because of the comings and goings, the list of participants here may not be complete. I regret leaving anyone’s name off, because all contributed to the groups.

After the groups had formed, they worked independently on Tuesday, Wednesday, and part of Thursday. Then the groups reassembled for a track meeting to discuss group findings and to organize a report for the whole conference. A small group was formed which coordinated the track presentation given to the conference on Friday morning.

Highlights of Group Discussion

Before giving a brief overview of each group’s discussions, I will take the liberty of including some personal observations. As coordinator, I visited each of the four small groups during their discussions. Each group had its own personality, goals, and protocol. At least when I was there, some groups focussed on writing positions, and others emphasized discussions. There seemed to be intense interest in knowing how people in different circumstances approached both the subject and the pedagogy of computer ethics.

In my pitifully short encounters with each group I was struck by the number of times people would stop and reflect on what was said; it indicated to me that people were not just talking and debating – they were learning from each other as well. My short reports here cannot do justice to that aspect of these four groups: the learning that went on during the hours of group meetings and in the informal conversations that continued afterward.

The overviews given below are drawn from written materials by the groups themselves and (in some cases) from my own notes taken during a group’s session. I have done some editing of the materials given to me by the groups. Any errors or omissions are my responsibility.

Group 1

Members: Bob Barnes (Philosophy, Lehigh University), Tim Bergin (CS, American University), Fran Grodzinsky (CS, Sacred Heart University), Bruce Jawer (IBM), Penelope Karovsky (CSSCR, University of Washington), Ed Lowry (DEC), Jim Novitzki (North Central College), Dave Nuesse (CS, University of Wisconsin-Eau Claire), Mary Ann Pellerin (Ed Tech & Media, Central CT State University), Helen Wolfe (Teikyo Post University).

The divisions of opinion in this group are reflected in a series of comments that came from the wide-ranging discussions:

Students need intellectual tools for reasoned debate and discussion. Resource: “Understanding Moral Theories” by Claude Harris. Instill in students the ability to make choices. What’s really important is discovering that there are choices and questions about those choices.

Moral theory’s two pitfalls: paralysis of analysis and a series of cases that do not converge. How do we make the theoretical real? Can we transfer gut-level experience to the classroom? Resource: Tom Snyder Productions has a media simulation about ethics. Students can put together a code of ethics and try it out on cases. Classroom discussions can generate productive conflict.

Stealing software is stealing… isn’t it? There are already policies. Software as idea and as product. But technology shakes up traditional definitions like property. Copyright law destroys the “softness” of software. Are we witnessing the death of the software industry? Libraries of software can be a public good. Different strategies of paying for software development: pay up front and then allow copying.

How are software standards related to computer ethics? Who bears responsibility for the impact of computer product development? What responsibilities are attached to the designation “computer professional”?

Don’t kill the poets! Over-restriction can stifle creativity. Has technology outstripped our ability to manage it? Is there a difference between computer ethics and engineering ethics?

Political and economic forces underpin “moral” issues; these issues are also political and economic issues. Are we examining problems of moral choices or problems of legal definition? Challenge students to agree on definitions to enable productive debate about the underlying issues. The intersections of legal and ethical questions are of particular interest.

A “guardian angel” cannot be automated. But codes, laws, and norms can act as a guardian, if not an angel. “Do you obey the current law?” is one way to pose questions; another way is to explore the grey areas, where ideas are nebulous and ill-defined. The grey area is intellectually challenging.

Group 2

Members: Mary Carian (Alverno College), Yvon Cayoette (Philosophy, University of Quebec), Eduardo Chaves (Brazil), Peter Danielson (Applied Ethics, University of British Columbia), Ed Davis (Westfield Companies), Batya Friedman (CS, Colby College), Don Gotterbarn (CS, East Tennessee State University), J.A.N. Lee (CS, Virginia Tech), Ivon Lowsley (CS, Southwest Missouri State University), Roberta Barta Mohagen (BUED), Eric Roberts (CS, Stanford University), Mary Robinson-Slabey (CS, Mansfield University), Bill Schnippert (Business & Economics, Elmira College), Evelyn V. Stevens (Computing Services, University of Delaware), Bob Workman (Southern CT State University).

Eric Roberts acted as the moderator of group 2, and he produced this list of recommendations from the group:

1. Increasing Support for Computing and Values

Need to improve the status of computing and values within CS:

  • ensure more attention to teaching values in tenure and promotion cases
  • place more value on teaching in general
  • expand notion of what constitutes CS faculty
  • change expectations of what constitutes a reasonable research grant size
  • ACM, IEEE, and CSAB are important forces toward enhancing status
  • need a scholarly journal in the field

Does enhancing academic status diminish student acceptance?

  • status is enhanced by firmer grounding in traditional disciplines, such as philosophy, but students are most receptive to applied ethics
  • counterpoint: CS departments need people with the expertise to apply classical analysis

Need to develop ethical expertise within CS programs:

  • do we all need to be half computer science and half philosophy?
  • consider joint appointments with other fields
  • consider team-teaching courses with faculty from other fields
  • propose to NSF that they solicit a half-dozen proposals for position papers on how to solve the staffing problems
  • propose some kind of Chautauqua-style courses for faculty development

Need to strengthen the intellectual depth of the field:

  • no good body of intellectual work yet exists
  • consider carefully the question of language: computer ethics is too restrictive and we need to include a wider spectrum of social and ethical concerns
  • encourage Communications of the ACM to include more material on social implications

Need to develop greater involvement by employers:

  • employers are not getting what they need: more education and less training
  • counterpoint: current curriculum is too highly focused on employment

Need to consider educational issues across the university:

  • students are not getting a broad education; more exposure is needed to social science and humanities as background to computing and values
  • might need a fifth year to provide time to cover both technical and social issues
  • may need to narrow focus, leaving rest to broader curriculum
  • we should export our material to other disciplines

2. Resources and Materials

Need more communication channels for exchanging resources:

  • Canadian mailing list offered as example
  • ETHICS-L distribution list
  • comp.risks and the anthologies in Software Engineering Notes
  • SIGCAS is a useful reference and distribution medium
  • Arthur Andersen ethics group
  • essential to be inclusive: networks are important but by no means universal and we need to maintain traditional postal distribution
  • can the Research Center on Computing & Society be of help?
  • offer: Evelyn Stevens will distribute her ongoing materials collection assuming minimal support is available for postage (important to check with authors)
  • offer: Peter Danielson has a collection of materials gathered from comp.risks and other sources on Caller ID, and he will make resource lists available

Need collections of case studies:

  • Donn Parker’s SRI collection
  • CACM computing and ethics self-assessment
  • useful to have real situations as opposed to hypotheticals
  • comp.risks provides a source
  • New York Times/Wall Street Journal can be current sources (need for caution?)

Other sources:

  • short films and other audiovisual materials need to be produced
  • people who embody the dilemma are excellent resources
  • CPSR has significant collections of material, some of which is chapter-based
  • IEEE Annals of the History of Computing
  • next Journal of Systems and Software devoted to computer ethics
  • we should make more use of SIGCSE and its newsletter

3. Integration Throughout Curriculum vs. Capstone Courses

Where should this teaching be done in the curriculum?

  • ideal is integration

  • practical considerations may force a separate course in some institutions
  • evaluation is often missing in integrated approach
  • need to convince students that values are universal
  • case study among peers seems to be most effective method
  • exposure does increase sensitivity
  • recommendation: the Research Center on Computing & Society can help distribute test instruments and other evaluative tools
  • need to establish objectives for student achievement

Should such a course be the responsibility of Computer Science?

  • consensus: CS should have control, although others may be involved
  • students are sometimes dissatisfied with a more general ethics course
  • need more empirical data
  • Exxon studies mentioned as plausibility argument
Should this be required or elective?

  • CSAB and ACM/IEEE guidelines insist that it be required at some level
  • counterpoint: some opposition was expressed to general idea of requirements

Other notes:

  • students don’t come in a vacuum; there is an ethical context
  • students will have several careers; danger of overspecialization
  • we ourselves need to take an honest approach to ethics; can’t sugarcoat

Group 3

Members: Peg Cibes (Math, University of Hartford), Judy Edgmand (CS, Oklahoma State University), Jim Green (Northern Michigan University), Joyce Currie Little (CS, Towson State University), Michael McFarland (CS, Boston College), Bob Minicucci (Consultant), Stanley Polan (CS, Franklin Pierce College), Sylvia Pulliam (CS, Western Kentucky University), Nancy Saks (CS, Wittenberg University), Wojciech Suchon (Logic, Jagiellonian University), Carolyne Tropper (CS, Rhode Island College), Mary B. Williams (Center for Science & Culture, University of Delaware).

Sylvia Pulliam acted as moderator of this group. The group produced a series of lists, recorded by Sylvia:

WE NEED:

Teaching Materials:

  • texts: easy texts at first, then progressively more challenging – a text should be classified according to its intended audience: CS and non-CS students have different requirements in a text – Media: video, etc. – Monographs – proactive (positive) as well as reactive (negative) emphases needed – some materials should be packaged for those uncomfortable teaching ethics – evaluation techniques – collect and distribute current writings – support for hypertext and multimedia – clipping file

Faculty Training and Development:

  • local and inexpensive – run by computer science faculty in conjunction with other faculty from more than one of the following other departments: philosophy, humanities, social sciences
  • invite the press (those with some scientific expertise) to seminar as participants to broaden their perspective
Methodology:

  • case studies – discussion – oral presentations – position papers – debates: assign roles – individual or profession; trial & cross-examination – reading and writing journals: formalizing thoughts – directed discussion – interview: example, people who have had computers enter their workplace; find out how people interact with computers – professor’s personal ethical code – differentiate between solid and poor logical arguments: based on facts, accurate, well-grounded, absence of contradictions – cases: sometimes one side is clearly “right”; sometimes “right” is not so clear

Developing process:

  • determine what is ethically correct – supererogatory – above and beyond the call of duty; the better thing to do; example: whistle-blowing

Difficult to teach… why?

  • not as factually based as programming; not as “skill oriented” – CS students are more oriented to specific, modularized tasks – software has consequences and is human related – there is not an assumed background of agreement in ethical matters – we are trying to change attitudes and behaviors – appropriation of the material is important: make it your own – don’t have inflated expectations for this course

Ethics is appropriate in all disciplines:

  • we need this to be a focus at the university level

SOME RELEVANT ISSUES FOR SPECIFIC COMPUTER SCIENCE SUBJECT AREAS

Operating Systems:

  • case studies about security issues – Clifford Stoll’s book, The Cuckoo’s Egg – acting out a part to dramatize perspectives of developers and users – user friendliness for operating systems to reduce stress

System Design:

  • human interface – Aegis system: remember the eventual environment when designing; remember the intended environment when using (management issue)
  • equal access for handicapped – assigned access to appropriate individuals

System Management:

  • access according to the need to know – policy issues: security, privacy

Ethics Course in a Philosophy Department:

  • Cornell worm

Group 4

Members: Karl Klee (CS), Ellen M. Lee (Business Admin., University of New Orleans), Peter Limper (Philosophy & Religion, Christian Brothers University), Illona Maruszak (Nursing, Southern Connecticut State University), Keith Miller (CS, William & Mary), W. Waldo Roth (CS, Taylor University), Roy (Philosophy).

Illona Maruszak moderated group 4, and her notes form the basis of this part of the report. The group produced a document based on the three questions “what?” “so what?” and “now what?” The group’s discussion had as a theme the development of a computer system for the use of nurses at a large hospital. This theme gave concrete examples of the computer ethics issues described.

WHAT?

The meaning of a “success” in this implementation:

  • nursing staff insisted on the power necessary to influence the choice of the system they would be using – the system developed met many of the users’ specific needs – the system “empowers” users (the nurses) and not others more traditionally associated with power at a hospital (doctors and administrators)

Problems:

  • the system required changes in the way nurses worked – unexpected limitations and difficulties arose when the system was first used – there arose the need to “sell” the system to some nurses; they resisted the conversion

Conclusions:

  • the ethical application of computing techniques in all fields should include awareness of and responsiveness to the needs, problems, concerns, and values of the ultimate users
  • ideally, this should entail user participation in the design process
  • in many cases, the concept of “informed consent” is helpful: users should “consent to” the technology; the technology should not be imposed on users in a paternalistic fashion
  • user participation often raises power issues similar to other power issues discussed at the NCCV; existing institutional power structures (within the user organization, between users and vendors, etc.) may make user participation difficult or impossible. However (as Judith Perrolle suggested in another context), the development of a computer system may be (and should be) “empowering” for the users.

SO WHAT?

  • If (as Deborah Johnson argues) computer systems “embody values,” it is important that those values reflect the real needs and concerns of the ultimate users. An awareness of this should be one outcome of the teaching of computing and values.
  • Although much lip service is paid to user participation (or “joint application development”), students may not have a clear sense of the ethical issues this raises.
  • Design problems typically begin with rather abstract, high-level goals. Students should be more aware of the need to focus on the more concrete experience of the end users.
  • Users tend to be non-technical people. Students need to learn patience and sensitivity to the concerns of non-specialists. Know when to answer questions that users may not know enough to ask!
  • Students need to be more aware of institutional power issues in design of computer systems. Computer professionals must be prepared to work within organizational constraints, but in some cases they must try to overcome these constraints to help empower the end users.
  • Many computer science students will become “organizational decision makers” (see new ACM ethics code proposal, section 3). It’s important that they be prepared to exercise their decision-making power with a concern for the needs and values of those using computers within their organizations.

NOW WHAT?

Some specific ideas for teaching computing and values:

  • Include discussion of such topics as concern for end-users, user participation in design, systems embodying values, and organizational power issues in system design.
  • Try to make use of case studies in considering the issues above.
  • Invite actual users to class to tell their stories.
  • Model a design situation in class with students playing the roles of users, administrators, and designers.
  • Beware of power issues in the classroom; be prepared to share some power with the students.
  • Encourage the development of students’ “people skills,” their ability to deal with non-specialists and with non-technical issues.

Joint Presentation to the Conference

The four subgroups of the teaching track displayed a wealth of perspectives, opinions, emphases, and approaches in their separate discussions. When the large group met, it was clear that no consensus opinions were likely to be hammered out in any reasonable length of time. The track coordinator resisted boiling down this rich mix of contention and learning-in-process into a linear 20-minute speech. A majority (albeit a slim one) of the track participants eventually agreed to try something different for the track’s presentation to the conference on Friday morning.

The “something different” required a list of issues and contentions culled from the discussions of the subgroups. Illona Maruszak and Penelope Karovsky compiled and edited this list, and served as moderators of the presentation to the conference. When discussing this list, we found that three themes recurred in each group’s discussions: pedagogy (How should we teach computer ethics?), philosophy (What is computer ethics?) and power (How do power relationships affect computer ethics?). A pair of people volunteered to represent each perspective:

Pedagogy: Judith Edgmand and Don Gotterbarn

Philosophy: Peter Limper and Jim Moor

Power: Keith Miller and Carolyne Tropper

During the presentation, the list of 20 issues was available to the conference attendees. The moderators read a selection of the issues, and each of the three perspectives could comment on each issue read. On some issues, each perspective had a comment; on others, only one or two perspectives commented. During the comments, Batya Friedman tracked their course on a graphic she created that portrayed the three perspectives and their intersections. Thus, as the comments proceeded, there was a visual presentation of how a particular issue was or was not significant to each perspective.

The list (given below) was available to the commentators on Thursday, but the comments given were a mixture of prepared statements and immediate responses. The handouts given to the conference attendees included the following information:

INTRODUCTION

The format of our presentation is based on three questions:

  1. What? Identify an aspect of teaching computing and values.
  2. So what? Explain why this aspect is important.
  3. Now what? Suggest concrete actions to improve this aspect.

As we reviewed notes from our subgroups’ brainstorming sessions, we found that issues seemed to fit under three broad headings that we call the three P’s: Pedagogy, Philosophy, and Power. Each perspective is represented by one of the three chairs you see on the stage. The moderators will present a particular what (some aspect of teaching computing and values), and that will invoke comment from the chairs concerning the so what and now what of that aspect.

LIST OF WHATS

  1. What are the values we are trying to teach?
  2. Technology may be a vehicle for positive social change and empowerment.
  3. There is a need for teaching materials, for intellectual tools concerned with ethics and values in the field of computer science.
  4. How do we cope with the fact that we can do more technologically than we can manage as human beings?
  5. Introduce students to moral theory, but be careful of two pitfalls: paralysis of analysis and continued debate without convergence.
  6. Does an ethicist have any control over the computer science curriculum?
  7. Should ethics be integrated into existing computer science curricula, taught separately, or both?
  8. Is computer ethics different from business ethics or engineering ethics?
  9. Technology requires us to question traditional definitions of terms.
  10. How do political and economic forces influence development and use of technologies?
  11. How could we involve users in the design, selection, and implementation of computers and software?
  12. Is it the responsibility of the computer scientist to educate individuals using the products of computer technology?
  13. “Don’t kill the poets.” We need to distinguish between which problems are and are not technological.
  14. How do we support an ethical environment that empowers faculty and students?
  15. Dilemma of unenforceable rules regarding computer use.
  16. Should the user have the opportunity to give informed consent related to computer use?
  17. Effective ways to introduce topics on ethics and values include: scenarios, role playing, visiting speaker implementing a system.
  18. If a computer science curriculum in ethics and values is more rigorous, will student involvement decrease?
  19. How much of the following content should be presented to computer science majors: participatory management, change process and leadership?
  20. It is important to remember that computers and software embody values.

Coordinator’s Summary

I am sure that the presentation given to the conference on Friday morning was unique in style, and I hope that it was effective in raising some issues that were important to the participants in our track. I thought it was fitting that the members of the teaching track attempted to communicate in a non-traditional manner to the rest of the conference. The content of computing and values is complex and many-layered; the power relationships in computing and the power relationships in academia are complicating factors in teaching about computing and values; and the methods necessary to communicate about these issues in the classroom are still exploratory. Faced with all of these challenges, I maintain that people interested in teaching about computing and values must exploit their creativity and be willing to take risks.

Having advocated creativity and risk, I will now appeal to structure to avoid the risk of sacrificing content to style. The ideas and work of the participants in the teaching track deserve, in my opinion, careful consideration by people interested in teaching computing and values. In our group presentation, we tried to communicate ideas using a method at least designed to engage and provoke people directly. In this written report, I will try to organize the ideas in a more traditional essay form:

  • Introduction

In his charge to the track coordinators, Terry Bynum, Co-Chair of the NCCV Conference, asked us several questions that we could dwell on when summarizing our group efforts:

  1. How can we identify major issues and problems in the area of teaching computing and values?
  2. What can the Research Center on Computing & Society do in the future to encourage progress in the teaching of computing and values? What do you need and how can the center help?
  3. The NSF funded the NCCV. What other projects might be appropriate for NSF funding in the area of teaching computing and values?

In the essay that follows, we’ll describe several issues identified in group discussions, and suggest answers to the three questions above.

  • Teaching Computing and Values: Three Themes and Many Challenges

The teaching track participants were adept at discovering thorny problems. These problems are not only numerous, they are interdependent and often ill-defined. In trying to understand these problems better, we tried to discern themes that recurred in our discussions, meta-problems that appeared in different forms in many specific issues. Three such themes that seemed particularly significant were pedagogy, philosophy, and power. These themes surface often in the discussions below.

A. What Should Be Taught about Computing and Values?

Before we can determine the best way to teach computing and values, we must agree on what should be taught. Many terms arise in discussing what should be taught: classical ethics and computer ethics, computers and society, computer crime and legal issues, professionalism and professional codes; each of these has some claim to legitimate inclusion. At present, most educators are not convinced that anyone has found an ideal mix of topics or a single best approach to computing and values. It’s unlikely that universal agreement will emerge, but it’s disturbing that the questions are still being discussed at a fairly superficial level.

Many computer science educators do not include computing and values in their individual courses or in their overall curriculum. Part of that reluctance can be traced to a confusion, even among those well versed in the field, about what computing and values includes. The resulting inactivity in this area sends a strong negative message to our students: the human values involved with computing are not important enough to include in our programs.

Surely this is not the message we, the attendees at NCCV, want to send. Such a message encourages computer scientists (and computer science students) to think of themselves as technocrats without responsibility to other people. Such a message suggests that it is always someone higher up in an organization who determines what is right or wrong – we are merely implementing their plan.

The message of inactivity is in one sense easy: we need not wrestle with the many difficulties in teaching computing and values. But ignoring values is difficult to justify when we consider our responsibilities as professionals, citizens, and teachers. Thus it is important that we try to give more satisfying answers to the fundamental question: what is valuable in computing and values, so valuable that it MUST be included in any reputable computing curriculum?

The teaching track participants did not uncover an easy answer to the question of what should be taught, but they did discuss several avenues to explore. First, the difficult theoretical questions about computing and values require a disciplined, scholarly exploration. We will not be able to identify the most important questions until we have explored them more deeply than we have thus far. In order to encourage this scholarship, we must support scholars in this area. For example, computer science departments must recognize the importance of teaching in general and teaching computing and values in particular. This includes expanding the boundaries of acceptable research, and recognizing at tenure time that computing and ethics is a legitimate academic pursuit. This may require a campus-wide rethinking of educational issues. Recent calls for a return to an emphasis on teaching in universities may signal that at least some are ready for such a rethinking.

To deserve more academic recognition, current researchers in computing and values must raise the scholarly level of work in this area. Researchers can initiate joint projects with scholars in other disciplines, and make concerted efforts to publish in reputable journals. As the level and quantity of scholarship rise, a journal devoted to this field could be initiated.

As we struggle with the question of what to teach about computing and values, we must include non-academics in our deliberations. End users of computing systems, employers, and government officials all have a stake in the professionalism of our students. They may offer new ideas and new resources in the teaching of computing and values. If students are to be sensitive to human values, they must be more aware of different perspectives on computing. We, the teachers, must be equally aware of these concerns.

The Research Center and the NSF can help support the scholarly activity necessary to make progress on this question. Certainly NCCV, a cooperative effort of the Center and NSF, has stimulated thinking and research about this question. The NSF could increase the resources available for individual and group research efforts in this area. As results are published, the Center can keep us informed, and can challenge us with new and deeper questions that result from the research.

B. What Resources Are Available for Teaching Computing and Values?

The track materials include many suggestions for teaching resources, and the group discussions include several more. Several efforts are beginning to facilitate the sharing of resources. The Center’s huge effort to produce and disseminate materials from this conference is an excellent example of support for teachers of computing and values. Continued effort in this area, with NSF support, could be a major force in promoting the study and development of this area.

There are several other resources mentioned in the group notes above: all the books in the NCCV bibliography; science fiction literature, especially the cyberspace emphasis; SIGCAS and its publications; CPSR and its publications; comp.risks and SE Notes; ETHICS-L e-mail list; CACM self-assessment; IEEE Annals of the History of Computing; special issues and single articles in the computer science research literature (the January 1992 issue of the Journal of Systems and Software is devoted to computer ethics); video tapes of news reports and analyses of current computer issues; and so on. The Center can facilitate sharing of these resources by acting as a clearinghouse for the information or for pointers on how to obtain them from elsewhere.

Peter Danielson, a participant in the teaching track, volunteered the following announcement regarding resource sharing:

The Centre for Applied Ethics at the University of British Columbia has a special interest in business and professional ethics, including ethics and computing technology. Two current projects may be of special interest to those teaching computing ethics.

The Canadian Business and Professional Ethics Network (CBPENet) is a three year research project funded by an Applied Ethics Strategic Grant from the Social Science and Humanities Research Council of Canada. The network is intended to facilitate research in the field by providing computer assisted communication for researchers and research users and the collection, storage and dissemination of materials, including computing ethics. Those interested should contact the moderator, Louis Marinoff, at the Centre for Applied Ethics (CBPENET@unixg.ubc.ca).

Computer Ethics through Thick & Thin is a three year research project funded by an Applied Ethics Strategic Grant from the Social Science and Humanities Research Council of Canada. The project will investigate the ethical potential of computer mediated communication by creating two virtual colloquia that differ in the information available to their members. The Thick group will know each other only as continuing pseudonyms; the Thin group will be able to access whatever information its members will contribute. The two colloquia are based on e-mail in order to encourage wide membership. The groups will discuss ethical issues raised by computer technology, such as privacy and ownership and control. For a description of the project and information about how to join either group, please contact Peter Danielson at danielsn@unixg.ubc.ca

For more information about the centre and its projects, please contact:

Centre for Applied Ethics University of British Columbia
1866 Main Mall, E-16
Vancouver, B.C. Canada V6T 1Z1
Phone: (604) 822 5139
FAX: (604) 822 8627
E-mail: ethics@unixg.ubc.ca

C. What Curriculum Adjustment Is Best for Teaching Computing and Values?

This question received a great deal of attention in our track. There were arguments for the following kinds of organization: a computer science course devoted to computing and values (or some variation thereof), modules or single lectures integrated into a course or courses in a traditional curriculum, a capstone course combining software engineering and computer ethics, and encouraging other departments (e.g., philosophy and sociology) to offer courses in this area. Most participants agree that ALL these organizations could be useful, and that none excludes the others. Since the teaching of computing and values is in a nascent stage, I think that efforts should continue in all these different methods, and that we should share our positive and negative experiences as we develop expertise in them.

The Center and NSF can encourage the development in one or more of these alternative organizations by funding pilot projects in teaching computing and values. Teachers with experience in one method could be invited to write proposals to develop materials for others to use. These materials could include detailed course outlines, overhead transparencies, case studies, test materials, video taped lectures, video interviews with the principals in either factual or fictional cases, textbooks or experimental book chapters, and so on.

The Center can also cooperate in the ongoing effort of CPSR in collecting, editing, and distributing course syllabi in the area of computing and values. This effort, along with electronic mail sharing, has been significant in popularizing many useful curriculum materials in the past few years.

D. How Does Computer Ethics Differ from Traditional Ethics?

The tension between the study of philosophical ethics and ethics applied to computing generated lively discussion in the teaching track. Computer science teachers are worried about “the paralysis of analysis” and other pitfalls traditionally associated with theoretical ethics. On the other hand, many computer science professionals recognize the need for increased sophistication in reasoning through problems associated with complex technologies and with increasingly critical human interactions with computing.

The participants in the track came to no consensus on this issue, but I will venture some personal observations. In order to do a credible job in teaching (and research) about computing and values, we cannot ignore the huge body of scholarship that exists in ethics. We need to become familiar with the existing literature, and we need to gain the critical skills necessary to do ethical analysis. However, we cannot become so enamored with the dilemmas posed by theoretical ethics that we lose our effectiveness in applying the ethics. (Don Gotterbarn made this point several times at NCCV, and he convinced me of its importance.) I think we should look to other fields of applied, professional ethics as we attempt to tread the fine line between over-analysis on the one hand and shallow analysis on the other. I think the subgroup #2 report said it well: “computer science should have control, although others may be involved.”

I have mentioned ethics specifically because that is the discipline most often discussed in our track. However, a similar question can be asked of other disciplines which have something to say about computing and values: sociology and law are two examples which were prominent at NCCV. How can computer scientists assimilate important ideas from all these areas yet still maintain credibility as computer scientists? Similarly, how can researchers in philosophy, sociology, and law gain sufficient knowledge about computing to make meaningful contributions in the area of computing and values? Again, I cannot speak for the group but instead must make personal judgments. I think NCCV itself is a prime example of how scholars can efficiently explore other disciplines: in cooperation with scholars from other areas. We need to foster interdisciplinary exchanges and collaborations.

The Center and the NSF can support this collaboration financially, by sponsoring conferences and workshops and by funding research projects that explicitly require researchers from more than one discipline. The Center can also encourage this work by initiating such projects, suggesting interesting collaborations between scholars the Center has identified in different disciplines. With the pervasiveness of e-mail, such collaborations can take place at a distance with a minimum of inconvenience. If the Center organizes another interdisciplinary conference, such scholarly duets could add a new dimension: pairs of scholars could deliver a single collaborative presentation as a keynote.

E. How Do We Strike a Balance Between Emphasizing Personal Responsibility and Awareness of Societal, Organizational Aspects of Computing and Values?

Two of the keynote speakers (Judith Perrolle and Gary Chapman) spoke at length about the power relationships involved in computing and ethics, and about the societal role in promoting or discouraging ethical behavior in the area of technology. These issues, not surprisingly, became an important part of our track discussions as well. Again, the only consensus I discerned was that both emphases (personal as well as societal and organizational responsibilities) were important. Their relative importance in a teaching situation was controversial. My suspicion is that advocates for both positions learned more about the opposing position during the discussions.

F. Revival, the Ethical Computer, and Juggling

In his keynote speech, Terry Winograd discussed the question of whether computer ethics did or did not fit the three models: “revival, the ethical computer, and juggling.” It struck me as interesting to look for ways in which these three models did capture some of the content of the teaching track discussions.

  • Revival:

A major benefit of the group discussions was a sense that others were committed to the teaching of computing and ethics; many of us feel isolated from our local colleagues on this subject, and it was encouraging to hear so many other people speak with conviction and intelligence about it. NCCV served as a revival as well as a resource, and I think this was important to many teaching track participants.

  • The Ethical Computer:

It’s true that many issues concerning computing and values are complex and ambiguous. But members of the teaching track also discovered that many other situations ARE fairly clear and amenable to reasonable solution when computing professionals and students are sensitive to the values dimension of their work. Thus, the existence of ethical dilemmas that will not yield to any single systematic approach should not discourage us from attempting to organize and perhaps formalize techniques for exploring possible solutions to such dilemmas; a blanket bias against “algorithms for ethics” may be premature. No, machines cannot think about human values for us, but we may be able to use computers to help organize our thoughts and retrieve relevant information about computing and values.

Furthermore, with commitment to human values and with technical ingenuity, there may be technical solutions to thorny situations. Again, the computer is not ethical, but computers may be part of a solution to an ethical problem. For example, the empowerment of nurses by a computer system that reinforced their professionalism is a case where computers supported human values.

  • Juggling:

An important part of the juggling analogy is the aspect of public performance. Certainly teaching requires attention to this aspect, and the track participants shared many ideas for more effective presentations. We even tried some of these out. In addition, juggling cooperatively seems appropriate to teaching computing and values, since we need to cooperate with our students (even more than when we are teaching programming), other computer science faculty (who should support the emphasis on values in their courses), and with researchers in other disciplines (with whom we must share).

In summary: The teaching track, like the NCCV as a whole, benefited greatly from the diversity of the participants and the intensity of the week-long conference. The Research Center on Computing & Society and the National Science Foundation are to be commended for enabling us to learn from each other. We hope they will continue their efforts to promote the teaching of computing and values.

The College of William and Mary