On the Emerging Global Information Ethics

What is the cultural and historical significance of information technology?

Anonymity on the Internet and Ethical Accountability

Terrell Ward Bynum

[This brief position paper summarizes a presentation by the author at the Conference on Anonymity on the Internet organized by the American Association for the Advancement of Science in autumn 1997 (funded by the National Science Foundation).]

In 1994 Helen Nissenbaum published an article in Communications of the ACM entitled “Computing and Accountability.” In that article Nissenbaum spelled out important relationships among the concepts of responsibility, blame, and accountability. She also made a strong case for the view that accountability “encourages diligent, responsible practices” and provides “the foundation for just punishment as well as compensation for victims.” Nissenbaum noted:

Responsibility and blameworthiness are only a part of what is covered when we apply the robust and intuitive notion of accountability…. When we say someone is accountable for a harm, we may also mean that he or she is liable to punishment (e.g., must pay a fine, be censured by a professional organization, go to jail), or is liable to compensate a victim (usually by paying damages). In most actual cases these different strands of responsibility, censure, and compensation converge because those who are to blame for harms are usually those who must “pay” in some way or other for them.

In that same article, Nissenbaum identified four “barriers to accountability” associated with current computing practices. These include (1) the problem of “many hands” in which a wide variety of individuals and institutions can be involved in the design and creation of a computer system, (2) the ease with which people accept so-called “bugs” in a computer program as unavoidable, (3) the tendency to treat computers as “scapegoats” for a variety of errors, and (4) the desire to own a computer program without accepting liability for it.

I would like to suggest that yet another “barrier to accountability” exists; namely, anonymity on the Internet. If a person gets onto the Net and anonymously engages in some kind of harmful activity (e.g., malicious hacking; defamation of character; extensive “spamming”; dissemination of computer viruses; industrial espionage, etc.), it may not be possible to hold such a person accountable. We therefore would be unable to assess blame, enforce the law, prevent repetitions, secure compensation, or gain other benefits of accountability.

So, given the many social and ethical benefits of accountability, one might be tempted simply to argue that anonymity on the Internet should be banned – that the identity of anyone on the Net should always be immediately available wherever he or she goes in cyberspace.

This extreme view, though, seriously conflicts with privacy; and there are many circumstances in which one wants privacy to be preserved – for example, engaging in discussions of “sensitive” topics such as HIV, abortion, gay lifestyles, breast cancer, politically unpopular topics, etc. Even when we are simply “surfing” the Web – browsing topics of interest, shopping in cyberspace, reading the news, doing research, job hunting, whatever – we want to preserve our privacy. For if we could be tracked wherever we go in cyberspace, and a record were to be kept of how long we stayed at each site, what topics and organizations we spent time with, what we purchased, how much money we spent, and so forth and so on, a revealing and privacy-invading profile could be created on each of us.
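To make concrete how such a profile might arise, here is a minimal illustrative sketch (my own, not part of the original paper; the sites, topics, and field names are invented) showing how a hypothetical clickstream log – site visited, topic, time spent, money spent – can be aggregated into exactly the kind of revealing summary described above:

    from collections import defaultdict

    # Hypothetical clickstream records: (site, topic, seconds_spent, dollars_spent).
    # All entries are invented purely for illustration.
    clickstream = [
        ("health-forum.example", "HIV support", 1800, 0.00),
        ("bookshop.example", "unpopular politics", 600, 42.50),
        ("jobs.example", "job hunting", 1200, 0.00),
        ("health-forum.example", "HIV support", 900, 0.00),
    ]

    def build_profile(records):
        """Aggregate raw tracking records into a per-topic profile."""
        profile = defaultdict(lambda: {"visits": 0, "seconds": 0, "spent": 0.0})
        for site, topic, seconds, dollars in records:
            profile[topic]["visits"] += 1
            profile[topic]["seconds"] += seconds
            profile[topic]["spent"] += dollars
        return dict(profile)

    if __name__ == "__main__":
        for topic, stats in build_profile(clickstream).items():
            print(topic, stats)
        # Even this toy summary exposes sensitive interests, reading habits and
        # spending -- the "revealing and privacy-invading profile" at issue.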

It is clear, then, that there are circumstances in which it would be socially beneficial to preserve anonymity on the Internet, as well as circumstances in which anonymity on the Net would be harmful and undesirable. How do we know which case is which? And, more importantly, how do we secure privacy and anonymity when they are desirable, while nevertheless preserving accountability when that is needed? There is an urgent need to answer these questions.

Personally, I would start my search for answers by assuming that privacy is the default – that privacy is to be preserved wherever possible and balanced against other human values like justice and security. And clearly, to the extent that anonymity is necessary to privacy, it should be preserved in those cases where privacy should take precedence.

In our search for answers, one helpful way to deal with circumstances in cyberspace is to argue by analogy using similar circumstances that are not in cyberspace. This method is often very productive because aspects of “ordinary” circumstances frequently carry over into cyberspace with similar results. For example, in non-cyberspace contexts, we regularly sacrifice some of our privacy by exchanging it for confidentiality. Thus, in order to get the benefits of medical doctors, psychologists, accountants, religious counselors, etc., we share private information with them on condition that they keep it confidential. Although, strictly speaking, privacy is lost in such a case, many of the benefits of privacy are preserved through confidentiality. In addition, in a case like this, if it suddenly becomes necessary to hold someone accountable, the confidential information can be disclosed in ethically defensible ways to appropriate people.

Similarly, the idea of having “trusted third parties” in cyberspace – agents to whom one entrusts private information on condition that it be held in confidence – can be used to gain the benefits of privacy and anonymity in cyberspace, while at the same time preserving the possibility of holding people accountable in extraordinary circumstances. For example, when an individual wishes to make a business transaction in cyberspace, he or she could do so through a trusted third party, who pledges not to reveal to others any information about the customer or the transaction. In extraordinary circumstances, though, confidential information could be revealed to the proper persons in an ethically defensible manner. Perhaps many different activities on the Net could be “confidentialized” through a trusted third party – or through various kinds of clever software – thereby preserving the benefits of anonymity, while retaining accountability under extraordinary circumstances.
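As a rough illustration of the arrangement just described (a sketch of my own, not drawn from the paper; the class and method names are hypothetical), a trusted third party might issue each customer a pseudonym for ordinary transactions and disclose the real identity only when a proper extraordinary-circumstances authorization is presented:

    import secrets

    class TrustedThirdParty:
        """Toy identity-escrow broker: pseudonymous by default, accountable on demand."""

        def __init__(self):
            self._identities = {}  # pseudonym -> real identity, held in confidence

        def register(self, real_identity):
            """Issue a pseudonym; the mapping stays with the broker alone."""
            pseudonym = "user-" + secrets.token_hex(4)
            self._identities[pseudonym] = real_identity
            return pseudonym

        def relay_transaction(self, pseudonym, merchant, item):
            """Pass an order to the merchant without revealing who the customer is."""
            if pseudonym not in self._identities:
                raise KeyError("unknown pseudonym")
            return {"merchant": merchant, "item": item, "customer": pseudonym}

        def disclose(self, pseudonym, authorized):
            """Reveal the real identity only under a valid extraordinary-circumstances
            authorization (e.g., a court order); otherwise refuse."""
            if not authorized:
                raise PermissionError("disclosure refused: no valid authorization")
            return self._identities[pseudonym]

    if __name__ == "__main__":
        ttp = TrustedThirdParty()
        alias = ttp.register("Jane Q. Customer")
        print(ttp.relay_transaction(alias, "books.example", "a novel"))  # merchant never learns the name
        print(ttp.disclose(alias, authorized=True))  # accountability when genuinely warranted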

In the past, society has developed many ways to permit anonymity under normal conditions, but at the same time make it possible to “unanonymize” a person in extraordinary circumstances. Thus, in non-cyberspace contexts there are many times when a person is effectively anonymous – walking down a crowded street, shopping in a store and paying with cash, going to the movies, fishing in a stream, etc. Just as society has developed various ways to identify “anonymous” agents when the need arises (e.g., eyewitnesses, fingerprints, opening sealed records, etc.), perhaps it will be possible to devise reliable ways to “unanonymize” someone in cyberspace who normally could go on his way unnoticed. What are the cyberspace equivalents of eyewitnesses, fingerprints, unsealed records, etc.?

Ethics in the Information Age

Terrell Ward Bynum
Southern Connecticut State University

As we stand on the verge of the information age, the social and ethical implications of information and communication technology (ICT) are enormous – and mostly unknown! ICT is developing so rapidly that new possibilities emerge before the social consequences can be fathomed (Rogerson and Bynum 1995). New social/ethical policies for the information age, therefore, are urgently needed to fill rapidly multiplying “policy vacuums” (Moor 1985). But filling such vacuums is a complex social process that will require active participation of individuals, organizations, and governments – and ultimately the world community (See Bynum and Schubert 1998, also van den Hoven 1996).

Globalization – Górniak (1996) has perceptively pointed out that ICT makes possible – for the first time in history – a genuinely global conversation about ethics and human values. Such a conversation has implications for social policy that we can only begin to imagine. Traditional borders and barriers between countries have now become less meaningful because most countries are interconnected by the Internet. For this reason, individuals, companies and organizations in every culture can engage in global business transactions, distance education, cyber-employment, discussions of social and political issues, and the sharing and debating of values and perspectives. Will this “global conversation” bring about better understanding between peoples and cultures? – new shared values and goals? – new national and international laws and policies? Will individual cultures become “diluted,” homogenized, blurred? These are just a few of the many social/ethical issues emerging from globalization of ICT. (See Bynum and Rogerson 1996.)

The worldwide nature of the Internet has already led to many issues in need of policies to resolve them. For example, if sexually explicit materials are provided on a web site in a culture in which they are permitted, and then they are accessed by someone in a different culture where such materials are outlawed as “obscene,” whose laws and values apply? Should the “offending” person in the first culture be extradited to the second culture and prosecuted as a purveyor of pornography? Should the values of the first culture be permitted to undermine those of the second culture via the Internet? How can such cultural clashes be reasonably resolved?

Or consider business transactions in cyberspace: Whose laws apply to business on the Internet? When people in one country purchase goods and services from merchants in another country, who should regulate or tax the transactions? And how will “cyberbusiness” in a global market affect local business? – local tax collections? – local unemployment? What new laws, regulations, rules, practices should be adopted, and who should formulate them? What policies would be fair to all concerned?

And how will global cyberbusiness affect the gap between rich nations and poor nations? Will that gap get even worse? Will ICT lead to a “new colonialism” in which the information rich lord it over the information poor? Will economic and political rivalries emerge to threaten peace and security? What kinds of conflicts and misunderstandings might arise, and how should they be handled? – and by whom?

Or consider cyber medicine: Medical advice and psychological counseling on the Internet, “keyhole” surgery conducted at a distance, medical tests and examinations over the net, “cyber prescriptions” for medicine written by doctors in one part of the world for patients in other parts of the world – these are just a few of the medical services and activities that already exist in cyberspace. How safe is cyber medicine? Who should regulate, license, control it?

Or consider education in cyberspace: Hundreds of universities and colleges worldwide now offer educational credit for courses and modules delivered over the Internet. But when students earn university credits from all around the globe, who should set the standards? Who should award degrees and certify “graduates”? Will there be a “Cyber University of the World”? Will thousands of “ordinary” teachers be replaced by a handful of “Internet-superstar teachers”? – or perhaps by teams of multimedia experts? – or even by educational software? Would such developments be wonderful new learning opportunities, or instead be educational disasters? What policies, rules, practices should be adopted, and who should develop them?

At the social/political level of education, what will be the impact upon previously uneducated peoples of the world when they suddenly gain access to libraries, museums, newspapers, and other sources of knowledge? How will access to the world’s great newspapers affect “closed” societies with no free press? Are democracy and human rights necessary consequences of an educated population with access to a free press? Will the Internet foster global democracy? – or will it become a tool for control and manipulation of the masses by a handful of powerful governments? – or powerful corporations?

Human Relationships – Of course, not all social/ethical issues which arise from ICT depend upon its global scope. Consider, for example, the impact of ICT on human relationships. How will family relationships or friendships be affected by mobile phones, palmtop and laptop computers, telecommuting to work and school, virtual-reality conferencing, cybersex? Will the efficiency and convenience of ICT lead to shorter work hours and more “quality time” with the family? – or will ICT create instead a more hectic and breathless lifestyle which separates family and friends from each other? Will people be isolated in front of a computer hour after hour, or will they find new friendships and relationships in “virtual communities” in cyberspace – relationships based upon interactions that never could occur in regular space-time settings? How fulfilling and “genuine” can such relationships be, and will they crowd out better, more satisfying face-to-face relationships? What does all this mean for a person’s self-realization and satisfaction with life? What policies, laws, rules, practices should be put in place?

Social Justice – As more and more of society’s activities and opportunities enter cyberspace – business opportunities, educational opportunities, medical services, employment, leisure-time activities, and on and on – it will become harder and harder for ICT “have-nots” to share in the benefits and opportunities of society. Persons without an “electronic identity” may have no socially recognized identity at all! Therefore social justice (not to mention economic prosperity) requires that society develop policies and practices to more fully include people who, in the past, have had limited access to ICT resources: women, the poor, the old, persons of color, rural residents, persons with disabilities, even technophobes.

A good example is “assistive technology” for persons with disabilities. A variety of hardware and software has been developed in recent years to enable persons with disabilities to use ICT easily and effectively. As a result, people who would otherwise be utterly dependent upon others for almost everything suddenly find their lives transformed into happy, productive, “near-normal” ones. Visual impairments and blindness, hearing impairments and deafness, inability to control one’s limbs, even near-total paralysis need no longer be major impediments to happiness and productivity. Given such dramatic benefits of assistive technology, as well as rapidly decreasing costs, does a just society have an ethical obligation to provide assistive technology to its citizens with disabilities?

Work – Work and the workplace are being dramatically transformed by ICT. More flexibility and choice are possible (e.g., teleworking at home, on the road, at any hour or location). In addition, new kinds of jobs and job opportunities are being created (e.g., webmasters, data miners, cybercounselors, and so on). But such benefits and opportunities are accompanied by risks and problems, such as unemployment when computers replace human workers, “deskilling” of workers who merely push buttons, the stress of keeping up with high-speed machines, repetitive motion injuries, magnetism and radiation from computer hardware, surveillance of workers by monitoring software, and ICT “sweatshops” that pay “slave wages.” A wide range of new laws, regulations, rules and practices is needed if society is to manage these workplace developments efficiently and justly.

Government and Democracy – ICT has the potential to significantly change the relationship between individual citizens and governments – local, regional and national. Electronic voting and referenda, as well as e-mailed messages to legislators and ministers, could give citizens more timely input into government decisions and law making. Optimists point out that ICT, appropriately used, can enable better citizen participation in democratic processes – can make government more open and accountable – can provide easy citizen access to government information, reports, services, plans and proposed legislation. Pessimists, on the other hand, worry that government officials who are regularly bombarded with e-mail from angry voters might be easily swayed by short-term swings in public mood – that hackers could disrupt or corrupt electronic election processes – that dictatorial governments might find ways to use ICT to control and intimidate the population more effectively than ever before. What policies should be put in place to take account of these hopes and worries?

Intellectual Property and Ownership – In the information age, the “information rich” will run the world, and the “information poor” will be poor indeed! Possession and control of information will be the keys to wealth, power and success. Those who own and control the information infrastructure will be amongst the wealthiest and most powerful of all. And those who own digitized intellectual property – software, databases, music, video, literary works, educational resources – will possess major economic assets. But digitized information is easily copied and altered, easily transferred across borders, and therefore the piracy of intellectual property will be a major social problem. Even today, for example, in some countries over ninety percent of the software is pirated – not to mention the music and video resources! What new laws, regulations, rules, international agreements and practices would be fair and just, and who should formulate or enforce them?

It is also possible to mix and combine several types of digitized resources to create “multimedia” works of various kinds. A single program, for example, might make use of bits and snippets of photographs, video clips, sound bites, graphic art, newsprint and excerpts from various literary works. How large must a component of such a work be before the user must pay copyright royalties? Must the creator of a multimedia work identify thousands of copyright holders and pay thousands of copyright fees in order to be allowed to create and disseminate his work? What should the rules be and who should enforce them? How can they be enforced at all on the new frontiers of cyberspace?

Concluding Comment – The above paragraphs identify only a small fraction of the social and ethical issues that ICT will generate in the coming information age. The vast majority of such issues are still unknown, and they will only come into view when the powerful and flexible new technology of ICT generates them. It is the goal of computer ethics to identify and analyze the policy vacuums and help to formulate new social/ethical policies to resolve them.

* A multimedia version of this paper was presented at De Montfort University in Leicester, UK in May 1998 at the Research Seminar “Living and Working in the Information Age” hosted by the Centre for Computing and Social Responsibility.

References

Terrell Ward Bynum and Simon Rogerson, eds., Global Information Ethics, Opragon Press, 1996 (published as the April 1996 issue of Science and Engineering Ethics).

Terrell Ward Bynum and Petra Schubert, “How to Do Computer Ethics – A Case Study: The Electronic Mall Bodensee” in M. J. van den Hoven, ed., Computer Ethics – Philosophical Enquiry, Erasmus University Press, 1998, pp. 85 – 95.

Krystyna Górniak, “The Computer Revolution and the Problem of Global Ethics” in Bynum and Rogerson, 1996, pp. 177 – 190.

James H. Moor, “What Is Computer Ethics?” in Terrell Ward Bynum, ed., Computers and Ethics, Blackwell, 1985, pp. 266 – 75 (published as the October 1985 issue of the journal Metaphilosophy).

Simon Rogerson and Terrell Ward Bynum, “Cyberspace: the Ethical Frontier” in The Higher Education Supplement to the London Times, June 9, 1995.

M. J. van den Hoven, “Computer Ethics and Moral Methodology” in Porfirio Barroso, Simon Rogerson and Terrell Ward Bynum, Eds., Values and Social Responsibilities of Computer Science, Proceedings of ETHICOMP96, Complutense University Press, 1996, pp. 444 – 453. (Republished in Metaphilosophy, July 1997, Vol. 28, No. 3)

The Computer Revolution and Global Ethics

Krystyna Górniak-Kocikowska
Southern Connecticut State University, USA

This paper is based upon my view of the nature of the Computer Revolution that is currently transforming the world:

  1. The Computer Revolution causes profound changes in peoples’ lives worldwide. In cyberspace, there are no borders in the traditional sense. The borders, as well as the links between individuals worldwide, will be increasingly defined in terms of the degree of an individual’s ability to penetrate cyberspace.
  2. Because of the global character of cyberspace, problems connected with or caused by computer technology have actually or potentially a global character. This includes ethical problems. Hence, computer ethics has to be regarded as global ethics.
  3. Up to the present stage of the evolution of humankind, there has been no successful attempt to create a universal ethic of a global character. Traditional ethical systems based on religious beliefs were never more powerful than the religions they were associated with, and no religion has dominated the globe, no matter how widespread its influence. The ethical systems that were not supported by religion had an even more restricted influence.
  4. The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire globe. It will also be global in the sense that it will address the totality of human actions and relations.
  5. The future global ethic will be a computer ethic because it will be caused by the Computer Revolution and it will serve the humanity of a Computer Era. Therefore, the definition of computer ethics ought to be wider than that proposed, for example, by James Moor in his classic paper, “What Is Computer Ethics?” (Moor, 1985). If this is the case, computer ethics should be regarded as one of the most important fields of philosophical investigation.

The Computer Revolution

In his presentation of the anatomy of the Computer Revolution, Moor (see Moor, 1985) uses an analogy with the Industrial Revolution in England. He notes that the first stage of the Industrial Revolution took place during the second half of the Eighteenth Century, and the second stage during the Nineteenth Century. This is a span of about 150 years. Let me compare this with what happened after the printing press was invented in Europe. (Of course, books were printed in China already around the year 600 CE.)(2)

Gutenberg printed the “Constance Mass Book” in 1450, and in 1474 William Caxton printed the first book in the English language. Already in 1492 “the profession of book publishers emerges, consisting of the three pursuits of type founder, printer and bookseller.” (Grun, 1982) This was, roughly speaking, forty years after the invention of the printing press, the same amount of time Moor says the Computer Revolution needed for its introduction stage. In 1563, the first printing presses were used in Russia. (This was the same year in which the term “Puritan” was first used in England, one year before the horse-drawn coach was introduced in England from Holland, and two years before pencils started to be manufactured in England.) And in 1639, the same year in which the English settled at Madras, two years after English traders were established in Canton and the Dutch expelled the Portuguese from the Gold Coast, the first printing press was installed in North America, at Cambridge, Massachusetts. This is about 140 years from the first publication of printed text by Johann Gutenberg – almost the same amount of time Moor considers for both stages of the Industrial Revolution.(3)

Another point made by Moor in “What is Computer Ethics?” is just how revolutionary the computer is. He argues that logical malleability makes the computer a truly revolutionary machine – computers can be used to do almost any task that can be broken down into simple steps. Moor challenges the “popular conception of computers in which computers are understood as number crunchers, i.e., essentially as numerical devices.” (p. 269) He further writes:

The arithmetic interpretation is certainly a correct one, but it is only one among many interpretations. Logical malleability has both a syntactic and a semantic dimension…. Computers manipulate symbols but they don’t care what the symbols represent. Thus, there is no ontological basis for giving preference to numerical applications over non-numerical applications. (p. 270)
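A tiny sketch (mine, not Moor's) can make the point about logical malleability concrete: the very same procedure below operates step by step on numerals and on non-numerical symbols alike, because the machine manipulates symbols without regard to what they represent.

    def apply_rule(symbols, rule):
        """Apply a simple substitution rule to a sequence of symbols, one step at a time.
        The procedure is indifferent to whether the symbols denote numbers or anything else."""
        return [rule.get(s, s) for s in symbols]

    # A "numerical" use: treat the symbols as digits and increment each one.
    print(apply_rule(["1", "2", "3"], {"1": "2", "2": "3", "3": "4"}))   # ['2', '3', '4']

    # A non-numerical use of the very same procedure: shift letters of the alphabet.
    print(apply_rule(["a", "b", "c"], {"a": "b", "b": "c", "c": "d"}))   # ['b', 'c', 'd']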

Here, too, the similarity between the computer and the printing press seems evident. Like the printing press, computers serve to transmit thoughts. The appearance of the printing press meant both a technological revolution and a revolution in the transport of ideas – in communication between human minds. The same can be said of the computer.

On the other hand, the function of the most important machines invented at the end of the Eighteenth Century – the steam engine and the spinning machine – was the replacement of manual labor. But the primary function of the printing press, and of the computer as well, is that both dramatically increase the efficiency of the labor of human minds – and not only of the individual mind. Computers, like the printing press, allow human minds to work faster and more efficiently because of their groundbreaking impact on the communication and exchange of ideas. Like the printing press, they are creating a new type of network between human individuals, a community that exists despite the spatial separation of its members.

I have written elsewhere about the impact of the printing press on the western hemisphere. (Górniak, 1986) Here, I would like to mention only two of the many changes caused by the invention of movable type. Mass-production of texts, and hence their growing accessibility, made reading and writing skills useful and caused a profound change in the very idea of education. Gradually, the ability to read and write became an indispensable condition of a human being’s effectiveness in functioning in the world.

Printed texts also made it possible to acquire knowledge individually (i.e., not through oral public presentation) and freely (i.e., without control of either the individual tutor or the owner of the collection of manuscripts). One of the results of this situation was the loss of belief that knowledge means possession of a mystery, a secret wisdom, inaccessible to outsiders. Knowledge became an instrument which everyone could and should use. Faith in the power and universal character of the individual human mind was born – and with it a new concept of the human being. The masses of believers who used to obey the possessors of knowledge discovered that they were rational individuals capable of making their own judgments and decisions. This paved the way for the two new ethical theories that were ultimately created by Immanuel Kant and Jeremy Bentham.

The Printing Press and Ethics

Since many authors who write on the subject of computer ethics, including such prominent scholars as James Moor, Terrell Bynum and, above all, the author of the first major textbook in the field, Deborah Johnson, use the ethics of Bentham and Kant as the point of reference for their investigations, it is important to make clear that both these ethical systems arrived at the end of a certain phase of profound and diverse changes initiated by the invention of movable printing type.(4) The question is: were these ethical systems merely solving the problems of the past, or were they vehicles driving humankind into the future?

The ethical systems of Kant and Bentham were created during the time of the Industrial Revolution, but they were not a reaction to, nor a result of, the Industrial Revolution of the 18th and 19th Centuries. There was no immediate reaction in the form of a new ethical theory to the invention of the printing press. Rather, problems resulting from the economic, social and political changes that were caused by the circulation of printed texts were at first approached with the ethical apparatus elaborated during the high Middle Ages and at the time of the Reformation. Then, there was a period of growing awareness that a new set of ethical rules was necessary. The entire concept of human nature and society had to be revised. Hobbes, Locke, Rousseau and others did that work. Finally, new ethical systems like those of Kant and Bentham were established. These ethics were based on the concept of a human being as an independent individual, capable of making rational judgments and decisions, freely entering “the social contract.” Such a concept of the human being was able to emerge in great part because of the wide accessibility of the printed text.

The ethics of Bentham and Kant, then, were both manifestations of and summaries of the European Enlightenment. They were created at a time when Europeans were experimenting with the idea of society’s being a result of an agreement (a “social contract”) between free and rational human individuals, rather than submission to divine power or to the power of Nature. Moreover, such a new, contractual society could have been created in separation from traditional social groups. The conquest of the world by Europeans – called by them geographic “discoveries” and colonization of “new” territories – made it possible. Locke’s definition of property as appropriation of nature by one’s own labor, plus lack of a concept of private property in most of the invaded societies, helped that task.

Thus, despite their claims to universalism, Kant’s as well as Bentham’s concept of the human being refers to European man as defined by the Enlightenment – free and educated enough to make rational decisions. “Rational” here means the type of rationality that grew out of Aristotelian and scholastic logic and the mathematical theories of the time of the Printing Press Revolution. This tradition was strengthened by ideas from Pascal, Leibniz and others; and it permitted one to dismiss from the ranks of partners in discourse all individuals who did not follow the iron rules of that kind of rationality. The term “mankind” did not really apply to such individuals. Finally, this tradition turned into Bentham’s computational ethics and Kant’s imperialism of duty as seen by calculating reason.

The nature of both these ethical systems must be very attractive and tempting for computer wizards, especially for those who grew up within the influence of the “Western” set of values. It is quite easy to imagine that there could be a “yes” answer to a question asked by James Moor – “Is Ethics Computable?” (Moor, 1996) – if one has Bentham’s or even Kant’s ethical systems in mind.

It now seems to me very likely that a similar process of ethical theory development will occur, although probably less time will be needed for all phases to be completed. The Computer Revolution is revolutionary; already computers have changed the world in profound ways. Presently, though, we are able to see only the tip of the iceberg. Computer technology generates many new situations and many new problems, and some of these are ethical in nature. There are attempts to solve these problems by applying existing ethical rules and solutions. This procedure is not always successful, and my claim is that the number and difficulty of the problems will grow. Already, there is a high tide of discussions about an ethical crisis in the United States. It is starting to be noticeable that traditional solutions do not work anymore. The first reaction is, as is usual in such situations, “let’s go back to the good old values.” However, the more computers change the world as we know it, the more irrelevant the existing ethical rules will be and the more evident will be the need for a new ethic. This new ethic will be the computer ethic.

The Global Character of Ethics in the Computer Era

Revolution, more than any other kind of change, means that two processes take place simultaneously: the process of creation and the process of destruction. The problem is that in a human society this usually causes conflict, because both creation and destruction can be regarded as a positive (good) or negative (bad/evil) process. The assessment depends on the values accepted by the people (individuals or groups) who are exposed to the revolutionary changes.

Moor writes: “On my view, computer ethics is a dynamic and complex field of study which considers the relationships among facts, conceptualizations, policies and values with regard to constantly changing computer technology.” (Moor, 1985, p. 267) This is a broad enough definition to be accepted by almost everybody; but a problem arises when we realize how many people may be affected by and interested in those “facts, conceptualizations, policies and values” – how diverse this group is. In my opinion, we are talking about the whole population of the globe! Computers do not know borders. Computer networks, unlike other mass-media, have a truly global character. Hence, when we are talking about computer ethics, we are talking about an emerging global ethic – and we are talking about all areas of human life, since computers affect them all. What does this mean for the understanding of what computer ethics is?

For one thing, computer ethics cannot be just another professional ethics. Writers like Deborah Johnson (Johnson, 1994) and Donald Gotterbarn (Gotterbarn, 1992) sometimes appear to assert that computer ethics is simply a kind of professional ethics. I support wholeheartedly the idea of a code of ethics for computer professionals. However, there are at least two problems that arise if we take computer ethics to be just a type of professional ethics:

  1. Unlike, say, physicians or lawyers, computer professionals cannot prevent or regulate activities that are similar to their own but performed by nonprofessionals. Therefore, although many of the rules of conduct for physicians or lawyers do not apply to those outside of the profession, the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of – maybe even all – computer users. This means that, in the future, the rules of computer ethics should be respected by the majority (or all) of the human inhabitants of the Earth. In other words, computer ethics should become universal; it should be a global ethic.
  2. Let’s assume that computer ethics applies only to computer professionals. Such professionals are not totally isolated from the society in which they function. The role of their profession is significantly determined by the general structure of the society in which they are included. At present, there exist various societies and cultures on earth. Many of them function within different ethical systems than those predominantly accepted in the United States or even in the “western world.” Hence professional ethics, including ethical codes for computer professionals, may differ among cultures to the point of conflict. And even if they do not differ, conflict may still be unavoidable. Example: computer professionals in two countries who happen to be at war may obey the same rule that computers should be used to strengthen national security. In such a situation, computers may become a weapon more deadly than the atomic bomb. Discussions like those about scientists responsible for the use of nuclear energy may now apply to computer professionals. And given the power of computer technology, the potential for destruction may be even greater than in the case of the atomic bomb. Or consider another example: it is well known that the United States CIA monitors the Internet for security reasons. However, the question arises: does this mean that certain ethical rules, such as respecting privacy, do not apply to certain people? If the CIA does not need to respect an ethical code, who else is entitled to break the rules and on what grounds? If one country can do it, what moral imperatives should stop other countries from doing the same? Let’s assume that such moral rules could be found. If they are better, why shouldn’t they be applied on a global scale?

Problems like those described above will become more obvious and more serious in the future when the global character of cyberspace makes it possible to affect the lives of people in places very distant from the acting subject’s location. This happens already today, but in the future it will have a much more profound character. Actions in cyberspace will not be local. Therefore, the ethical rules for such actions cannot be rooted in a particular local culture, unless the creators of computer ethics accept the view that the function of computers is to serve as a tool in gaining and maintaining dominion over the world by one particular group of humans. I would like very much to believe that this is not the case. I would like to believe Smarr’s optimistic comment (quoted in Broad, 1993):

It’s the one unifying technology that can help us rise above the epidemic of tribal animosities we’re seeing world wide. One wants a unifying fabric for the human race. The Internet is pointing in that direction. It promotes a very egalitarian culture at a time when the world is fragmenting at a dizzying pace.

This may be yet another example of wishful thinking, though. And I worry that scholars in computer ethics may contribute to the problem, if they do not fully realize the importance of their undertaking. It seems to me that, unfortunately, the scholars who have chosen to explore the field of computer ethics have been too modest in defining the area of investigation, as well as the importance of the subject.

End Notes

  1. An earlier version of this paper was published in the April 1996 issue of Science and Engineering Ethics.
  2. The fact that print did not revolutionize life in China the way it did in Europe is itself an interesting subject for analysis.
  3. Timetables for the Industrial Revolution vary greatly depending upon sources and criteria. The timetable chosen by Moor is very popular, but the view that the Industrial Revolution began with the invention of the printing press is very popular as well.
  4. Of course, the printing press was not the only cause of such profound changes, but neither was the steam engine or the spinning machine. I do recognize the tremendous complexity of the processes we are talking about.

References

Broad, William J. (1993) “Doing Science on the Network: A Long Way From Gutenberg.” The New York Times; Tuesday, May 18.

Górniak-Kocikowska, Krystyna (1986) “Dialogue – A New Utopia?” (in German), in Conceptus. Zeitschrift für Philosophie, Jhg XX, Nr. 51/1986, p. 99 – 110. English translations published in Occasional Papers on Religion in Eastern Europe; Princeton, Vol. VI, No. 5, October 1986, p. 13 – 29 and in Dialectics And Humanism; Warsaw, Vol. XVI, No. 3 – 4/1989, p. 133 – 147.

Gotterbarn, Donald (1992) “The Use and Abuse of Computer Ethics” in Terrell Ward Bynum, Walter Maner and John L. Fodor, eds., “Teaching Computer Ethics,” Research Center on Computing & Society, 1992, pp. 73 – 83.

Grun, Bernard (1982) The Timetables of History: A Horizontal Linkage of People and Events. New, updated edition. Based on Werner Stein’s Kulturfahrplan, New York, Simon and Schuster Touchstone Edition.

Johnson, Deborah G. (1994) Computer Ethics, second edition; Englewood Cliffs, NJ, Prentice Hall.

Moor, James H. (1996) “Is Ethics Computable?” Metaphilosophy, Vol. 27, pp. ??

Moor, James H. (1985) “What is Computer Ethics?” Metaphilosophy, Vol. 16, pp. 266 – 275.