Autonomous Weapons’ Ethical Decisions: “I Am Sorry, Dave; I Am Afraid I Can’t Do That.”

AUTHOR
Don Gotterbarn

ABSTRACT

Approaches to ethical analysis can be divided in a number of ways: which ethical theory will be adopted – utilitarianism, Kantianism, Aristotelianism – and which reasoning methodology will be used – algorithmic or heuristic. There is an orthogonal relation between the modes of reasoning and the ethical theories: in addressing ethical problems the analyst can take either a heuristic or an algorithmic approach to, say, a Kantian analysis. The resulting ethical judgments are maintained with varying degrees of certitude depending on the complexity of the situation. An algorithmic approach can be automated in software. Examining the ways in which ethical judgments are being automated will shed some light on ethical analysis in general.

Computing has been used to guide, control and monitor unmanned devices. Unmanned devices have been used in a variety of sectors, e.g., searching in dangerous mines, removal of nuclear waste, etc. The military has recently advocated the use of unmanned aerial vehicles (UAVs) in part because of their increased endurance and the removal of humans from immediate threat. Currently these devices still require several human operators. However, there is an effort to design systems such that the current many-to-one ratio of operators to vehicles can be inverted, and even to replace all operators with fully autonomous UAVs.

UAVs were originally primarily reconnaissance devices, but some are now lethal weapons. They have been modified to deliver death in precisely targeted attacks and used in ‘decapitation attacks’ targeting the heads of militant groups. They do this effectively and reduce casualties on the side using them. The decision to take a human life normally requires ethical attention. In many cases the ‘pilots’ fly these lethal missions in Iraq and Afghanistan from control rooms in Nevada, USA. There are at least two relevant ethical problems with this type of remote control. The distances involved provide a ‘moral buffer’ for the ‘pilot’ which reduces accountability (the need for moral analysis) and may shield the pilot from the psychological damage of killing others. An increase in automation has an inverse effect on accountability.

The distance and the speed of decision making also lead to ‘automation bias’. There are several types of bias which infect our decision making. Sometimes people have a confirmation bias: they seek out information which confirms their prior opinion and ignore that which refutes it. Sometimes we modify information which contradicts our ideas in order to assimilate it into our preexisting ideas. When presented with a computer solution which is accepted as correct, people may exhibit automation bias and disregard, or not look for, contradictory information. There are automation bias errors of omission, where users fail to notice important issues, and errors of commission, where they follow a computerized directive without further analysis.

There is an effort to develop ethical decision making models which can be fully automated in fully autonomous ethical UAVs. The UAV (robot) can choose to use force and control the lethality of the force based on rule-based systems (Arkin 2009) or value sensitive design (Cunningham 2009).
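
To make the algorithmic mode concrete, the following sketch (a minimal illustration in Python; all rule names, fields and thresholds are invented here, and this is not a reproduction of Arkin’s architecture or of any fielded system) suggests what an automated, rule-based engagement constraint might look like in software:

    # Hypothetical sketch of a rule-based engagement constraint.
    # Every name and rule below is invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        target_confirmed_hostile: bool   # positive identification of a combatant
        civilians_in_blast_radius: int   # estimated non-combatants at risk
        human_operator_approved: bool    # a human reviewed this decision

    def may_engage(s: Situation) -> bool:
        """Apply fixed rules; every rule must pass."""
        rules = [
            s.target_confirmed_hostile,        # discrimination rule
            s.civilians_in_blast_radius == 0,  # a crude proportionality rule
            s.human_operator_approved,         # accountability rule
        ]
        return all(rules)

    print(may_engage(Situation(True, 0, True)))  # True
    print(may_engage(Situation(True, 2, True)))  # False

What such a sketch cannot capture is precisely the point at issue: nothing in the loop ever revisits whether the fixed rules are themselves adequate to the situation at hand.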

The decision making process related to UAVs is similar to the way we make ethical judgments. The management control of UAVs can be designed with several levels of autonomy, and a taxonomy of autonomous decision making is analogous to the way we make ethical decisions: the level of autonomy varies depending on external information and constraints and on the mode of reasoning used to process this information. An analysis of the strengths and weaknesses of the levels of autonomy appropriate to UAV management decisions sheds light on the nature of ethical decision making in general and on how computers should and should not be involved in those decisions.

The analysis of UAV decision support systems uses a taxonomy of levels of autonomy in decision making, an analysis of types of decision bias, and a taxonomy of moral accountability. Using these models in the analysis of approaches to UAV automated decisions, it is argued that a single-mode approach – either heuristic or algorithmic – to ethical decisions is limited and is likely to lead to poor decisions. An adequate approach to ethical decision making requires both. Further, the use of an automated algorithmic approach (implemented in software) to track and reduce the complexity of a problem needs to address automation bias and ensure the presence of ethical accountability.

REFERENCES

-Arkin, R.C., “Ethical Robots in Warfare,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.

-Collins, W. and Miller, K., “The Paramedic Method,” Journal of Systems and Software, 1992.

-Cummings, M.L., “Automation Bias in Intelligent Time Critical Decision Support Systems,” AIAA 1st Intelligent Systems Technical Conference, September 2004.

-Gotterbarn, D., “Informatics and Professional Responsibility,” in Computer Ethics and Professional Responsibility, eds. Bynum, T. and Rogerson, S.

-Nissenbaum, H., “Computing and Accountability,” Communications of the ACM, Volume 37, Issue 1, January 1994.

-Parasuraman, R., Sheridan, T.B., and Wickens, C.D., “A Model for Types and Levels of Human Interaction with Automation,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 30, No. 3, pp. 286-297, May 2000.

-Ruff, H.A., Narayanan, S., and Draper, M.H., “Human Interaction with Levels of Automation and Decision-Aid Fidelity in the Supervisory Control of Multiple Simulated Unmanned Air Vehicles,” Presence 11: 335-351, 2002.

-Sharkey, N., “Death Strikes from the Sky: The Calculus of Proportionality,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.

-Sparrow, R., “Predators or Plowshares? Arms Control of Robotic Weapons,” IEEE Technology and Society Magazine, Volume 28, Issue 1, Spring 2009.

Citizen Scholar? In Search of a True Democracy of Knowledge in the ICT-Based Global Society

AUTHOR
Krystyna Gorniak-Kocikowska

ABSTRACT

The subject

The subject of this paper is the problem of knowledge in relation to the idea of democracy. Democracy is understood here mainly in its basic form, i.e., as “government by the people” and “a state of society characterized by formal equality of rights and privileges” (Webster’s Dictionary, 1996). An assumption is made that democracy is the best (especially from an ethical point of view) of all forms of organization of human societies known to humankind thus far. In the context of this paper, the relationship between knowledge and democracy in the ICT-driven global society is of special importance. The definition of knowledge accepted in this paper is that it is a “justified and true belief” (Quinton, 1972).

The premise

Democracy, both as a theoretical concept and in its practical function, has progressed since the end of the 18th century in the area of politics and in many other areas of public life worldwide. This progress has accelerated in recent years due largely to the widespread use of advanced Information and Communication Technologies (ICT), which serve as news-, information-, and opinion-sharing devices. The newly coined term ‘citizen journalist,’ already widely used in everyday language, captures this phenomenon very well. Some of the most spectacular examples of the development in question can also be found, for instance, in Barack Obama’s election campaign. This campaign is generally regarded as a case of ‘democracy at work’ – mainly because of the immense role ICT played in the successful grass-roots movement initiated by citizens supporting Barack Obama during his bid for the Presidency of the United States. ICT also served as a very efficient medium of two-way communication between the candidate and his electorate, a practice President Obama promised to continue.

However, a similar development of democracy cannot be observed in institutionalized structures involving the processes of production (creation), storage (preservation) and distribution (dissemination) of knowledge, despite the very similar beginnings of both political democracy and the democracy of knowledge. Just like the modern idea of democracy, the modern structure and organization of the institutions in which the processes of production, storage, and distribution of knowledge take place are offspring of the Scientific Revolution which started in Europe in the 16th century. Indeed, these institutions very often fulfill their functions in support (or at least in acceptance) of the principles of democracy applied to political life. Many – for instance, some universities, research centers, etc. – were created with this particular mission in mind. Yet, while political democracy developed progressively over time, progress in the area of the democracy of knowledge seems by comparison significantly slower or even arrested.

The main thesis

The main thesis of this paper is that today ICT serves as an instrument of global democratization of all three above-mentioned knowledge-related processes: production, storage, and distribution. This happens first and foremost because present-day ICT makes access to knowledge much easier and much faster than was possible in the past. Moreover, thanks to ICT, a much larger number of individuals than in the past today has access to all kinds of knowledge. After a long period of near stagnation, the process of democratization of knowledge is taking place in a way similar to that which is noticeable in political democracy and in many other areas of public life. In particular, the progressing ICT-enhanced democratization of knowledge shares many similarities with the evolution of democracy which took place throughout the 20th century and in the first decade of the 21st century in politics and in many public institutions and structures worldwide.

The arguments

To support the thesis of this paper and to illustrate my point, I will rely mainly on two well-known phenomena. One of them is the historic development of the idea of democracy and its ‘practical application’ to the public sphere. I will compare this development with the historic development of the idea of knowledge in a democratic society and its ‘practical application’ in knowledge-related institutions and organizations.

The second phenomenon is one of the most prominent problems debated in relation to ICT: the existing dichotomy in the approach to the issue of the ownership of knowledge. This dichotomy is most prominently reflected in the controversy between the supporters of intellectual property rights on the one hand and the supporters of knowledge sharing on the other. I will examine this controversy in the context of the idea of democracy as well as from the point of view of flourishing ethics (Bynum, 2006) and a flourishing human society. I will focus on the processes of production, storage and distribution of knowledge taking place in institutionalized formal structures such as universities versus the same processes taking place informally between human individuals using ICT. I will argue that the latter spur the emergence of a ‘citizen scholar’ and thus contribute to the flourishing of the (global) human society organized on the principle of democracy.

The conclusion

The paper’s conclusion will be devoted primarily to the problem of the social implications of the emergence of a ‘citizen scholar’ resulting from the progressing democratization of the ICT-driven global society and from the democratization of knowledge. The value of the activities of ‘citizen scholars’ will be measured by the quality of their contribution to the flourishing of the global society. The flourishing society is regarded here as an ethical and social ideal.

REFERENCES

Bynum, T.W. (2006), Flourishing ethics, Ethics and Information Technology, 8:4, 157-173.

Quinton, A. (1972), Knowledge and Belief, in Edwards, P. (ed. in chief), The Encyclopedia of Philosophy, Macmillan Publishing & The Free Press, vol. 4, 345-352.

Webster’s New Universal Unabridged Dictionary (1996), Barnes & Noble Books.

Subsumption Ethics Redux: as ICT Moves Forwards, Backwards and Sideways

AUTHOR
David H. Gleason

ABSTRACT

I. Introduction

“Subsumption Ethics” was first published at CEPE 11 years ago, and much has changed in ICT since then. In particular, the pace of communication has increased while the friction on information transfer has decreased. Web 2.0 and 3.0 functionality is now mainstream, and many users share their most personal information with impunity, from “bored at home, going to do laundry” to “I know I shouldn’t have slept with….” This rapid, and often vapid, exchange of information calls for a new look at the ethics of current “subsumed objects,” and at whether the basic principles of Subsumption Ethics are still applicable.

Many positive steps forward have been made in the last decade, for example, on-line data integration has improved dramatically; grassroots organizing is facilitating the democratic process; many more people have access to good medical information, etc. Some backwards developments include quiet, closed-door compromises with high-stakes information like electronic health records, electronic voting and on-line banking. Some sideways changes include movement from local data centers to on-line servers; massive, inexpensive, redundant storage; and integration of handheld devices into the Web.

After a brief review of the concept of Subsumption Ethics (SE), this paper will provide a series of examples of subsumption in current ICT, covering such issues as social networking (Web 2.0); virtual machines, on-line applications and software as a service (Web 3.0); and Internet time.

The paper will show the benefits of applying SE principles to these examples. Finally, it will present a series of specific recommendations to help improve the ethical outcomes of ICT activities.

II. Subsumption Ethics

“Subsumption ethics” is the process by which decisions become incorporated into the operation of information and communications technology (ICT) systems, and subsequently forgotten.

ICT systems, by nature, repeat operations over and over. If those operations have unethical impacts, the system will nevertheless continue to execute them. Unlike a human operator, there is no point in the cycle where the machine pauses to ask, “Should I do this?”

Subsumption in general is the process of building larger components from smaller ones. In ICT systems, small components, like the code that reads data from disk drives, are developed and tested, and once they are working reliably they are subsumed into larger systems that present file lists. Thus we stand on the shoulders of giants.

The larger systems, in turn, are subsumed into still larger systems. Once components, subsystems and applications are operating, the subsumed processes become invisible and unavailable to the user – what Dartmouth professor James Moor calls the “invisibility factor.” Components (and the human impacts of their operation) are forgotten, requiring no further attention unless they fail. These components are called “subsumed objects”. Technological systems are built from such objects until they are enormous.
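
A minimal sketch (in Python, with invented names; it stands for no particular real system) may make the invisibility factor concrete: a design decision taken in a low-level component is, two subsumptions later, invisible at the interface the user actually sees.

    # Illustration of subsumption: a decision buried in a low-level
    # component disappears from view as it is subsumed upward.

    def read_raw_record(line: str) -> dict:
        # Subsumed decision: names longer than 20 characters are
        # silently truncated. Tested once, then forgotten.
        name, amount = line.split(",")
        return {"name": name[:20], "amount": float(amount)}

    def load_accounts(lines: list[str]) -> list[dict]:
        # Subsumes read_raw_record; knows nothing about truncation.
        return [read_raw_record(l) for l in lines]

    def report(lines: list[str]) -> None:
        # The user-facing layer: the truncation rule is now two
        # subsumptions deep and effectively invisible.
        for rec in load_accounts(lines):
            print(f"{rec['name']}: {rec['amount']:.2f}")

    report(["Juan Sebastian de Elcano y Guevara,100.0"])
    # Prints a silently truncated name - the invisibility factor at work.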

As Aristotle pointed out, virtuous decisions require informed balance between many factors. Developers and users must actively seek understanding of many issues, on continua from stakeholder analysis to technical limitations. In order to apply the right knowledge to the right problem, an informed, deliberate decision-making methodology is required.

Furthermore, ethical decisions need to be made as situations arise and a simple, universal statement of ethics is not possible. The ethics of each situation must be worked out according to specific circumstances, in relation to guiding principles.

III. What’s New – Examples

A. Social networking (Web 2.0)

  • Subsumption without thinking – twittering about taking out the trash
  • Electronic grassroots organizing – on-line services like Democracy In Action
  • Privacy and digital images
  • Acceptable use policies, e-mail retention policy and legal discovery; the difficulty of deleting

B. Virtual machines, server management and portability

  • System configuration files
  • Life cycle management
  • Can mitigate the risks of poor decisions by making them easier to undo

C. On-line applications and software as a service (Web 3.0)

  • Switching to Microsoft Office 2007
  • Automatic installation of affinity software (e.g. installing the Yahoo toolbar during a routine update)
  • The value of time when machines don’t work correctly

D. Internet time

  • Instant replication of a stolen credit card list, propagation of the news of Michael Jackson’s death, a car bomb in Chechnya, and the 24-hour news cycle.
  • Hive mind and the acceleration of “The Social Construction of Cyberspace”

E. New examples of subsumption ethics

  • Why computers slow down as they subsume more and more material – from updates to malware
  • The Citicorp tower case
  • Tattoos, brands and trademarks

IV. SE Applications and Potential Benefits

  • Saving money and time
  • Open source software
  • “Happy customers feeling safe and secure”
  • Avoiding litigation
  • Quality improvements

V. Recommendations

Support and contribute actively to on-line user groups. Comment on software improvement opportunities, publicly critique errors in systems and judgment. Foster discussion on abuses of ICT power by governments, corporations and individuals, from invasion of privacy to anti-trust activity to malicious hacking.

Teach the concept of SE at the college and graduate levels; help professionals to understand how their decisions become embedded into systems. Demonstrate the cost-benefit of thinking ahead to systems developers.

All this demands a great deal of ethicists, who must embrace Web 2.0 functionality, including blogs, social networking sites, grass-roots organizing systems and interactive, multimedia publication, in order to move the industry forwards and not backwards in its service to humanity.

The Ethics of Generalized Sousveillance

AUTHOR
Jean-Gabriel Ganascia

ABSTRACT

Generalized sousveillance

A spectre is haunting the contemporary world: the spectre of “Nineteen Eighty-Four” [9], Orwell’s famous novel. With webcams, RFID tags, remote sensing techniques and many other new information technologies, it is now possible to continuously record all the daily activities of everybody [1, 8]. Moreover, in many developed countries, personal data concerning health, employment, income, travel and digital communications are officially traced and stored in databases. It is then possible to fuse all those data using modern data mining techniques. Many people fear the surveillance society that could result from the generalized use of these techniques.

However, the continuous recording of all individual data, i.e. the constitution of personal digital archives, and their public dissemination through the web may receive a completely different interpretation. They contribute to establishing a state of total transparency among people. According to some researchers, for instance Steve Mann, this would not really lead to a generalized surveillance society, but to a regime of “sousveillance” [7], where the powerful are permanently under the watch of those whom they dominate. For instance, if the police beat youths in the street or on a subway platform, mobile phones enable every onlooker to record videos of the event and spread them publicly. On the 20th of June 2009, during the protests against the results of the Iranian presidential elections, a young woman, Neda Agha-Soltan, was shot in the chest. Her tragic death was immediately captured on video by bystanders and broadcast over the internet, drawing international attention – whereas in the totalitarian countries of old such information would have remained totally unknown. Such a story characterizes the society of sousveillance, where everything can be seen by everybody, even without the knowledge of the powerful, and even if they prohibit the dissemination of information. This clearly stands in opposition to the local surveillance societies which dominated the 20th century.

Panopticon vs. Catopticon

The Panopticon was designed at the end of the 18th century by Jeremy Bentham as an architecture for prisons [2]. It was supposed both to decrease the cost of surveillance and to improve its efficiency. Many philosophers, among them Michel Foucault in “Surveiller et punir” [4], described it as a typical “dispositif” of the modern legal state, i.e. as a social arrangement that summarizes the underlying political structure of the society. Briefly speaking, the Panopticon is built as a ring around a central tower, from which inspectors can see all the actions of the prisoners. The cells are transparent, which means that they receive and transmit sunlight. In that way, the inspectors may observe every movement of the prisoners without being seen. Moreover, the prisoners are totally isolated from each other. To summarize, the Panopticon principles are: 1) the total transparency of the peripheral cells; 2) a fundamental inequality, which allows the occupants of the central tower, i.e. the observers, to watch all the occupants of the periphery, i.e. the prisoners, without being watched; 3) the isolation of the prisoners, who cannot communicate with each other.

In a recent paper [5], we have shown that, by analogy to the Panopticon, which schematizes the surveillance society, the generalized sousveillance gives birth to another social arrangement that we call the “Catopticon”. The three fundamental principles on which the Catopticon is built can be compared with – and opposed to – the three fundamental principles of the Panopticon: 1) the total transparency of society; 2) a fundamental equality, which gives everybody the ability to watch – and consequently to control – everybody else; 3) total communication, which enables everyone to exchange with everyone else. In practice, this means that there is no hierarchy, i.e. no central tower, and that everyone may communicate with everyone in total transparency.
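
The structural contrast can also be put in graph-theoretic terms. The following sketch (Python, with invented helper names; an illustration only, not taken from [5]) renders the Panopticon as a star of one-way watching relations and the Catopticon as a complete graph of mutual ones:

    # Illustrative sketch: the two arrangements as sets of directed
    # "watches" edges. All names are invented for the illustration.

    def panopticon(n_cells: int) -> set[tuple[str, str]]:
        # One-way edges from the tower to every cell; the cells see
        # nothing and are connected to no one else (isolation).
        return {("tower", f"cell{i}") for i in range(n_cells)}

    def catopticon(n_people: int) -> set[tuple[str, str]]:
        # Everyone watches, and may address, everyone else: total
        # transparency, equality, total communication.
        return {(f"p{i}", f"p{j}")
                for i in range(n_people) for j in range(n_people) if i != j}

    # With four cells the Panopticon has 4 edges, all leaving the tower;
    # with four people the Catopticon has 12 edges and no privileged node.
    print(len(panopticon(4)), len(catopticon(4)))  # 4 12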

There are many examples that show the existence and the modernity of the Catopticon [6]. For instance, due to the extensive use of information technologies, the modern subway is a Catopticon, while the classical 20th-century subway was organized on the model of a Panopticon. More generally, the contemporary infosphere is mainly structured as a huge Catopticon [6], extended both to the entire planet and to the world of informational organisms, i.e. “inforgs,” to use Floridi’s terminology [3].

Ethical Issues of the Panopticon

During the past few years, most computer ethics issues were related to the figure of the Panopticon, which acted negatively, showing what to avoid. More precisely, ethical attitudes were mainly motivated by the need to prevent the realization of the Panopticon. By reference to the characteristic structure of the Panopticon, this means that it is necessary both to restrict the ability of the central tower’s occupants to observe the occupants of the periphery and to enable the occupants of the periphery to communicate freely among themselves. For instance, the notion of privacy defines the limits of the transparency that would make a central power able to gather personal information about the occupants of the periphery and to misuse it. Concerning civil society, ethical questions are related to the lack of free and authentic communication among people, e.g. to identity usurpation and to cyber-criminality. Lastly, the way free speech and democracy are influenced by the development of information technology defines new relations between the central power and the periphery.

Ethical Issues of the Catopticon

Our goal in this paper is to show that many modern ethical issues are not directly related to the Panopticon, but to the Catopticon. More precisely, the main problems concern not only privacy and the emergence of a totalitarian state in a hierarchical society, but also anonymity and new distinction processes in an egalitarian society. Those processes are mainly based on the use of search engines, such as Google, on voting procedures and on reputation establishment, as on eBay. Their economic and political roles are more and more important in the information society. However, many techniques – e.g. “spamdexing” – tend to bias those distinction processes, which could generate new inequalities, new discriminations, new unfairness and new injustices. We claim that the Catopticon helps to understand those new ethical issues [6].

REFERENCES

[1] Bailey, J. and Kerr, I. (2007), The experience capture experiments of Ringley & Mann, Ethics and Information Technology, Springer Netherlands, Volume 9, Number 2 / July 2007, 129-139

[2] Bentham, J. (1838), Panopticon or the Inspection House, The Work of Jeremy Bentham, volume IV, 37-172

[3] Floridi, L. (2008) Information Ethics, its Nature and Scope, in: Jeroen van den Hoven and John Weckert (eds.), Information Technology and Moral Philosophy, Cambridge University Press, Cambridge

[4] Foucault, M. (1975), Surveiller et punir, Gallimard, Paris, France, p. 252 – In English Discipline and Punish, trans. A. Sheridan. (1977) New York: Vintage.

[5] Ganascia, J.-G. (2009a), The Great Catopticon, in proceedings of the 8th Computer Ethics and Philosophical Enquiry conference, June 2009, Corfu, Greece.

[6] Ganascia, J.-G. (2009b), Voir et pouvoir: qui nous surveille?, Editions du Pommier, Paris (in French).

[7] Mann, S. (1998) ‘Reflectionism’ and ‘diffusionism’: new tactics for deconstructing the video surveillance superhighway. Leonardo, 31(2): 93-102.

[8] Mann, S., Nolan, J., Wellman, B. (2003), Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments, Surveillance & Society 1(3): 331-355, http://wearcam.org/sousveillance.pdf

[9] Orwell, G. (1949) Nineteen Eighty-Four, Secker and Warburg, London, UK.

E-Exclusion and the Gender Digital Divide

AUTHOR
Georgia Foteinou

ABSTRACT

In the era of the digital revolution most governments around the world adopt technological strategies and try to construct so-called “e-government”. However, what should be done is not obvious, and the decision to follow the strategy of the most developed countries is not always a wise choice. What should be taken into account is not only the technological infrastructure and citizens’ everyday needs but also the specific cultural and legal environment and future trends; otherwise the evolution of e-Government may be a path which leads society backwards.

Modern European society faces a number of challenges caused by many factors (social, economic, political), and the evolution of ICTs in social and political life accelerates the process of change. But where does this process lead, and how can we measure the results? Billions of euros are spent every year on e-Government infrastructure, while European governments still have not found an effective way to measure the results. The e-Government Economics Project (eGEP) recommends three value drivers for the measurement of impacts and outcomes of e-government: efficiency, democracy and effectiveness. It also states that “metrics for the public value potentially generated by e-Government should not be limited to quantitative, financial impact” and suggests the adoption of other qualitative indicators, such as users’ satisfaction. This qualitative evaluation of e-Government raises other basic questions: Is it possible to measure democracy, and how? What does “user satisfaction” actually mean? Does this expression have the same meaning for someone who lives in Turkey and someone who lives in Norway? And of course we have to determine and define who the users are: Are they the citizens? The politicians, or the employees of the public administration? Women, or men? Is there a gender dimension in e-Government? What is the basic view of the system, and in what way do we classify citizens into user categories – according to which attributes and characteristics?

The research reveals that the gender dimension is present in e-Government systems and sometimes may lead to what we call a “gender digital divide”. Most striking is that this digital inequality may coexist with an official governmental policy for the alleviation of such inequalities. Why one governmental policy may contradict another is a political issue, but the ability to examine the results of these policies may unveil that ICTs are a tool for purposes other than social welfare. For example, ICTs may be a tool for the empowerment of women or, instead, a tool for applying extensive control over women; and of course what is ethically right and what is legally right depends on the specific sociopolitical environment. There are no general ethical rules, and the problem emerges in such a multi-cultural and multi-national area as Europe: the nations are not unified, but the systems sometimes should be integrated.

This paper presents the case of the Greek TAXISnet and the social implications of this e-service. TAXISnet (taxation information system) is the most successful Greek e-Government system, offering services directly to citizens through a web site. A variety of services concerning taxation issues are fully available electronically to the public, while the system exploits existing information infrastructures [Stamoulis et al., 2001]. It has extremely high rates of usage among enterprises (reaching 80% of Greek enterprises) and the highest rates among citizens compared to other Greek e-services. In fact, this service is the only well-known e-Government service among the Greek population, and its strong performance is probably the reason why e-Government services in Greece have satisfactory usage rates compared to other EU countries (although they are still below the EU average).

TAXISnet had an overall budget of 60 million euros and was funded by the Greek government and the European Union. To this day it remains an efficient and well-running information system which saves the Greek government millions of euros every year [Stamoulis et al., 2001]. However, a recent evaluation of the social aspects of TAXISnet and of citizens’ satisfaction with it revealed that the system has many weaknesses, due to a lack of support for people with disabilities, immigrants, foreigners and other social groups [Terzis & Economides, 2006]. Moreover, the system does not permit actual access to married women, although it nominally grants them “access rights”. The man – the husband – alone has the right and the responsibility to use the system and to declare the family income. The Greek tax law, which was fully implemented in the case of TAXISnet, caused a number of gender-related issues and brought to the fore gender inequalities from past years.

Married women, of course, pay their taxes and give details of their personal income and property to their husbands, but they have no right to access the system and to see the tax declaration which concerns their own income. This happens because the husband’s personal data are fully protected by Greek law; hence, the spouse has no right to see them. But what is happening with her personal data, and why does an e-government system perpetuate a social inequality? What are the implications of this policy?
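
To see how such a legal rule becomes a technical fact of the system, consider a deliberately simplified, hypothetical access check (every field and function name here is invented; this is not the actual TAXISnet code):

    # Hypothetical sketch of a gendered access rule encoded in software.
    # Names and structure are invented; NOT the actual TAXISnet code.
    from dataclasses import dataclass

    @dataclass
    class Citizen:
        tax_id: str
        married: bool
        is_husband: bool  # under the rule described, only this grants access

    def can_view_family_declaration(user: Citizen) -> bool:
        # Unmarried citizens access their own declaration; for married
        # couples, the rule routes all access through the husband.
        if not user.married:
            return True
        return user.is_husband

    wife = Citizen(tax_id="GR123", married=True, is_husband=False)
    print(can_view_family_declaration(wife))  # False - her own income data

Once such a rule is running, the inequality no longer presents itself as a policy choice at all; to the user it appears simply as a technical property of the system.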

Of course the official statistics concerning e-Government usage in Greece show that there is a gender dimension, and the gender digital divide is evident; but to what degree is that a social attitude, and to what degree the result of a governmental policy? The consequences, positive and negative, of a governmental policy may be huge. Can we consider the Greek TAXISnet a successful digital service? It has, of course, substantial economic benefits for the citizens, but it is contrary to principal ethical values. Citizens do not “feel equal” when they access the service, because what someone can or cannot do depends on his or her gender.

The main conclusion is that the aforementioned e-service exacerbates existing discriminations, and we can even suppose that it creates new ones by preventing young, highly educated women from using e-Government. The potential female users of e-services are wealthy, well-educated women who, until recently, had never experienced what gender discrimination really means. Moreover, this case study indicates that many factors can produce biased statistical results because of a lack of understanding of the context of use.

REFERENCES

Huyer S. & Sikoska T. (2003) “Overcoming the Gender Digital Divide: Understanding ICTs and their Potential for the Empowerment of Women”, INSTRAW Research Paper Series No 1.

Stamoulis D., Gouscos D., Georgiadis P., Martakos D., (2001), “Revisiting public information management for effective e-government service”, Information Management & Computer Security, vol. 9(4), pp. 146-153

Hacker K. & Mason S. (2003) “Ethical gaps in studies of the digital divide”, Ethics and Information Technology, vol. 5 pp.99-115

Floridi L. (2001) “Information Ethics: An Environmental approach to the Digital Divide”, Philosophy in the Contemporary World, vol. 9, No 1, pp.1-7.

Foteinou G. & Pavlidis G (2009), “Ethical Aspects of e-Government: Social Actors, Politics and the Digital Divide”, proceedings of the Eighth International Computer Ethics Conference, Corfu 2009.

Electronic Voting Technology, the Software Engineering Code of Ethics, and Conceptions of the Public Good

AUTHOR
William M. Fleischman

ABSTRACT

Since the introduction of electronic voting systems following the passage of the Help America Vote Act (HAVA) in 2002, numerous studies have disclosed serious and unsettling flaws in virtually all of the electronic voting devices marketed and in use in the United States. Since HAVA was intended to prevent problems like those encountered in the contested 2000 Presidential election, these shortcomings have created the unsatisfactory situation in which the purported remedy for problems associated with the conduct of fair elections has in actuality served to intensify public doubts about the electoral process.

The potential flaws identified by these scientific studies of existing electronic voting devices include susceptibility to the installation of malicious software resulting in the intentional theft of votes, with the further possibility of modification of records, audit logs, and counters to frustrate subsequent forensic examination of the election process; denial of service attacks that disable these devices either at the start of or during an election; as well as accidental “flipping of votes” and other anomalies caused by careless handling or the unskilled actions of election workers. In addition, experience in the use of electronic voting devices in recent elections has confirmed their fallibility. Ballots have been inexplicably lost from or added to vote totals, direct recording electronic devices (DREs) have presented incorrect ballots, machines have failed to operate at the start of voting and have broken down during the course of an election, and memory cards and smart card encoders have failed during elections.

The existence and widely publicized knowledge of the vulnerability and unreliability of electronic voting machines serves to undermine the public’s confidence in the integrity and accuracy of the electoral process and goes to the heart of the essential bond of trust between citizens and their government. Thus, these are clearly matters that threaten the public good.

Many of the flaws that have been uncovered are directly attributable to poor software engineering practices in the design and development of electronic voting devices. The Software Engineering Code of Ethics, which “expresses the consensus of the profession on ethical issues,” places concern for the public interest as its highest priority – the first of the eight principles that express the ethical aspirations of the profession. How, then, are we to square the uniformly questionable quality of electronic voting devices designed to carry out the intent of HAVA with fidelity to their code of ethics on the part of the several independent teams of software engineers, working for different companies, who participated in the design and development of these devices?

It may well be, as some suspect, that it is an intractable problem in software engineering to produce electronic voting devices that simultaneously fulfill the critical functions of protecting the secrecy of each voter’s ballot, protecting the security of the voting process, providing an interface that is sufficiently easy to use for all voters – including those with disabilities related to vision, hearing and mobility – and assuring the integrity and accuracy of the final ballot count. Although these questions are implicitly raised by research efforts investigating vulnerabilities and deficiencies of current electronic voting devices – and although these investigations have spurred a vigorous discussion in the profession concerning electronic voting – there remains the responsibility, resting with any practitioner involved in the development of these devices, to take care to implement best practices (for example, in regard to encryption) and to acknowledge the possibility that even the most carefully designed device may fail to meet the exacting standards necessary for the conduct of elections in a democracy.

In their recent paper, “The Public Is the Priority: Making Decisions Using the Software Engineering Code of Ethics,” Professors Donald Gotterbarn and Keith Miller “present three cases – one fictional and two based on news reports – that illustrate how a software professional can use the Code as a decision-making aid when ethical conflicts arise.” In their examination of the difficult conflicts arising in the three cases they consider, the authors provide much valuable guidance for practicing software engineers whose consciousness is supported by a robust conception of the public good.

The contention of the present work is that the distressing phenomenon of uniformly poor standards of software engineering practice evident in the design and development of existing electronic voting devices is a symptom of something more fundamental and more dangerous than inattention to the Code of Ethics itself. It is that the very conception of what constitutes the public good – of what it means to act in the public interest – has been degraded and attenuated to the point that, in this case at least, it is difficult to imagine that the principle of acting in the public good has even penetrated the consciousness of the members of the various software engineering teams. In this connection it is pertinent to recall Terry Winograd’s admonition that “we can take obliviousness as a key sign of” behavior that is ethically deficient.

In this paper, we will explore some of the factors that have resulted in the degradation of the concept of acting in the public good. We will present the results of informal as well as more structured surveys of how inattention to the importance of education in citizenship undermines concern for the public good. This inattention to education in the meaning of being a good citizen extends from the earliest years of a young person’s formation through his or her experiences in secondary school and university. It combines powerfully with what young professionals learn from stories of the unexemplary behavior of public officials, members of government, and leaders of industry and finance to nourish a cynicism that argues concern for the public good out of existence. Finally, we discuss some possibilities for raising the level of awareness of civic responsibility, especially in the education of those aspiring to careers in software engineering.