Conditions for the Effectiveness of Ethical Reflexivity in ICT-Based Projects: From Theory to Practice

AUTHOR
Philippe Goujon, Marco Marabelli, Catherine Flick, Alessia Santuccio and Federico Rajola

ABSTRACT

The impact of techno-scientific developments on societal evolution and lifestyles no longer needs to be demonstrated. In particular the last half of the twentieth century has witnessed a considerable acceleration of the integration of technological elements into the means of economic production and social life in general. The profound transformations that have taken place in the last few decades equally involve energy, transportation, construction, telecommunications, administration, medicine, pharmacy and agricultural sectors. These transformations are closely linked to techno-scientific developments and particularly to stunning developments in Information and Communication Technologies (ICTs).

The rapid change and evolution of ICTs presents opportunities for social interaction and the management of life activities in new and often unfamiliar ways. The diversity of use and application areas brought about by the convergence of different media offers great potential for enhancing many aspects of living. At the same time, the main characteristics of these technologies that lend themselves to inspiring visions of the future (such as the Ambient Intelligence environment) also hold the potential for negative ethical impacts. Some ethical issues are now familiar, privacy for example, but even so it can be hard to identify potential risks in new applications and contexts, especially given the extent to which new technologies are now enveloped in everyday human activities. Others are less obvious and likely to become harder to identify, since ICT is nowadays becoming ‘seamless, unobtrusive and often invisible’. In other words, the growing incorporation of ICT into human activities conditions behaviors, and this process is apparently often unconscious or not clearly perceived by users.

Unfortunately, not all technical development projects sufficiently integrate the ethical issues that arise. In particular, the governance of ethics is often missing, and no guidelines have so far been provided at either the EU or the international level. These difficulties have been recognized at EU level, resulting in the attention paid to ethics and ICT in the EU Seventh Framework Programme (FP7) and, to some extent, in the EU Sixth Framework Programme (FP6) (for instance the ETHICBOTS, MIAUCE and SWAMI projects). Approaches that address these challenges vary, and are often presented as different ways to identify potential ethical issues at some stage in the research project.

It is insufficient to determine and address the ethical problems raised by ICT from a theoretical perspective if such approaches have no practical impact and remain external to the development of the technical project itself. Ethical considerations are by themselves insufficient to settle the problem of the relationship between ethics, technology and society, above all in the field of ICT and emerging technologies such as ambient intelligent systems, and the alignment between project development and ethics is far from being achieved. In addition, the positivist approaches of the social sciences, even in their versions applied to an ethics of technology (e.g. the ‘sociology of morals’), do not encourage cognitive and normative reflexivity; they can instead reinforce the efficiency of instrumental methods, typically those of ‘social engineering’.

The risk is that, by not addressing the conditions (institutional, regulatory, cognitive) for the effective integration of these considerations in the context of a technical project, the ethical considerations will be excluded from the technical rationale and treated as an entirely separate domain. The consequence of this separation is a loss of impact: ethics loses its integral role in the application of technology.

Thus, this article aims to address the conditions for identifying the ethical issues incorporated into ICT, and takes into consideration the problem of resolving those issues in a way that assures the efficiency and effectiveness of ethical reflexivity within technological development itself. Accordingly, our approach does not treat ethical issues as a merely sectoral concern, an approach that would reduce the debate to the mere application of a priori accepted principles: this would amount to deducing consequences from the application of those principles to a perceived context, without taking into account how that application is actually achieved.

Consequently we develop the following research questions:

  • What will be the consequences of scientific and technical rationality for moral reason?
  • Is it possible to relativize instrumental rationality, and what are the conditions for its political control?
  • What are the conditions that allow the integration, within structures of power and decision, of means of ‘learning’, that is, of taking possession of the socio-technical, ethical and cultural stakes of new technologies?
  • How can we define a new ethical governance, reflexive and deliberative, that allows ethical issues to be identified and addressed within technological development itself?

In doing so, we will specify a theoretical framework for improved governance mechanisms that identify and address potential ethical issues arising from new and emerging ICTs, and at the same time will erase the separation and disjunction that, to use Putnam's expression, so often separates the level of justification (the theoretical approach and determination of ethical issues) from the level of application (the transformation of the context by the application of ethical reflexivity and determination).

We will respond to the need for a comprehensive ethical governance process that supports deliberation and shared decision-making, and that implicitly carries a commitment to ethical engagement in technology development.

Fundamental Rights on the Internet

AUTHOR
Andrea Glorioso

ABSTRACT

Since the inception of its first prototypes in the late ’60s, the internet has certainly come a long way. What used to be an extremely small network – originally designed with military and defence purposes, but very soon turned into a communication system for researchers – has nowadays become an essential element for economies and, most importantly, for societies throughout the world [1].

Although the social significance and implications of this new technology[2] were rather clear both to its original designers (perhaps less so to its funders) and to ‘early adopters’,[3] it seems fair to say that it is only in the past decade that the full extent and impact of the internet on societies has become clearer – which does not mean, as this paper will argue, that our level of reflection on and understanding of this relationship is fully satisfactory.

More specifically, this paper argues that it is necessary to broaden our understanding of the relationship between fundamental rights – as enshrined inter alia in the 1948 Universal Declaration of Human Rights, in the 1966 International Covenant on Economic, Social and Cultural Rights and in the 1966 International Covenant on Civil and Political Rights[4] – and the Internet. In particular, the paper argues that we should strive to better understand:

Do, or should, fundamental rights change, or at least be articulated differently, when their exercise takes place on the internet (e.g. should the right to freely receive and impart information be refined, in view of the technological possibilities for filtering and blacklisting[5] that abound on the internet)?

Although the internet, as a human-made piece of technology, does not have rights per se, is it useful to frame the discourse around this topic also in terms of the specific technological characteristics that the internet should be guaranteed to have in order for fundamental rights on the internet to be meaningfully exercised (e.g. should “network neutrality” be mandatory)?

Are there fundamental rights – and if so, which ones – that must be particularly safeguarded and promoted in order to make sure that the distributed social – but technology-based – ecosystem of the internet is preserved and, even more importantly, updated and developed to cope with new requirements that might arise, so that, as above, the meaningful exercise of fundamental rights on the internet is preserved (e.g. how far should the right to private initiative be balanced against the intervention of public authorities in developing the internet of the future)?

The paper will provide a legal, technological and policy framework to discuss these three questions, on the basis both of existing literature and of ongoing policy initiatives in this area, including, but not limited to, the activities of the Dynamic Coalition on Rights and Principles in the context of the Internet Governance Forum,[6] of participants in the World Summit on the Information Society,[7] of the Council of Europe[8] and of other relevant players.

REFERENCES

[1] See inter alia Organisation for Economic Cooperation and Development, Declaration on the Future of the Internet Economy, 2008, available at http://www.oecd.org/dataoecd/49/28/40839436.pdf; and even more importantly the OECD report Shaping policies for the future of the Internet Economy and its annexes (available respectively at http://www.oecd.org/dataoecd/1/29/40821707.pdf and http://www.oecd.org/dataoecd/1/28/40821729.pdf).

[2] In this contribution the definition of ‘technology’ introduced in R.G. Lipsey, K. I. Carlaw, C. T. Bekar, Economic Transformations – General Purpose Technologies and Long Term Economic Growth, Oxford University Press, 2005, p.58, will be used, i.e. “the set of ideas specifying all activities that create economic value […] comprising (1) knowledge about product technologies, the specifications of everything that is produced; (2) knowledge about process technologies, the specifications of all processes by which goods and services are produced; (3) knowledge about organisational technologies, the specification of how productive activity is organised in productive and administrative units for producing present and future goods and services”.

[3] See for example S. Levy, Hackers – Heroes of the Computer Revolution, Anchor Press, 1984.

[4] As mentioned, the references to these three instruments should not be interpreted as limiting the scope of the necessary analysis. Many other instruments, some of which are available for consultation at http://www2.ohchr.org/english/law/, play a central role in this reflection.

[5] See inter alia R. Deibert, J. Palfrey, R. Rohozinski, J. Zittrain (eds.), Access Denied: The Practice and Policy of Global Internet Filtering, MIT Press, 2008, as well as the reports of the OpenNet Initiative (http://opennet.net/) and their proposed taxonomy (http://opennet.net/about-filtering).

[6] See http://www.internetrightsandprinciples.org/.

[7] See http://www.itu.int/wsis/.

[8] See http://www.coe.int/ and in particular the Declaration on freedom of communication on the Internet (adopted by the Committee of Ministers on 28 May 2003 at the 840th meeting of the Ministers’ Deputies) and the Resolution on Internet governance and critical Internet resources (adopted by the Conference of Ministers on 29 May 2009 at the 1st Council of Europe Conference of Ministers responsible for Media and New Communication Services).

The Relation Between Human Ethics and Cyborg Ethics

AUTHOR
Anne Gerdes

ABSTRACT

In this paper, I set out to address the moral philosophical issues that arise when human nature undergoes changes and we eventually turn into cyborgs. We are now faced with the fact that we have become masters of evolution; even a general decision not to modify the body is itself an active choice. When our body and brain are attached to technology, it will gradually change our ways of being in the world. Furthermore, the possibility of technological enhancement also allows for variety among us. Therefore, it is my main concern to draw attention to bodily presence within moral philosophy (MacIntyre, 2002) in order to shed light on the significance of embodiment for the development of a common understanding of the good. This background will function as a fundamental framework supporting an analysis of how cyborg ethics will relate to human ethics. Drawing on the work of John Rawls, I will discuss conditions for social justice and equality in a future human and/or cyborg society. In A Theory of Justice (1972), Rawls defines principles of justice for regulating an ideal society. He presents a model of a fair choice situation, which affords social justice by engaging participants to choose mutually acceptable principles of justice. In this hypothetical situation, participants act behind a “veil of ignorance” regarding their own social position in a future society. Given such circumstances, individuals will select mutually acceptable principles of justice. On this basis, I set out to explore the strength of this argument in a future human and/or cyborg society.

The conversion into cyborgs is laid out by Pearson (2000). The first step involves using gene technology in the development of Homo optimus. Next, with help from bio-technology, we turn to Homo cyberneticus and Homo hybridus. Finally, with Homo machinicus, we are faced with a species with no biological origin, heading for eternal life, an idea also reflected by Tipler in his book with the subtitle Modern Cosmology, God and the Resurrection of the Dead (Tipler, 1994). Additionally, the idea that humanity should strive to enhance itself is reflected in the philosophy of transhumanism, and Moor also argues in favour of enhancement, with reference to the autonomy of the responsible individual (2005, p. 129). This is already taking place: people have implants for therapeutic purposes, and Professor Kevin Warwick (2002, 2003) is actively experimenting with implanting technology in his own body. One of his experiments concerns the possibility of future distributed cognition and involves coupling a machine to the human nervous system (Warwick, 2003, p. 134). Warwick also touches upon the ethical dilemmas that arise from future scenarios, as well as from his own self-experimentation projects. He outlines dilemmas such as: should everybody have a right to upgrade to a cyborg? What about the clash between free will and computer control of thoughts? (Warwick, 2003, pp. 135-136). But he does not establish a framework for dealing with these issues. Rather than speculating further on the basis of empirical cases and future scenarios, I would like to call for reflection upon the moral philosophical consequences of evolving into species that are no longer characterized by the kind of sameness that has been a fundamental condition for Homo sapiens.

In his famous paper What Is It Like to Be a Bat? (1974), Nagel discusses the mind-body problem and criticizes physical theories of the mind for their reductionist approach to the explanation of our conscious experiences. He points to the fact that we are unable to consider the subjective character of experience without trying to imagine what it would be like to be a given experiential subject (Nagel, 1974, p. 166). Nagel then stipulates an objective phenomenology with the purpose of describing the subjective character of experience in a form comprehensible to beings incapable of having those experiences (Nagel, 1974, p. 166). In order to explain to a blind person what it is like to see, we should first strive, well aware that something would still be left out, to develop a method allowing us to express in objective terms the structural features of perception. Thus, to deal with the relation of mind to brain in the framework of a physical theory, we have to consider the problem of subjective and objective experiences.

I agree with Nagel that no reductionist analysis of mental states is fully able to explain the subjective character of experience, the fact that in order to have conscious experiences there must be something it is like to be a given organism (Nagel, 1974, p. 160). On the other hand, I disagree with Nagel on the idea of reaching an objective understanding of the mental through a structural feature analysis, which would presumably allow us to grasp the explanation of mental experiences with greater precision. In the context of this paper, this disagreement leads me to explore how a lack of sameness, with regard to embodiment (Lakoff & Johnson, 1999) and vulnerability (MacIntyre, 2002), may challenge the development of future social interaction.

REFERENCES

Lakoff, G. & Johnson, M. (1999), Philosophy in the Flesh – the embodied mind and its challenge to western thought. Basic Books, NY.

MacIntyre, A. (2002), Dependent Rational Animals – Why Human Beings Need the Virtues, Carus Publishing Company, Illinois.

Moor, J. H. (2005), Should We Let Computers Get Under Our Skin? In: (ed.) R. J. Cavalier, The impact of the internet on our moral lives. State University of New York Press, NY. (p. 121-138).

Nagel, T. (1974), What Is It Like to Be a Bat? In: Philosophical Review 83, pp. 435-450.

Pearson, I. (2000), The Future of Human Evolution (www.bt.com)

Rawls, J. (1972), A Theory of Justice. Oxford University Press, Oxford.

Tipler, F. J. (1994), The Physics of Immortality. Modern Cosmology – God and the Resurrection of the Dead, Doubleday.

Warwick, K. (2002), I, Cyborg, Century.

Warwick, K. (2003), Cyborg morals, cyborg values, cyborg ethics. In: Ethics and Information Technology, Vol. 5, No. 3, pp. 131-137.

Transhumanism http://www.transhumanism.org/index.php/WTA/index/

Influencing the Ethical Awareness of Young ICT Professionals

AUTHOR
Candace T. Grant

ABSTRACT

Research Question

What are the attitudes of Canadian IT Management students towards the use of ICT? Do these attitudes vary with age and ICT professional experience? Does a computer ethics course have an impact on their ethical views?

Background

Unethical business practices at companies such as Enron have increased the focus on ethical behaviour from a variety of organizations, government oversight bodies and professional associations (PMI, 2007). They are looking for ways not only to improve the awareness of unethical behaviour but also to provide structures and guidance on what individuals should do when it is discovered (McDougall, 2006).

As Information and Communication Technology (ICT) becomes more pervasive and digitization and dissemination of content become easier, the ethical issues become more varied and more complex (Moor, 1996). ICT professionals are faced with ethical decisions not only as users of ICT themselves, but in their management and support of users outside of ICT who make use of the technology (Gotterbarn, 1991). As a result, there is a high demand for education that provides the skills and knowledge that will be relevant in the workplace.

Education has a positive impact on moral development in general (Kohlberg, 1969) and in the professions specifically (Rest and Narvaez, 1994). However, to be effective, the participants' current thinking must be considered before undertaking an intervention; e.g. it is wasteful to focus on discussing the ethical issues of downloading music if the participants don't engage in it.

The Ted Rogers School of IT Management at Ryerson University in Toronto, Canada, with approximately 800 students, provides a four-year Bachelor of Commerce Degree with an ICT Management major. Second-year full- and part-time students take a compulsory course in computer ethics. They range in age, number of years of ICT work experience, ethnic background, gender, and exposure to ethical discussions on the use of ICT. The aim of the course is to make students aware of the ethical issues surrounding ICT and to provide some techniques for addressing them.

The Centre for Computing and Social Responsibility, De Montfort University, UK (Prior, Fairweather, Rogerson, & Hawash, 2008) has developed and implemented a process to assess the ethical attitudes of current and future ICT professionals in areas such as the use of electronic surveillance technology and the use of university resources for personal purposes. This process has also been used by a US university, with results expected to be available in April 2010. This study will use the De Montfort process to identify student attitudes and the differences attributable to age and number of years of IT work experience. It will also determine whether students' views changed during the course.

Methodology

The process will be administered in Fall 2009 to the students in an evening class of the compulsory computer ethics course (ITM407: IT, Ethics and Society). The process consists of a survey, followed by online discussions. The survey will be administered again at the end of the course. The details follow:

  • The survey, administered at the start of the course, includes approximately 20 scenarios, each of which describes an ethical dilemma related to the use of ICT and a suggested course of action. The participant is asked to use a 5-point Likert scale to identify how strongly they agree or disagree with the course of action. The survey also gathers additional information on the participants to enable cross-tabulations (e.g. age range, number of years of ICT-related work experience). The top key issues will be selected for more in-depth discussion.

  • Discussion topics will be posted in online discussion forums using the virtual learning environment tool, Blackboard. This will allow students to discuss the issues anonymously and surface some of the reasoning behind their answers to the questionnaires.
  • The survey will be administered at the end of the course to measure overall changes in the attitudes of the class.
  • The data from the initial and final surveys will be analyzed to determine the students' attitudes to specific ICT-related ethical issues, whether those attitudes changed during the course, and whether there is a difference in attitudes based on age or years of ICT-related work experience. The data will be presented in a manner suitable for comparison with the studies conducted in the UK and the US.
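The pre/post comparison and cross-tabulation steps above can be sketched in code. This is an illustrative sketch only, not the study's actual analysis; all scenario names, student identifiers and scores below are invented.

```python
# Hypothetical sketch: comparing pre- and post-course Likert responses and
# cross-tabulating one scenario's scores by age band. All data is invented.
from statistics import mean
from collections import defaultdict

def mean_shift(pre, post):
    """Mean change per scenario between the initial and final survey.

    pre, post: dicts mapping scenario id -> list of 1-5 Likert scores.
    A negative shift means the class agreed less with the suggested action.
    """
    return {s: round(mean(post[s]) - mean(pre[s]), 2) for s in pre}

def crosstab_by_group(responses, groups):
    """Average score per demographic group (e.g. age band) for one scenario."""
    buckets = defaultdict(list)
    for student, score in responses.items():
        buckets[groups[student]].append(score)
    return {g: round(mean(scores), 2) for g, scores in buckets.items()}

# Invented data: scenario "S1" scored at the start and end of the course.
pre  = {"S1": [4, 4, 5, 3, 4]}
post = {"S1": [3, 3, 4, 2, 3]}
print(mean_shift(pre, post))

# Invented cross-tabulation of "S1" scores by age band.
groups    = {"a": "17-21", "b": "17-21", "c": "22+"}
s1_scores = {"a": 4, "b": 5, "c": 2}
print(crosstab_by_group(s1_scores, groups))
```

The same per-group breakdown would be repeated for years of ICT-related work experience, and the resulting tables put alongside the UK and US figures.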

A Guideline document will be developed that provides a description of the process followed that could be used by other countries or institutions in doing a similar study.

Study Findings

The process administration, data gathering and analysis will be conducted in Fall 2009 with an evening class of approximately 50 participants. Completion is planned for the end of November 2009, giving ample time to complete the final paper by January 2010. The study will provide a summary report of student attitudes at the beginning of the course and how they changed by the end of the course. It will summarize the key issues that arose from the online discussions and how the topics for the small-group face-to-face discussions were determined, along with the key points that arose from those discussions and the students' suggestions on future actions.

Conclusions

It is anticipated that this approach can be used with each course to help students identify not only their own behaviours but also those of the group. The surveys can easily be updated with current issues. The course can be tailored to address the student-identified top issues, and the students' online discussions can drive the issues for discussion.

It is anticipated that, because a similar process is followed, this data could be compared with both the UK and the US studies also being conducted in Fall 2009, to determine whether there are differences in the key issues or in the thinking around them.

REFERENCES

Gotterbarn, D. (1991). Computer Ethics: Responsibility Regained. National Forum: The Phi Beta Kappa Journal, 71, 26-31.

Kohlberg, L. (1969). Stage and Sequence: The Cognitive-Developmental Approach to Socialization. In D. Goslin (Ed.), Handbook of Socialization Theory and Research. Chicago: Rand McNally and Company.

McDougall, P. (2006). Money, Power and Principle. Information Week, April(1083), 20-22,24.

Moor, J. H. (1996). Unique Ethical Problems in Information Technology. Science and Engineering Ethics, 2(2), 266-275.

PMI. (2007). Project Management Institute Code of Ethics and Professional Conduct.

Prior, M., Fairweather, N. B., Rogerson, S., & Hawash, M. (2008). Is IT Ethical? 2006 ETHICOMP Survey of Professional Practice. IMIS.

Rest, J., & Narvaez, D. (1994). Moral Development in the Professions. Psychology and Applied Ethics. Hillsdale, NJ: Lawrence Erlbaum Associates.

Impact of Compulsory Computer Ethics Education on the Moral Judgment of Information and Communication Technology Management Students

AUTHOR
Candace T. Grant

ABSTRACT

Research Question

Does Rest’s Defining Issues Test 2 provide an effective and easy way to determine whether a computer ethics course has a positive impact on the moral judgment of Information and Communication Technology Management Students?

Background

Preparing future Information and Communications Technology (ICT) professionals to deal ethically with the complex ICT-related issues they will face in the workplace is a challenge being considered by many ICT schools at colleges and universities. Ethics became a hot topic in the early 2000s, with the financial reporting improprieties of WorldCom and the failure of audit firms such as Arthur Andersen to identify the discrepancies. It has gathered more momentum with the recent crisis in the financial markets and the inappropriate behaviour of people in positions of authority, such as British MPs and their inappropriate expense reporting.

Educational institutions, organizations and professional associations are funding many education and training programs. The researcher has developed a compulsory course for second year students in a four year ICT Management degree program and has delivered it twice to over 400 future ICT professionals. The program makes use of some key pedagogical approaches that have been shown to be effective in ethics education, such as OBAL, role playing, dilemma discussions, critical thinking and stakeholder analysis. It is important to demonstrate, to students, employers and the public at large, that the program is effective and meets learning outcome targets.

How can we demonstrate that students leave the computer ethics course, functioning at a higher moral development level than when they arrived? Is there an effective measure that would be easy to administer and thus repeatable with each course? Is there a technique that could be administered as part of the course and feedback provided to students individually to support their personal development?

Bebeau (2002) has done some interesting work in influencing moral behavior in the professions. She has developed interventions, measures and feedback mechanisms to support the ethical development of dentists throughout their program of study. Her work is based on Rest’s Four Component Model (1994) which suggests that there are four processes that affect moral behavior: moral sensitivity, moral judgment, moral motivation and moral character and that educational interventions should address all of them.

The scope of this paper will be to determine if the approach used for developing moral judgment in dental students can also be used effectively for ICT management students. Bebeau (1994) measures the moral judgment level of dental students at the beginning of their course of study and provides individual feedback. Counseling is provided to students with lower than expected levels of moral judgment. Moral judgment is measured again at the end of the program and individual and program changes studied.

Rest's model suggests that moral judgment progresses through three schemas, from "personal interest" through "maintaining norms" to "postconventional", and that professional ethics should function at the postconventional level (Rest, Narvaez, Thoma, & Bebeau, 2000). In Bebeau's study, there was in most cases a positive change in moral judgment. It should be noted that the dental profession differs from the ICT profession in that the dentist's patient is usually the key stakeholder, making stakeholder impact easier to identify. ICT professionals often find it more difficult to relate to stakeholders, as they are often involved in developing or supporting technology for a customer they never encounter. Since the scenarios used in the measurement tool are content-independent, this is not anticipated to cause a problem.

Methodology

Rest's Defining Issues Test 2 (DIT2) (Rest, Narvaez, Bebeau, & Thoma, 1999) has been widely used in a number of environments to study the development of moral judgment and has been widely validated as effective (King & Mayhew, 2002). It will be administered in Fall 2009 to the students taking a compulsory computer ethics course. The test will be administered at the beginning of the course, feedback provided to the students, and the test administered again at the end of the course. The details follow:

  • The material for the Defining Issues Test 2 is provided by the Centre for the Study of Ethical Development at the Universities of Minnesota and Alabama. The Centre provides the forms, processes the completed forms and produces a report with individual findings for each student.
  • The test will be administered at the start of the course. It includes five scenarios with non-ICT-related content, which students read before voting on what they think the protagonist should do. Students are then presented with twelve statements about the scenario and indicate their level of agreement with each statement on a Likert scale. They are then asked to rank the twelve statements in order of importance.
  • The test results provided by The Centre will be analyzed and individual reports returned to the students. Individual sessions will be scheduled with students with lower than expected scores.
  • The test will be administered again at the end of the course with the results returned to the students. Students will have an opportunity to provide feedback on the process and its usefulness.
  • The data from the initial and final tests will be analyzed to determine the degree of change in the moral development levels of the class as a whole. It will also be examined to determine differences related to age or years of ICT-related work experience.
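The pre/post analysis in the last step above can be sketched as follows. DIT2 scoring itself is done by the Centre for the Study of Ethical Development; this sketch only illustrates analysing the returned P-scores (the percentage of postconventional reasoning). All student identifiers and scores are invented.

```python
# Hypothetical sketch of the pre/post DIT2 comparison. P-scores below are
# invented; real scores come from the Centre's scoring service.
from statistics import mean, stdev

def paired_changes(pre, post):
    """Per-student change in P-score between the initial and final test."""
    return {s: post[s] - pre[s] for s in pre if s in post}

def summarize(changes):
    """Class-level mean change plus a simple Cohen's d effect size."""
    diffs = list(changes.values())
    sd = stdev(diffs) if len(diffs) > 1 else 0.0
    d = mean(diffs) / sd if sd else float("nan")
    return {"n": len(diffs),
            "mean_change": round(mean(diffs), 2),
            "cohens_d": round(d, 2)}

pre_p  = {"s01": 28, "s02": 34, "s03": 41, "s04": 22}   # start-of-course P-scores
post_p = {"s01": 35, "s02": 37, "s03": 45, "s04": 30}   # end-of-course P-scores
print(summarize(paired_changes(pre_p, post_p)))
```

Splitting the `changes` dict by age band or years of ICT-related work experience before calling `summarize` would give the subgroup comparisons the study describes.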

Study Findings

The study will report the level of moral judgment of students in the class and whether these results are affected by age or years of ICT-related work experience. It will also report changes in the moral judgment level that occurred during the course, and whether students who received individual counseling showed an improvement in moral judgment similar to that of those who did not. The study will also assess the effectiveness of the process, whether it could easily be used in subsequent deliveries of the course, and what changes need to be made before using it again.

Conclusions

It is anticipated that this approach can be used with each course to help students understand their own moral judgment levels and identify how to develop it. This could be a measure used throughout the program to assess students as they enter the degree program and then at the end as a good demonstration of the change in moral judgment achieved. It could also be used to measure the impact that a specific intervention has on the moral judgment levels of students in the program.

REFERENCES

Bebeau, M. (2002). The Defining Issues Test and the Four Component Model: contributions to professional education. Journal of Moral Education, 31(3).

Bebeau, M. (1994). Influencing the Moral Dimensions of Dental Practice. In J. Rest & D. Narvaez (Eds.), Moral Development in the Professions: Psychology and Applied Ethics (pp. 121-146). Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

King, P. M., & Mayhew, M. J. (2002). Moral Judgement Development in Higher Education: insights from the Defining Issues Test. Journal of Moral Education, 31(3).

Rest, J. (1994). Background: Theory and Research. In J. Rest & D. Narvaez (Eds.), Moral Development in the Professions: Psychology and Applied Ethics (pp. 1-26). Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

Rest, J., Narvaez, D., Bebeau, M., & Thoma, S. (1999). A Neo-Kohlbergian Approach: The DIT and Schema Theory. Educational Psychology Review, 11(4), 291-324.

Rest, J., Narvaez, D., Thoma, S., & Bebeau, M. (2000). A Neo-Kohlbergian Approach to Morality Research. Journal of Moral Education, 29(4).

Autonomous Weapons’ Ethical Decisions: “I Am Sorry, Dave; I Am Afraid I Can’t Do That.”

AUTHOR
Don Gotterbarn

ABSTRACT

Approaches to ethical analysis can be divided in a number of ways: which ethical theory will be adopted (utilitarianism, Kantianism, Aristotelianism) and which reasoning methodology will be used (algorithmic, heuristic). There is an orthogonal relation between the modes of reasoning and the ethical theories: in addressing ethical problems the analyst can take a heuristic or an algorithmic approach to a Kantian analysis. The resulting ethical judgments are maintained with varying degrees of certitude depending on the complexity of the situation. An algorithmic approach can be automated in software. Examining the ways in which ethical judgments are being automated will shed some light on ethical analysis in general.

Computing has been used to guide, control and monitor unmanned devices. Unmanned devices have been used in a variety of sectors, e.g., searching dangerous mines, removing nuclear waste, etc. The military has recently advocated the use of unmanned aerial vehicles (UAVs), in part because of their increased endurance and the removal of humans from immediate threat. Currently these devices still require several human operators. However, there is an effort to design systems such that the current many-to-one ratio of operators to vehicles can be inverted, and even to replace all operators with fully autonomous UAVs.

UAVs were originally primarily reconnaissance devices, but some are now lethal weapons. They have been modified to deliver death in precisely targeted attacks and used in ‘decapitation attacks’, i.e., targeting the heads of militant groups. They do this effectively and reduce casualties on the side using them. The decision to take a human life normally requires ethical attention. In many cases the ‘pilots’ fly these lethal missions in Iraq and Afghanistan from control rooms in Nevada, USA. There are at least two relevant ethical problems with this type of remote control. The distances involved provide a ‘moral buffer’ for the ‘pilot’, which reduces accountability (the need for moral analysis) and may prevent mental damage to the pilot from killing others. An increase in automation has an inverse effect on accountability.

The distance and the speed of decision making also lead to ‘automation bias’. There are several types of bias which infect our decision making. Sometimes people have a confirmation bias: they seek out information which confirms their prior opinion and ignore that which refutes it. Sometimes we modify information which contradicts our ideas in order to assimilate it into our preexisting ideas. When presented with a computer solution which is accepted as correct, people may have automation bias and disregard, or not look for, contradictory information. There are automation bias errors of omission, where users fail to notice important issues, and errors of commission, where they follow a computerized directive without further analysis.

There is an effort to develop ethical decision-making models which can be fully automated in fully autonomous ethical UAVs. The UAV (robot) can choose to use force and control the lethality of the force based on rule-based systems (Arkin 2009) or value-sensitive design (Cunningham 2009).
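The rule-based idea can be illustrated with a toy gate that permits lethal action only when every hard constraint is satisfied. This is not Arkin’s actual architecture: the rules, field names, and thresholds below are invented for illustration and are far simpler than any real system.

```python
# Toy sketch of a rule-based "ethical governor" in the spirit of the
# rule-based systems mentioned above: a candidate engagement is
# permitted only if it violates no constraint. All rules and fields
# here are hypothetical, for illustration only.

RULES = [
    ("target_confirmed_hostile", lambda s: s["target_confirmed_hostile"]),
    ("no_civilians_in_blast_radius", lambda s: s["civilians_nearby"] == 0),
    ("force_proportional", lambda s: s["proposed_force"] <= s["max_proportional_force"]),
]

def permit_engagement(situation):
    """Return (allowed, violated_rules) for a candidate engagement."""
    violated = [name for name, ok in RULES if not ok(situation)]
    return (len(violated) == 0, violated)

# Hypothetical situation: hostile target confirmed, but civilians present.
situation = {
    "target_confirmed_hostile": True,
    "civilians_nearby": 3,
    "proposed_force": 2,
    "max_proportional_force": 5,
}
allowed, violated = permit_engagement(situation)
```

Even this trivial gate shows the algorithmic mode’s limitation that the abstract argues: the rules can only check what their author anticipated, which is why a purely algorithmic approach risks the errors of omission discussed above.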

The decision-making process related to UAVs is similar to the way we make ethical judgments. The management control of UAVs can be designed with several levels of autonomy, and a taxonomy of autonomous decision making is analogous to the way we make ethical decisions. The level of autonomy varies depending on external information and constraints and on the mode of reasoning used to process this information. An analysis of the strengths and weaknesses of the levels of autonomy appropriate to UAV management decisions sheds light on the nature of ethical decision making in general and on how computers should and should not be involved in those decisions.

The analysis of UAV decision support systems uses a taxonomy of levels of autonomy in decision making, an analysis of types of decision bias, and a taxonomy of moral accountability. Using these models in the analysis of approaches to UAV automated decisions, it is argued that a single-mode approach to ethical decisions, either heuristic or algorithmic, is limited and is likely to lead to poor decisions. An adequate approach to ethical decision making requires both. Further, the use of an automated algorithmic approach (implemented in software) to track and reduce the complexity of a problem needs to address automation bias and ensure the presence of ethical accountability.

REFERENCES

Arkin, R.C. (2009). Ethical Robots in Warfare. IEEE Technology and Society Magazine, 28(1).

Collins, W., & Miller, K. (1992). The Paramedic Method. Journal of Systems and Software.

Cummings, M.L. (2004). Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 1st Intelligent Systems Technical Conference, September 2004.

Gotterbarn, D. Informatics and Professional Responsibility. In T. Bynum & S. Rogerson (Eds.), Computer Ethics and Professional Responsibility.

Nissenbaum, H. (1994). Computing and Accountability. Communications of the ACM, 37(1).

Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30(3), 286-297.

Ruff, H.A., Narayanan, S., & Draper, M.H. (2002). Human Interaction with Levels of Automation and Decision-Aid Fidelity in the Supervisory Control of Multiple Simulated Unmanned Air Vehicles. Presence, 11, 335-351.

Sharkey, N. (2009). Death Strikes from the Sky: The Calculus of Proportionality. IEEE Technology and Society Magazine, 28(1).

Sparrow, R. (2009). Predators or Plowshares? Arms Control of Robotic Weapons. IEEE Technology and Society Magazine, 28(1).