Information Technologies, a New Global Division of Labor, and the Concept of Information Society

AUTHOR
Andrzej Kocikowski

ABSTRACT

Due to almost 30 years of research conducted by many scholars, including numerous participants in the ETHICOMP conferences, the thesis that the tele-information revolution has changed – and is still changing – all (broadly understood) life processes on earth is well documented.

Nevertheless, one fundamental question regarding the impact of the tele-information revolution still remains open. It is the question of a general theoretical analysis of the capitalist system at its present stage of development (e.g., the global economy, the world market). Most importantly, this system needs to be analyzed in light of the profound changes that have occurred in the productiveness of labor. As is well known, these changes result directly from technological development and indirectly from the development of scientific research. They are also an indirect result of the tele-information revolution, and this revolution, in turn, determines the effectiveness of the process of the production of knowledge.

One may ask, what could change in the capitalist system as a result of the revolutionary change in the collective productiveness of labor?

Before attempting to answer this question, I would like to bring up some rather obvious reminders. A very important component of nearly all production processes is knowledge. It is well known that today knowledge can be generated in basically the same way as the majority of other technically and technologically advanced products. One uses capital to finance the construction and equipment of laboratories; one hires scientists; and after some time – as in any other business – one receives the product. This product can be a technical or technological solution to an existing problem; a new technology; new materials to be used for the production of goods; a plant, a chemical, or anything else; sometimes, this product is ‘pure theory.’ The product is then sold to order, or it is offered on the market like any other merchandise.

The production of knowledge can be a more profitable business than, for instance, the production of steel or coal mining. In that case, not surprisingly, capital will move away from the production of raw materials towards the more profitable production of knowledge. Governments can guide this process using rational financial policies, immigration laws, and the like. The actions of the government of the United States serve as a good case in point here.

The knowledge needed for the production of what people need or want is diverse. This diversity results in part from the variety of ways in which products created with the use of knowledge can be applied. Another very important factor leading to the diversification of the status and value of knowledge is the significance of a particular piece of knowledge (or technology) for the economy and for the broadly understood interests of the country on whose territory and with whose money this knowledge has been created. Herein, among other things, lies the source of the importance of patent regulations and legal rules, which those in power use to maintain exclusive use of a selected technology; to obtain this goal they are even willing to violate the rights of individuals and nations and to breach international conventions.

***

Hardly anyone challenges the view anymore that some of the most obvious characteristics of today’s stage in the world’s economic development are the globalization of business ventures undertaken by corporations and super-corporations, as well as an unheard-of concentration of capital. These two phenomena are the result of decades of complex actions in the areas of economy, science, and politics.

The progressing concentration of capital is an immanent feature of the entire capitalist economy; this includes the production of knowledge. Several features immanent to the capitalist system may overlap. For instance (to name but a few):

  • The above mentioned tremendous concentration of capital;
  • The migration of capital to the knowledge producing branches (mentioned earlier);
  • The desire to control the fundamental (strategic) branches of production.

As a result of such an overlap, the global process of the production of knowledge is dominated by individuals and organizations (state-owned as well as private) who control immense capital and who are clustered together in particular geographic areas (e.g., United States, Russia, parts of Europe).

***

We return now to the question asked earlier: What could change in the capitalist system as a result of the revolutionary changes in the collective productiveness of labor? The answer is that the result of changes in the collective productiveness of labor is a new global division of labor, which is qualitatively different from the old one. (Interestingly enough, this answer doesn’t differ much from some of the predictions about the future of capitalism made in the 19th century, for instance, by Karl Marx in Das Kapital.) Obviously, this is not the only change, but it is one of the most important ones. The gist of this new global division of labor is that a certain, relatively very small, segment of the global population has control (often total) over the production of scientific knowledge and over the production of the most advanced technologies. Moreover, this production is physically located in areas controlled and protected by that same segment of the global population. This means, further, that this segment of the global population controls the most crucial instruments of change in the global collective productiveness of labor; i.e., it has a key advantage over the rest of humankind. This advantage resulted from the tele-information revolution; thanks to this revolution, the knowledge and technologies used for setting in motion the processes – economic, political, and recently also biological – which have the most profound impact on the entire global population are now produced on territories controlled and protected by a very small segment of the global population. Acknowledging this fact permits a new interpretation of the concept of Information Society, one that frees the concept from its hitherto muddiness and lack of substantial content. To present such a new interpretation of the concept of Information Society is one of the main objectives of this paper.

The Finnish eVoting Experiment: What Went Wrong?

AUTHOR
Olli I. Heimo, N. Ben Fairweather and Kai K. Kimppa

ABSTRACT

In this paper we analyze the validity of claims made in Finland about the benefits of an electronic voting (eVoting) system in the context of the recent election there. We also look at the potential harms of an eVoting system and then compare the benefits with the risks. According to our analysis of the discussion in Finland, legal, ethical and technical experts in the field see the benefits as marginal, whereas discussion in Finland and elsewhere suggests the harms can be fundamental. The main problem with the application is that it was built without carrying the underlying principles of the paper ballot over to the eVoting system.

Justifications given in Finland for the eVoting system have been the following: ‘cost savings’, ‘activating passive voters’, ‘speed and efficiency of the system’, ‘staying in the front line of ICT-using nations’, and ‘reliability of counting votes’. The argument was also made that ‘if the banking system can be made secure, why not the voting system?’ The possibility of ‘following the Estonian way’ (where Internet and mobile voting were introduced) was later hinted at, but never really offered as a justification, although if the more passive citizens are to be activated, this seems like a logical next step. All these claims are, of course, prima facie plausible. However, most of them do not stand up to close scrutiny, nor are the problems with the stated justifications the only ones with the eVoting system as it was implemented in Finland.

There are problems with the claims, some major, some minor. Cost savings from using the system are questionable at best. Information systems tend to need updates and modifications from year to year, and according to official reports, at least so far there have only been extra costs from the system (Valtiovarainministeriö, 2000-2008). Passive voters would still need to show up at the voting location (as there was no Internet or mobile voting); the novelty may thus have drawn some new voters, but this is unlikely to last. Clearly, the system is faster in giving results than the traditional paper ballot. However, the time saved – reducing the count from approximately 4 hours to 30 minutes if all (or a major part of) voters use the system – is hardly a major advantage if elections are held once every 18 months. ‘Staying in the front line’ of anything can be questioned as a value in itself; some justification for why it would be valuable is needed. The reliability of the system is questionable: problems have already been discovered with the user interface, the system itself is a black box (‘security through obscurity’), and its functioning is a mystery to all except the designers and auditors of the system. Internet and/or mobile voting might actually activate currently passive voters; however, the results from other votes in which these have been used show a jump in activity (the novelty value of the application) which soon fades to almost the same figures as before. Also, if the voters are so passive that they cannot, on a bank holiday, be bothered to go and vote – should they?

Unfortunately these are not the only problems with the system. Other problems raised include the following. Votes are kept together with the voter’s name after the vote is cast, effectively eradicating anonymous voting if those holding the keys to the ballot, at any later date, want to know who voted for whom; the implications of this are clear and could be, in a changed political climate, catastrophic. There is, unfortunately, no guarantee (checksum or similar) that the software actually used is the same as the audited one. Many critical faults were found in the audited software, but only two of these were fixed in the software used. The software used is proprietary: only a select few, after signing a very restrictive non-disclosure agreement, were allowed to verify whether the software actually is secure (see e.g. Fairweather & Rogerson, 2002, p. 15 on the need for transparency). The system is, by definition, much more complex to understand than a paper ballot, which can be explained to anyone in 15 minutes; by contrast, the software, cryptography, etc. involved in an eVoting system cannot (see e.g. Mercuri, 2001 on the difficulty of explaining such systems to laypersons). Finally, there is the possibility of hidden back doors in the software. A hostile takeover of the system (by either national or external parties) is thus much more likely than with a traditional paper ballot.
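The missing guarantee noted above, that the deployed software matches the audited build, is in principle straightforward to provide by publishing a cryptographic checksum of the audited build. A minimal illustrative sketch in Python (the build contents and names below are hypothetical, not taken from the Finnish system):

```python
import hashlib

def fingerprint(build: bytes) -> str:
    """Return a SHA-256 digest identifying a software build byte-for-byte."""
    return hashlib.sha256(build).hexdigest()

def matches_audited(deployed: bytes, audited_digest: str) -> bool:
    """True only if the deployed build is identical to the audited one."""
    return fingerprint(deployed) == audited_digest

# Digest published at audit time (hypothetical build image).
audited_build = b"voting-terminal firmware v1.0 (audited)"
published_digest = fingerprint(audited_build)

# Any later change, however small, makes verification fail.
patched_build = b"voting-terminal firmware v1.1 (audited)"
print(matches_audited(audited_build, published_digest))   # True
print(matches_audited(patched_build, published_digest))   # False
```

Such a digest only helps, of course, if observers can independently read the bytes actually running on the voting terminals, which the closed, proprietary setup described above did not allow.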

Finally, the user interface of the system malfunctioned due to insufficient testing. This led to the invalidation of votes in the three municipalities which used the eVoting system. In the media and in statements given out by government officials, this was described as user error, when in reality it was a user interface error (one actually found in the user tests, but ignored). The user interface did not clearly verify that the vote had actually been recorded, and thus the percentage of lost votes was over three times as high as the paper-ballot average (0.7% → 2.36%). Those critical of the system seized on this in their public statements with vehemence, some of them pointing out that ‘if even the user interface was this faulty, what about the actual system?’ – yet others critical of the system thought the ‘logic’ from faulty user interface to faulty system self-evident, and failed even to mention the latter aspect. This was then, predictably, twisted by those in favour of the system into ‘a minor problem with the user interface, which can be corrected easily enough’, thus leaving the actual underlying problems wholly unaddressed.

Even though the Finnish eVoting experiment is for the moment on pause, and it is unclear whether it will be continued, it may well resume without any attempt to address even those faults in the system which could be fixed, let alone to mitigate those problems that cannot be solved.

REFERENCES

Fairweather, N. B. & Rogerson, S. (2002) Technical Options Report online at http://www.local-regions.odpm.gov.uk/egov/e-voting/pdf/tech-report.pdf, accessed Jan 21st 2003

Mercuri, R. (2001) Electronic Vote Tabulation: Checks and Balances PhD thesis,
University of Pennsylvania.

Valtiovarainministeriö (2000-2008), Valtion talousarvioesitykset [Finnish Ministry of Finance, Budget Presentations], http://193.208.71.163/indox/tae/index.html, accessed Dec 1st, 2008.

Other sources used

Various newspapers (e.g. Helsingin Sanomat), public statements by government officials on television, public and private Internet forums (e.g. Effi mailing lists), etc.

Has the Indian Government Really Thought About Management of Information Systems Security?

AUTHOR
Shalini Kesar

ABSTRACT

Management of information security is important in any Electronic Government, particularly when confidential and sensitive information is recorded on a daily basis. The term ‘Electronic Government’ (EGov) refers to the use of Information and Communication Technologies (ICT) to improve the delivery of government services, facilitate interactions with business and industry, and empower citizens through access to information. Efforts to offer such services to citizens have intensified across many countries. With this, threats such as computer crime, both malicious and non-malicious, have also increased in number. Consequently, the topic of information security management is both important and topical in view of the recent statistics reported on computer crime breaches originating both outside and within organizations, even though it is argued that these ‘reported’ cases represent only the tip of a potentially large iceberg (CSI/FBI 2008).

For the last ten years, the Indian government has initiated various EGov projects at the national, state and local levels. The Ministry of Communication and Information Technology introduced the National e-Governance Plan (NeGP) to support the growth of EGov within the country. Most recently, in 2008, the Indian Government implemented a Policy of Open Standards that aims to provide a set of guidelines for the uniform and reliable implementation of EGov. In an effort to facilitate, promote, advise on and support EGov initiatives at the state and local levels, the Computer Society of India (CSI) publishes various studies on the challenges faced by the Indian government. Some of these challenges include: infrastructure; resistance to re-designing departmental processes; and lack of communication between government departments and the developers responsible for EGov (see also Mahapatra and Sahu).

Given that the Indian government has also initiated a major push towards offering its services through the Internet, it is clear that the potential for information security breaches will also continue to increase. Many examples show that India, like any other country, already faces information security breaches. The Computer Crime & Abuse Report, for example, highlighted that over 6,266 incidents of computer crime affected 600 organizations in India during 2001 and 2002 alone. Reports such as the Forensic Accounting Report (2007) point out that, given the fast pace of development in India, the level of awareness about computer crime is very low. To combat such threats, the Indian Government gave effect to a resolution of the General Assembly of the United Nations for the adoption of a Model Law on Electronic Commerce. As a result, the Information Technology Act 2000 was introduced to regulate and legalize electronic commerce. More recently, the Act was amended to cover computer crime such as hacking. However, statistics indicate that very few people have been prosecuted under this Act, and the Act has also been criticized for its complexity.

In explaining information security breaches, researchers offer alternative viewpoints. Taking into account the gravity and complex nature of ICT, one strand of studies argues that relying on technical solutions alone to secure an organization from threats like computer crime is a very ‘narrow’ approach (for example, see Vroom and Solms 2004). Although technical solutions are important, information security in general is much broader in perspective than ‘computer security’. It is for these reasons that information security researchers advocate the need to recognize both technical and social issues (for example, see Dhillon and Backhouse 2001, Siponen 2001, Kesar 2002). In trying to understand the factors that lead to absent or poorly implemented solutions, researchers believe it is also important to explore how management within organizations addresses the issue of information security. In this regard, it has been argued that one of the primary causes of the absence of appropriate solutions is complacency towards information security (Hinde 2001), which can be a major contributing factor in the mismanagement of threats such as computer crime. Hence, it can be argued that complacency towards information security, combined with inadequate or absent basic security controls, leaves little scope for developing effective solutions.

The discussion so far brings forth three fundamental issues regarding the management of information security in the context of EGov. Firstly, most cases of computer crime, for various reasons, are rarely reported. Although the extent of damage caused by information security breaches can be gauged from the ‘reported’ cases, as mentioned above, they represent only the tip of the iceberg (Parker 1998). To further compound the problem, most acts of computer crime do not catch the attention of organizations until it is too late. Secondly, there seems to be a lack of studies that take into account government officials’ perceptions and views about information security. Thirdly, there is a general underestimation of the risks involved in an increasingly electronic and connected environment within government.

Against this backdrop, the research question addressed in this paper is: “How do government officials responsible for EGov projects perceive and interpret information security policies and procedures?” It makes specific reference to one EGov project implemented at the local level in India. The case study uses the design-reality gap analysis framework, a multidimensional framework consisting of seven dimensions, namely Information, Technology, Processes, Objectives and values, Staffing and skills, Management systems and structures, and Other resources (ITPOSMO), proposed by Heeks (2000). While the prime focus of Heeks’ (2001; 2002) framework is on identifying gaps in the design and development of EGov projects, this paper uses the framework to address the research question stated above. Both primary and secondary data are used: primary data comprise semi-structured interviews (with operational and managerial staff) and surveys, while secondary data include documentation and newspaper articles.
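The seven-dimension gap analysis can be sketched numerically: each dimension's gap between design assumptions and local reality is rated, and the ratings are aggregated into an overall risk indicator. A minimal sketch in Python, where the 0-10 per-dimension rating scale follows Heeks' risk-assessment variant of the framework and all the scores themselves are invented for illustration:

```python
# The seven ITPOSMO dimensions of the design-reality gap framework (Heeks).
DIMENSIONS = [
    "Information", "Technology", "Processes", "Objectives and values",
    "Staffing and skills", "Management systems and structures",
    "Other resources",
]

# Hypothetical ratings of the gap between design assumptions and local
# reality, on a 0 (no gap) to 10 (complete gap) scale per dimension.
gaps = {
    "Information": 6, "Technology": 4, "Processes": 7,
    "Objectives and values": 8, "Staffing and skills": 5,
    "Management systems and structures": 6, "Other resources": 3,
}

assert set(gaps) == set(DIMENSIONS)  # every dimension must be rated
total = sum(gaps.values())  # larger totals indicate higher failure risk
print(f"Total design-reality gap: {total} of {10 * len(DIMENSIONS)}")
```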

To conclude, this paper contributes significant lessons for EGov implementation in India in the context of information security management. These can benefit efforts directed towards overcoming the challenges and issues involved in securing EGov implementations against increasing threats such as computer crime.

REFERENCES

Dhillon, G., and Backhouse, J. (2001). “Current directions in IS security research: toward socio-organisational perspectives.” Information Systems Journal, 11 (2): 127-153.

Heeks, R. (1999). “Better Information Age Reform. Reducing the Risk of Information Systems Failure,” In Heeks, R. (ed.). Reinventing Government in the Information Age. International Practice in IT-enabled Public Sector Reform. London: Routledge.

Heeks, R. (2001). (ed.). “Reinventing Government in the Information Age: International Practice in IT-Enabled Public Sector Reform”. London: Routledge.

Heeks, R. (2002). “E-Government in Africa: Promise and Practice,” Information Polity (7), pp. 97-114.

Hinde, S. (2001). “The weakest link.” Computers & Security, 20 (4): 295- 301.

Kesar, S. (2002). Management of computer misuse committed by employees within organisations. MPhil Thesis (Information Systems). Leicester, De Montfort University: 351.

Parker, D. (1998). Fighting computer crime: a new framework for protecting information. New York, Wiley.

Siponen, M. T. (2001). An analysis of the recent IS security development approaches: descriptive and prescriptive implications. “Information security management: global challenges in the new millennium”. G. Dhillon. Ed. Hershey, Idea Group Publishing: 125-134.

Vroom, C., and Solms, R. V. (2004). “Towards information security behavioural compliance.” Computers & Security 23 (3): 191-198.

Computer Aided Ethical IT Systems Design

AUTHOR
Iordanis Kavathatzopoulos and Mikael Laaksoharju

ABSTRACT

Usability of IT systems, defined in terms of Human-Computer Interaction (HCI), is based on research conducted in many scientific disciplines as well as inside the frame of HCI. It is understood as knowledge on how to construct and use an IT system with respect to what is significant regarding human cognition, perception, ergonomics, group processes, organizational structures, etc.

HCI is the discipline that investigates how all this can be integrated into IT systems construction and use processes. It is also a discipline that focuses on the development of methods and tools that can be used by designers to produce knowledge on how to build usable IT systems. Guidelines, standards and recommendations cannot cover all problems or produce detailed answers to concrete design and use problems. They are general and they have to be interpreted and adapted.

Ethical aspects are important too, and we have to be able to consider them. All of the above remarks are also valid for the problem of ethical usability: Computer Ethics as a scientific discipline

  • produces knowledge that can help us in our effort to achieve ethical usability of IT systems. It can point to the significant issues and provide the main principles and ethical guidelines.
  • helps in gathering relevant information, interpreting it and applying it in concrete design projects.
  • also focuses on the development of methods and tools that produce detailed knowledge on how to design ethical IT systems, for example VSD, Paramedic, etc.

There is, however, a significant difference between normal usability issues and ethical usability. In ethics, in ethical choice, and in ethical problem solving and decision making, there is another dimension which is very important. Philosophy and psychology have clearly pointed to the ability to think in the right way, and to how this ability can be developed, sustained and applied to moral problems (Kant, 2006; Piaget, 1932; Kohlberg, 1985).

Given that in ethics no one can provide detailed and ready-made answers, this dimension, i.e. ethical ability, must be considered and incorporated explicitly into an ethical usability process. This is something that does not carry the same weight for other forms of IT systems usability. It is simply not enough to have access to a body of usability knowledge or to methods producing such knowledge. In ethical usability we also need to provide support for the acquisition and use of ethical skills.

Accordingly, our methods and tools stimulate the cognitive and group/organizational mechanisms of ethical competence in combination with their function of producing knowledge of how to design ethical IT systems. EthXpert is a computerized tool based on these theoretical assumptions (for more information, see the tool’s web site at http://www.it.uu.se/research/project/ethcomp/index.php?artikel=60&lang=en or Laaksoharju & Kavathatzopoulos, in press).

The purpose of EthXpert is to help an analyst or decision maker understand how different design solutions affect the interests of each involved stakeholder. To support this understanding, the analysis is made explicit by iterating a procedure comprising three main steps. The first step is to create an overview by drawing a stakeholder network, i.e. a map of the relations between all stakeholders. Second, the impact of each stakeholder’s interests on other stakeholders is analyzed and noted. Finally, the considerations for each interest are used as the foundation for making assumptions about how the stakeholders are affected by different design solutions. Not only does this process help people to scrutinize, structure and get an overview of an ethical problem; the resulting document can also be used as a vindication of the choices that are made.
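The three-step procedure can be represented with simple data structures. The sketch below, in Python, shows one way a stakeholder network, noted impacts, and per-design assumptions might be recorded; the stakeholders, interests and design options are invented for illustration, not EthXpert's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Interest:
    description: str
    considerations: list = field(default_factory=list)  # step 2: noted impacts

@dataclass
class Stakeholder:
    name: str
    interests: list = field(default_factory=list)

# Step 1: the stakeholder network, a map of relations between stakeholders.
network = {
    ("Patient", "Hospital"): "receives care from",
    ("Hospital", "System vendor"): "buys the IT system from",
}

patient = Stakeholder("Patient", [Interest("privacy of medical records")])

# Step 2: note how other stakeholders' interests impact this interest.
patient.interests[0].considerations.append(
    "Vendor's interest in usage analytics may conflict with privacy")

# Step 3: assumptions about how each design solution affects each stakeholder.
assumptions = {
    ("local storage only", "Patient"): "privacy preserved, access slower",
    ("cloud storage", "Patient"): "privacy depends on vendor safeguards",
}
```

Iterating the loop means revisiting the network as new stakeholders surface, appending further considerations, and revising the assumptions; the accumulated records form the explicit document the text describes.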

Various ethical support systems have addressed the concern of identifying relevant information in different ways. In Paramedic Ethics (Collins & Miller, 1992) the focus is on the obligations and responsibilities of the decision maker. Based on these, the user establishes relationships between stakeholders and then identifies considerations for the different opportunities and vulnerabilities that come with alternative solutions. Finally, a negotiated social-contract alternative is evaluated as a possible compromise solution. In SoDIS (Gotterbarn, 2002; Szejko, 2002) the user first gathers extensive background information about the problem and its stakeholders and is then prompted to answer questions aimed at identifying known causes of moral problems. In ETHOS (Mancherjee & Sodan, 2004) the user is encouraged to identify the open moral questions at hand by taking the role of a moral agent, after which the utility of alternative solutions is quantified according to ethical theories.

It should be noted that the first two of these systems are intended for computer professionals working in technical development projects, while ETHOS, like EthXpert, does not target a specific audience and does not assume any specific content in the problem to be analyzed. This wide application scope makes it impossible to guide the user by asking questions about previously known sources of moral problems, so other means of raising awareness of ethical issues need to be deployed. We in fact consider this absence of a framework for issue identification a strength when it comes to widening the agenda for the problem situation.

Further, EthXpert’s omission of an imposed comparison with ethical theory, as is the case in ETHOS, forces the user to make an independent decision about the correctness of the outcome. The user is thus never lured into the false comfort of believing that a premature analysis is finished. Following the definition of autonomy, the user has to decide independently when an analysis is finished. Such a setup ensures that the responsibility for a satisfactory analysis rests with the user. Our approach has many similarities to other computerized ethical tools suggested previously, and to other ethical usability tools such as Value Sensitive Design, proposed by Friedman, Kahn and Borning (2008). However, those approaches do not focus exclusively on what psychological theory and research describe as the basis of competent ethical problem solving and decision making, namely the tension between heteronomous and autonomous moral reasoning (Kohlberg, 1985; Piaget, 1932). It follows that what we need are tools that promote autonomy and hinder heteronomy. All of the above tools are excellent for systematizing, organizing and taking control of a designer’s thinking on concrete ethical usability issues. Nevertheless, since these tools, to different degrees, urge and lead the extension of thinking to moral-philosophical considerations and other details, there is a risk of their being too complex and of missing the main goal, namely blocking heteronomous thinking. The focus should be on how to handle practical problems. That may of course also be an effect of Paramedic, SoDIS, ETHOS or VSD, but they include analysis of or comparison between different normative moral theories, and some others are even built to propose moral solutions (for example Davidrajuh, 2008). Ethical autonomy is not the focus there, nor is it considered explicitly, which means that control of this necessary ethical problem-solving and decision-making process is not secured.

EthXpert has been applied to the design of different IT systems with very positive results. With help from EthXpert, the test groups were able to extend previous analyses by identifying additional stakeholders and interests. The procedure also gave insight into how the interests of different stakeholders were interrelated. Some of the test groups especially appreciated the collaboration feature of EthXpert. An ethical analysis often brings up many issues, big and small, to consider, and it is therefore efficient if a group can cooperate in solving the problems. The tool thus also works as a means of gathering several perspectives on a problem.

Through the explicit process, the designer acquires both a better overview of the complexity of a problem and a conception of how the involved stakeholders affect, and are affected by, different solutions. Almost all of the test subjects were of the opinion that the systematic procedure of EthXpert is purposeful for acquiring higher ethical problem-solving and decision-making skills by offering a holistic overview of the ethical aspects in the design of IT systems. Despite critical remarks about the usability of the interface, many also became aware of shortcomings in a prior analysis made without the tool. This indicates that a computerized tool that guides the investigation of stakeholders’ interests, and supports the structuring of and overview over information, is helpful for designing more ethical IT systems.

REFERENCES

Collins, W. R. and Miller, K. W.: 1992, ‘Paramedic ethics for computer professionals’. Journal of Systems Software 17, 23-38.

Davidrajuh, R.: 2008, ‘A computing system to assist business leaders in making ethical decisions’, in M. Oya, R. Uda and C. Yasunobu (eds.), Towards sustainable society on ubiquitous networks; Springer: Boston, 303-314.

Friedman, B., Kahn, P. H., Jr., & Borning, A.: 2008. ’Value Sensitive Design and information systems’, in K.E. Himma & H.T. Tavani (eds.), The Handbook of Information and Computer Ethics; John Wiley & Sons, Inc.: Hoboken, NJ, 69-101.

Gotterbarn, D. W.: 2002, ‘Reducing software failures: Addressing the ethical risks of the software development lifecycle’. Australian Journal of Information Systems 9(2), 155-165.

Kant, I. Grundläggning av sedernas metafysik; Daidalos: Stockholm, 2006.

Kohlberg. L.: 1985, ‘The just community: Approach to moral education in theory and practice’, in M. Berkowitz and F. Oser (eds.), Moral education: Theory and application; Lawrence Erlbaum Associates; Hillsdale, NJ.

Laaksoharju, M. and Kavathatzopoulos, I.: in press, ‘EthXpert: The basic structure and functionality of a decision support system in ethics’. International Transactions in Operational Research.

Mancherjee, K. and Sodan, A.: 2004, ‘Can computer tools support ethical decision making?’. ACM SIGCAS Computers and Society, 34(1).

Piaget, J. The moral judgement of the child; Routledge and Kegan Paul: London, 1932.

Szejko, S.: 2002, ‘Incorporating ethics into the software process’, in I. Alvarez et al. (eds.), The transformation of organisations in the information age: Social and ethical implications, ETHICOMP 2002; Universidade Lusiada: Lisbon, 271-279.

The Problem of Teaching Ethical Theory to Computing Undergraduates

AUTHOR
Suzy Jagger

ABSTRACT

Practitioners have identified that the teaching of ethical theory presents problems for students learning ethics as part of a professional degree programme. This is particularly the case where the course is compulsory rather than optional, because students feel they did not ‘sign up’ to learn these philosophical concepts. There is thus often a balancing act in judging how much theoretical content to put into a module, and differing views on whether ethical theory should be taught at all. This study examines the extent to which students on a first-year undergraduate computing course were able to understand and apply ethical theory, using a step-by-step model, to help them identify ethical issues.

The study utilises empirical measurement from the perspective of moral development theory by incorporating three scoring methods: an ethical theory score, a moral sensitivity score and a moral judgment score. The ethical theory score was devised from coursework in which student understanding of three specific ethical theories was evaluated. The second score, the moral sensitivity score, was devised using a tiered approach to analysing students’ identification of ethical issues in a particular computing-related scenario; this measurement was an adaptation of a similar approach taken by Bebeau (1985) and Clarkeburn (2002). The third score, the moral judgment score, was devised using a well-documented psychometric test, the Defining Issues Test, developed by James Rest (1999).

The scores were correlated against each other to identify the relationships between them with regard to ethical understanding and application, identification of ethical issues and moral judgment. The study highlights the inherent problems students encounter in understanding ethical theory, but also shows a correlation between a high understanding of theoretical ethical concepts and both ethical sensitivity and judgment. This suggests either that students who have mastered the theoretical concepts are better able to identify ethical issues and make moral judgments, or the converse: that those with higher levels of moral judgment (as defined by the Defining Issues Test) and moral sensitivity are better able to understand theoretical concepts. The link suggests that, although problematic, ethical theoretical concepts should be on the curricula, but that there are issues in how to teach these theories effectively to this type of cohort.

1. Introduction

There is some published work exploring the link between professional ethics and moral development (Daniel et al., 1997; Loe et al., 2000; Robinson et al., 2000; Kavathatzopoulos, 1994), but less which evaluates specific teaching methods using multiple data sets. In the arena of computing ethics there are a limited number of studies that examine individual teaching methods in detail, touching on moral development, to determine what works and what does not; there is a clear need for further, more coordinated research (Smolarski and Whitehead, 2000; Staehr and Byrne, 2003).

This research was designed to determine the effectiveness of a step-by-step moral decision-making exercise which incorporates ethical theory to help students develop moral decision-making skills. The study utilises three measurements. Two of the scores (moral sensitivity and moral judgment) were designed to measure two of four components represented in Rest’s Four Component Model of moral development (Rest, 1984). A third measurement, the ethical theory score, was devised within the study to measure student understanding of ethical theory.

Approaches to ethics teaching

The various classical approaches to ethics teaching centre on instruction in ethical theory. Theories such as utilitarianism, deontology, social contract theory and virtue ethics are the most commonly referred to on computing courses and are briefly explained in most computing ethics textbooks. Ethical theory is often used by practitioners to aid students in evaluating dilemmas: advocating ‘least harm’ and ‘greatest amount of good’ as key phrases in determining possible solutions provides an ethical benchmark from which to work. Presenting students with options from a philosophical perspective allows a level of guidance without being prescriptive.

The use of the ethical scenario is a common approach in ethics education to aid the development of moral reasoning skills through critical analysis; it involves students analysing an ethical dilemma within the context of the profession. Baetz and Carson (1999), among others, conclude that, when approached sensitively and thoughtfully, dilemma analysis is a valuable tool which contributes to a positive learning experience. A number of professional ethics textbooks provide an assortment of dilemmas for discussion and analysis.

There are many step-by-step models in use in both business and education. An example is Wolcott and Lenk’s model (2003), in which students analyse an ethical dilemma through a series of four steps. The teaching method adopted on the course utilised this approach (with adaptations), and responses provided data for the ethical theory score used in the study.

2. Research Method

The paper provides an in-depth explanation of the research methods used to obtain the first two scores, which are correlated against each other. However, explanation concerning the third score, the moral judgment or P-score, is the topic of another paper (although its results are correlated against the scores in this paper, as they come from the same cohort and are part of the overall project).

These scores were compared against each other to determine the level of correlation between understanding and applying ethical theory, identifying ethical issues and making moral judgments.

REFERENCES

BAETZ, M. & CARSON, A. (1999) ‘Ethical Dilemmas in Teaching About Ethical Dilemmas: Obstacle or Opportunity?’, Teaching Business Ethics, 3(1), 1-12

BEBEAU, M. J., REST, J. R. & YAMOOR, C. M. (1985) ‘Measuring dental students’ ethical sensitivity’, Journal of Dental Education, 49(4), 225-235

CLARKEBURN, H. (2002) ‘A Test for Ethical Sensitivity in Science’, Journal of Moral Education, 31(4), 439-453

DANIEL, L. G., ELLIOTT-HOWARD, F. E. & DUFRENE, D. D. (1997) ‘The Ethical Issues Rating Scale: An instrument for measuring ethical orientation of college students toward various business practices’, Educational and Psychological Measurement, 57, 515-526

KAVATHATZOPOULOS, I. (1994) ‘Training professional managers in decision-making about real life business ethics problems: The acquisition of the autonomous problem-solving skill’, Journal of Business Ethics, 13, 379-386

LOE, T. W., FERRELL, L. & MANSFIELD, P. (2000) ‘A review of empirical studies assessing ethical decision-making in business’, Journal of Business Ethics, 25, 185-204

REST, J. (1984) ‘The Major Components of Morality’, in Kurtines, W. M. & Gewirtz, J. L. (Eds.) Morality, Moral Behavior, and Moral Development, New York: John Wiley & Sons

REST, J., NARVAEZ, D., THOMA, S. & BEBEAU, M. J. (1999) ‘DIT2: Devising and Testing a Revised Instrument of Moral Judgment’, Journal of Educational Psychology, 91(4), 644-659

ROBINSON, R., LEWICKI, R. J. & DONAHUE, E. M. (2000) ‘Extending and Testing a five factor model of ethical and unethical bargaining tactics: Introducing the SINS scale’, Journal of Organisational Behaviour, 21, 649-664

SMOLARSKI, D. C. & WHITEHEAD, T. (2000) ‘Ethics in the Classroom: A reflection on Integrating Ethical Discussions in an Introductory Course in Computer Programming’, Science and Engineering Ethics, 6(2), 255-263

STAEHR, L. J. & BYRNE, G. J. (2003) ‘Using the defining issues test for evaluating computer ethics teaching’, IEEE Transactions on Education, 46(2), 229-234

WOLCOTT, S. & LENK, M. (2003), Assessing Ethical Decision-Making, Steps for Better Thinking Conference, June, 2003

Measuring Moral Judgment in Computing Undergraduates Using the Defining Issues Test

AUTHOR
Suzy Jagger

ABSTRACT

This paper evaluates two studies which utilised the Defining Issues Test to measure the moral judgment capabilities of first-year undergraduate students taking a computing ethics module as part of a BSc in Computing. The first study showed small gains in moral judgment due to significant improvement in the N2 score; however, the second study, which involved a larger sample and used an experimental model, failed to deliver a significant result. Reasons for the varied results are explored, and conclusions suggest that a key factor may have been the mean age of students, which was lower for the second study. However, the study also highlights the stark difference in mean scores between this cohort and the international average and questions certain aspects of the test’s validity with regard to its suitability for international audiences.

Introduction

There are a number of studies which utilise empirical measurement to evaluate moral competency (Daniel et al., 1997; Loe et al., 2000; Robinson et al., 2000), and most require some form of value judgment in their analysis. The Defining Issues Test is one such measurement, which analyses moral judgment capabilities using a ‘post-conventional’ approach to moral decision-making. Staehr and Byrne (2003) administered the test to a group of final-year computer science undergraduates and concluded,

There is plenty of scope for study in a wide variety of aspects of moral development in the computing and engineering professions. This research might include investigations to develop the best teaching and learning methods for intervention programs to enhance moral judgment in both students and practitioners, using the DIT to evaluate the success of professional ethics programmes or practising professionals (p.233).

Smolarski and Whitehead described approaches to introducing students to the study of computer ethical issues and felt the question of personal ethics was an important consideration, ‘it is not an unreasonable question to ask how much a student’s attitudes have changed as a result of including such ethical material in a technical course’ (2000:260). They suggested the administration of the Defining Issues Test as a possible approach to answering this question.

Measuring Moral Judgment

The idea of measuring moral development conjures up immediate difficulties. How do you measure something that is affected by a host of non-developmental factors? Moreover, research designed to measure educational interventions in moral development should ideally measure all the processes involved in development – not just moral judgment – and how they impact on one another.

Kavathatzopoulos contends that tests which evaluate using any kind of moral philosophical framework can mislead an assessment, and considers that a more valid approach is to remove any moral philosophical principles from the analytical process. In his measurement model, he focuses primarily on the cognitive process involved in moral decision-making as a method for analysis. However, he acknowledges that ‘cognitively higher ethical reasoning does not necessarily lead to “better” morality because there is no moral principle in the model to define what is good and what is bad’ (1994:58).

The Defining Issues Test analyses cognitive processes in its evaluation of moral judgment. Its founder, James Rest, contends that, depending on a person’s level of moral judgment, they will ‘interpret moral dilemmas differently, define the critical issues of the dilemma differently, and have different intuitions about what is right and fair in a situation’ (1986:196). The question is whether the test evaluates what is ‘fair’ and ‘right’ as part of its assessment method and, if so, whether this is an acceptable method of measurement. The test is a by-product of Kohlberg’s Moral Judgment Interviews, in which there is an assumption of moral universalism.

Results from the two cohorts suggest that age may have been a factor in the differing N2 results. The low mean scores in comparison with international averages are also discussed; reasons for these are explored in the context of cultural differences and the notion of moral universalism. The paper questions the premise of post-conventional thinking as a basis for measurement and also considers a number of other issues which may have had an impact on test validity.

REFERENCES

BEBEAU, M. & THOMA, S. (2003) ‘Guide for DIT-2’, Center for the Study of Ethical Development, Minneapolis

DANIEL, L. G., ELLIOTT-HOWARD, F. E. & DUFRENE, D. D. (1997) ‘The Ethical Issues Rating Scale: An instrument for measuring ethical orientation of college students toward various business practices’, Educational and Psychological Measurement, 57, 515-526

KAVATHATZOPOULOS, I. (1994) ‘Training professional managers in decision-making about real life business ethics problems: The acquisition of the autonomous problem-solving skill’, Journal of Business Ethics, 13, 379-386

LOE, T. W., FERRELL, L. & MANSFIELD, P. (2000) ‘A review of empirical studies assessing ethical decision-making in business’, Journal of Business Ethics, 25, 185-204

NARVAEZ, D. (1998) ‘The influence of moral schemas on the reconstruction of moral narratives in eighth graders and college students’, Journal of Educational Psychology, 90, 13-24

REST, J. (1986) The Psychology of Morality, London, Praeger

ROBINSON, R., LEWICKI, R. J. & DONAHUE, E. M. (2000) ‘Extending and Testing a five factor model of ethical and unethical bargaining tactics: Introducing the SINS scale’, Journal of Organisational Behaviour, 21, 649-664

SCHLAEFLI, A., REST, J. R. & THOMA, S. J. (1985) ‘Does Moral Education Improve Moral Judgement? A Meta-Analysis of Intervention Studies Using the Defining Issues Test’, Review of Educational Research, 55, 319-352

SMOLARSKI, D. C. & WHITEHEAD, T. (2000) ‘Ethics in the Classroom: A reflection on Integrating Ethical Discussions in an Introductory Course in Computer Programming’, Science and Engineering Ethics, 6(2), 255-263

STAEHR, L. J. & BYRNE, G. J. (2003) ‘Using the defining issues test for evaluating computer ethics teaching’, IEEE Transactions on Education, 46(2), 229-234