AUTHOR
Suzy Jagger
ABSTRACT
This paper evaluates two studies which utilised the Defining Issues Test to measure the moral judgment capabilities of first-year undergraduate students taking a computing ethics module as part of a BSc in Computing. The first study showed small gains in moral judgment, with a significant improvement in the N2 score; however, the second study, which involved a larger sample and used an experimental design, failed to deliver a significant result. Reasons for the differing results are explored, and the conclusions suggest that a key factor may have been the mean age of students, which was lower in the second study. The study also highlights the stark difference in mean scores between this cohort and the international average, and questions certain aspects of the test’s validity with regard to its suitability for international audiences.
Introduction
There are a number of studies which utilise empirical measurement to evaluate moral competency (Daniel et al., 1997; Loe et al., 2000; Robinson et al., 2000), and most require some form of value judgment in their analysis. The Defining Issues Test is one such measurement, which analyses moral judgment capabilities using a ‘post-conventional’ approach to moral decision-making. Staehr and Byrne (2003) administered the test to a group of final-year computer science undergraduates and concluded,
There is plenty of scope for study in a wide variety of aspects of moral development in the computing and engineering professions. This research might include investigations to develop the best teaching and learning methods for intervention programs to enhance moral judgment in both students and practitioners, using the DIT to evaluate the success of professional ethics programmes or practising professionals (p.233).
Smolarski and Whitehead described approaches to introducing students to the study of computer ethical issues and considered the question of personal ethics an important one: ‘it is not an unreasonable question to ask how much a student’s attitudes have changed as a result of including such ethical material in a technical course’ (2000:260). They suggested the administration of the Defining Issues Test as a possible approach to answering this question.
Measuring Moral Judgment
The idea of measuring moral development presents immediate difficulties. How do you measure something that is affected by a host of non-developmental factors? Moreover, research designed to evaluate educational interventions in moral development should ideally measure all the processes involved in development, not just moral judgment, and how they impact on one another.
Kavathatzopoulos contends that tests which evaluate against any kind of moral philosophical framework can mislead an assessment, and considers that a more valid approach is to remove moral philosophical principles from the analytical process altogether. In his measurement model, he focuses primarily on the cognitive process involved in moral decision-making as the method of analysis. However, he acknowledges that ‘cognitively higher ethical reasoning does not necessarily lead to “better” morality because there is no moral principle in the model to define what is good and what is bad’ (1994:58).
The Defining Issues Test analyses cognitive processes in its evaluation of moral judgment. Its founder, James Rest, contends that, depending on their level of moral judgment, people will ‘interpret moral dilemmas differently, define the critical issues of the dilemma differently, and have different intuitions about what is right and fair in a situation’ (1986:196). The question is whether the test evaluates what is ‘fair’ and ‘right’ as part of its assessment method and, if so, whether this is an acceptable method of measurement. The test is a by-product of Kohlberg’s Moral Judgment Interviews, in which there is an assumption of moral universalism.
Results from the two cohorts suggest that age may have been a factor in the differing N2 results. The low mean scores in comparison with international averages are also discussed; reasons for these are explored in the context of cultural differences and the notion of moral universalism. The paper questions the premise of post-conventional thinking as a basis for measurement and also considers a number of other issues which may have had an impact on test validity.
REFERENCES
BEBEAU, M. & THOMA, S. (2003) ‘Guide for DIT-2’, Center for the Study of Ethical Development, Minneapolis
DANIEL, L. G., ELLIOTT-HOWARD, F. E. & DUFRENE, D. D. (1997) ‘The Ethical Issues Rating Scale: An instrument for measuring ethical orientation of college students toward various business practices’, Educational and Psychological Measurement, 57, 515-526
KAVATHATZOPOULOS, I. (1994) ‘Training professional managers in decision-making about real life business ethics problems: The acquisition of the autonomous problem-solving skill’, Journal of Business Ethics, 13, 379-386
LOE, T. W., FERRELL, L. & MANSFIELD, P. (2000) ‘A review of empirical studies assessing ethical decision-making in business’, Journal of Business Ethics, 25, 185-204
NARVAEZ, D. (1998) ‘The influence of moral schemas on the reconstruction of moral narratives in eighth graders and college students’, Journal Of Educational Psychology, 90, 13-24
REST, J. (1986) The Psychology of Morality, London, Praeger
ROBINSON, R., LEWICKI, R. J. & DONAHUE, E. M. (2000) ‘Extending and testing a five factor model of ethical and unethical bargaining tactics: Introducing the SINS scale’, Journal of Organizational Behavior, 21, 649-664
SCHLAEFLI, A., REST, J. R. & THOMA, S. J. (1985) ‘Does Moral Education Improve Moral Judgement? A Meta-Analysis of Intervention Studies Using the Defining Issues Test’, Review of Educational Research, 55, 319-352
SMOLARSKI, D. C. & WHITEHEAD, T. (2000) ‘Ethics in the Classroom: A reflection on integrating ethical discussions in an introductory course in computer programming’, Science and Engineering Ethics, 6 (2), 255-263
STAEHR, L. J. & BYRNE, G. J. (2003) ‘Using the Defining Issues Test for evaluating computer ethics teaching’, IEEE Transactions on Education, 46 (2), 229-234