History Repeats Itself: The Case of Computer Fraud within French Bank Société Générale

AUTHOR
Shalini Kesar

ABSTRACT

The problem of computer crime continues to increase across the world (CSI/FBI 2007). The Audit Commission Report (2001) broadly categorized computer crime into fraud; theft; use of illicit software; invasion of privacy; hacking; sabotage; and viruses. The report defined computer fraud as unauthorized input or alteration of input; destruction, suppression or misappropriation of output from a computer process; alteration of computerized data; or alteration or misuse of programs, excluding virus infection. In other words, it is a deliberate misappropriation by which an offender tries to gain unauthorized access to an organization’s information systems. The misappropriation itself may be opportunistic, pressured, or a single-minded, calculated plan, and can vary from simple acts to serious crimes. This definition of computer fraud is clearly broad in scope. Hence, for the purpose of this paper, fraud is further classified into three types: input fraud, throughput fraud and output fraud (Backhouse and Dhillon 1995).

The extent of the damage caused by computer fraud can be gauged from various reports and surveys (Audit Commission 2005; DTI 2006). The most recent Computer Security Institute survey (CSI/FBI 2007) stated that the average annual loss reported by respondents increased to $350,424 from $168,000 the previous year. Further, almost one-fifth (18 percent) of respondents suffered one or more kinds of security incident. One of the most recent cases of computer fraud, and one of the biggest trading frauds in history, was committed by a single futures trader, 31-year-old Frenchman Kerviel, within France’s second-largest bank, Société Générale. Although the illicit trading was claimed to be simple in nature, it was apparently concealed by “sophisticated and varied techniques”ii that involved circumventing the bank’s multilayered security systems for over a year.iii The fraudulent activities resulted in losses mounting as high as $7.14 billion. Kerviel has been charged with fraudulent falsification of banking records, use of such records, and computer fraud.iv Although the monetary loss uncovered within Société Généralei is very large, illicit trading involving computer fraud is not new. Almost thirteen years ago, a similar incident occurred within Barings Bank, where a single rogue trader, Leeson, committed computer fraud to engage in illicit trading that resulted in a loss of £827 million. Initial media reports on Barings Bank seemed to focus only on Leeson’s illicit trading and ‘blamed’ him for the collapse of the bank. Later, the Bank of England Report (1995) revealed that Leeson’s computer fraud took place during a period of flux that involved a combination of ambitious internal restructuring and the integration of the bank’s banking and broking operations. Consequently, Leeson was able to commit computer fraud to engage in illicit trading that escaped management for almost three years, until he was caught.

Against this backdrop, this paper argues that a failure to examine wider structural issues within organizations, and a focus only on the offenders, can leave Information Systems (IS) vulnerable. It is therefore important that organizations understand the complex nature of computer fraud and the changing environment of organizations today, which leads to the disregard or inadequacy of basic IS security controls (Audit Commission 2005). With some exceptions, most traditional studies on managing computer fraud adopt a functionalist viewpoint, thus failing to recognize that ‘suitable opportunities’ arise in, and as a consequence of, the daily activities of an organization’s working environment. This is because management mostly relies on technical solutions when trying to combat computer fraud. Given that organizations today do not follow a strict hierarchical structure, relying on technical solutions alone seems an inapt approach (Parker 1998; von Solms 2001). From this viewpoint, IS security researchers emphasize the importance of ‘human factors’ when dealing with the management of computer fraud (for example, see Hitchings 1995, 1996; Dhillon 1997; Kesar and Rogerson 1998; Dhillon and Backhouse 2001; Siponen 2001; Stanton et al. 2005; Kesar 2005).

In criminology, researchers such as Clarke and Cornish (2000) assert that offenders are influenced by three main groups of variables: background factors, current life circumstances and situational variables. This assertion provides useful insight into the underlying reasons for the occurrence of computer fraud. Keeping this in mind, this paper looks at two cases in which a single ‘trusted’ trader circumvented security controls to engage in computer fraud (input fraud). Ironically, the two cases are similar yet occurred thirteen years apart. The first case, Barings Bank, occurred in 1995, whereas the second occurred this year at Société Générale. The paper draws on the Crime Specific Opportunity Structure (CSOS) model by Willison (2000a, 2000b, 2002) to demonstrate how the working environment within each bank may have provided ‘suitable opportunities’ for the offenders to engage in computer fraud. The CSOS model originates from a new school of thought, Situational Crime Prevention (SCP), which incorporates the dispositional variables of traditional criminology (Clarke 1997). The model demonstrates interactions between the degree of guardianship, the targets, the offender and facilitators, which together constitute a viable opportunity in terms of perceived risks, effort and rewards. Using CSOS as a theoretical framework for specific crimes will enhance our understanding of the various factors that underpin such intentional illicit acts within organizations. Moreover, the increasing sophistication of both technology and IS users in today’s networked organization makes it vital that organizations understand the underlying reasons for computer fraud, particularly fraud committed by ‘trusted’ employees. A flawed understanding of IS security affords little scope for developing effective solutions for managing threats such as computer fraud committed by employees. This paper contributes to the limited existing research within IS security by departing from the ‘narrow and technical perspective’ traditionally taken in this context. It therefore takes into account wider organizational and structural issues, rather than focusing on the offender alone, in order to understand computer fraud from a criminological perspective.

REFERENCES

Audit Commission (2001). Your business@risk: an update of IT abuse 2001, London, Audit Commission Publications, HMSO.

Audit Commission (2005). London, Audit Commission Publications, HMSO.

Backhouse, J. and Dhillon, G. (1995) “Managing computer crime: a research outlook.” Computers & Security 14 (7): 645-651.

Bank of England Report (1995). Report of the Board of Banking Supervision: Inquiry into the circumstances of the collapse of Barings, London, HMSO.

Clarke, R., Ed. (1997). Situational crime prevention: successful case studies. Albany, NY, Harrow and Heston.

Clarke, R., and Cornish, D. (2000). Rational choice. Explaining crime and criminals: essays in contemporary criminological theory. R. Paternoster and R. Bachman. Los Angeles, CA, Roxbury Publishing Company: 23-42.

CSI/FBI (2007). Computer Crime and Security Survey. San Francisco, CSI.

Dhillon, G., and Backhouse, J. (2001). “Current directions in IS security research: toward socio-organisational perspectives.” Information Systems Journal 11 (2): 127-153.

DTI (2006). Information Security Breaches Survey 2006, Coopers & Lybrand, Department of Trade and Industry, London. www.security-survey.gov.uk.

Hitchings, J. (1995). “Deficiencies of the traditional approach to information security and the requirement for a new methodology.” Computers & Security 14 (5): 377- 383.

Hitchings, J. (1996). A practical solution to the complex human issues of information security design. Information systems security: facing the information society of the 21st century. S. K. Katsikas and D. Gritzalis. London, Chapman & Hall: 3-12.

Kesar, S., and Rogerson, S. (1998). Attitudinal and normative components in information misuse: the case of Barings Bank. Effective utilization and management of emerging information technologies. M. Khosrowpour, Idea Group Publishing: 60-67.

Kesar, S. (2005). Interpreting Computer Fraud Committed by Employees. Ph.D. Thesis (Information Systems). Informatics Research Institute (IRIS), University of Salford, Salford, UK: 311.

Parker, D. (1998). Fighting computer crime: a new framework for protecting information. New York, Wiley.

Siponen, M. T. (2001). “On the role of human morality in information systems security.” Information Resources Management Journal 14(4): 15-23.

Stanton, J. M., Stam, R.K., Mastrangelo, P., and Jolton, J. (2005). “Analysis of end user security behaviors.” Computers & Security 24 (2): 124-133.

von Solms, B. (2001). “Corporate governance and information security.” Computers and Security 20 (3): 215- 218.

Willison, R. (2000a). “Understanding and addressing criminal opportunity: the application of situational crime prevention to IS security.” Journal of Financial Crime 7 (3): 201-210.

Willison, R. (2000b). Reducing computer fraud through situational crime prevention. Information security for global information infrastructures. S. Qing and J. H. P. Eloff. Eds. Boston, Kluwer Academic Press: 99-109.

Willison, R. (2002). Opportunities for computer abuse: assessing a crime specific approach in the case of Barings Bank. PhD Thesis (Information Systems). London, London School of Economics.

ENDNOTES

i Source: http://www.businessweek.com/ap/financialnews/D8UCCE4O1.htm (date of access January 23, 2008).

ii Source: http://news.bbc.co.uk/1/hi/business/7206270.stm (date of access January 25, 2008).

iii Source: http://news.bbc.co.uk/1/hi/business/7206270.stm (date of access January 22, 2008).

iv Source: http://www.guardian.co.uk/business/2008/jan/24/creditcrunch.banking/ (Date of access January 24, 2008). Also see http://www.forbes.com/feeds/ap/2008/02/06/ap4623008.html

Ethical Assessment of Future-Oriented Design Scenarios

AUTHOR
Veikko Ikonen and Eija Kaasinen

ABSTRACT

This paper shares experiences of the ethical assessment of future-oriented design scenarios. Scenario-Based Design has been widely applied in concept and product development processes. The approach has been used especially in the development of Information and Communication Technologies, though with different variations and modifications. Ethical issues have usually been tackled at some level in our scenario assessments; in user evaluations of scenarios especially, the ethics of design has been an important issue to discuss and research. Lately, questions concerning the ethics of ambient intelligence have also come strongly to the fore. The ethical challenges of mobile-centric ambient intelligence are multifaceted: the technology itself should be safe and secure, the applications should be safe and secure, and human values such as privacy, self-control and trust should not be violated by either the technology or the applications. These ethical issues are frequently raised as important factors in the user requirements definition process. However, it seems that there is no validated procedure for ethical assessment in the early concept development phase of the product development process. Our aim is to meet this challenge by developing a coherent approach and methodology for the ethical assessment of design scenarios. In this paper we give examples from our previous projects where ethical issues have been considered as a design requirement, and focus in particular on the ongoing Minami project, in which we have introduced ethical guidelines as a design instrument.

In Praise of Moral Persuasion

AUTHOR
Chuck Huff

ABSTRACT

What is it that we teach when we convene courses in computer ethics? Documents like the Hastings Center Report (Callahan, 1980) encourage us to teach ethical reasoning skills, and we certainly do so. More focused curriculum guides (Huff & Martin, 1995) also present a host of intermediate-level knowledge (e.g. privacy, software safety, intellectual property) that we teach. But we also teach a hidden curriculum (Meighan, 1986) about “an approach to living and an attitude to learning.” Simply by offering (or perhaps requiring) the computer ethics class we say something about what our department values.

Having done this, though, we are often curiously silent about these values in the class itself, preferring the more prosaic skills and knowledge curriculum. I propose here to praise the hidden curriculum of moral persuasion, to make a few steps towards understanding that curriculum, and by praising it to make that curriculum more respectable.

First, I would like to introduce two concepts that might help us better understand the role of moral persuasion in our teaching of professional ethics. They both come from recent work I have been doing in tracking the moral careers of what we call moral exemplars in computing (Huff & Rogerson, 2005; Huff, 2008). These are people who are well known for their influential commitment to good computing, in both senses of the term.

The first concept is that of moral career itself. What we have been finding is that, far from being uniform, our moral exemplars in computing exhibit a striking variety of ways they integrate their moral action in their professional work. Some have centered their careers on using their skills to design tools that help the handicapped, or that make personal data in commercial transactions safe. Others have worked tirelessly to reform the field of computing, supporting the careers of women and other minorities in the field, or agitating for changes in privacy or intellectual property laws. None started out their career saying to themselves “I will become a moral exemplar by acting in such and such a way,” but all regularly integrated moral and ethical concerns into their daily work. And by doing so, they shaped a moral career, a career with a trajectory influenced by their moral commitments.

The second concept is that of moral ecology. Part of what I mean by moral ecology is the somewhat stable, but constantly negotiated set of values that are agreed upon in a profession. But another part is the highly variable local ethical climates over which we have varying degrees of control. Our exemplars charted their moral careers through widely varying moral ecologies at national, organizational, and work group levels. Some used the agreed upon values of the profession (e.g. safety, user-centered design) to guide and motivate their work. Others attempted to influence the values of their profession, or of their nation, by calling attention to values (like gender inclusion) that might not initially have been thought of as central to the profession. All of them attempted to construct local moral ecologies that would help them pursue their projects.

Even our teaching is embedded in a moral ecology that is influenced by our students’ goals, our goals, the values and procedures of our departments and institutions, and the values of our profession. By teaching we are acting in a moral ecology and influencing that ecology. To opt out, and only teach “skills and knowledge,” is in fact to teach a hidden curriculum of values or to leave the values curriculum up to others.

As instructors, we can choose to influence the moral ecology of the classroom, to use moral persuasion in our teaching in a way that is self-reflective and organized. Here is a concrete example. Most of us spend some classroom time teaching codes of ethics. If we view these codes as expressions of the moral ecology of the professions, we will encourage students to be active participants in dialogue with the codes, agreeing with them, querying them, critiquing them, finding some pieces more relevant to their work than others. In this dialogue with respected members of the profession (represented by the code) they appropriate the commitments of the field into their moral career, into their sense of themselves as a professional. We can say much the same thing about work with cases or work with any of the projects students encounter across their curriculum.

Thus, a central goal of the hidden curriculum should be to help students discover the values of the profession and to integrate those values into their sense of themselves as professionals. Though I refer to “moral persuasion,” it should be clear that our students are not mere passive recipients of influence, but are actively constructing moral careers for themselves. Psychology long ago gave up on the idea that we are passive agents of social influence by “the society” or even “the teacher.” We actively seek meaning in our lives, we decide to adopt or discard the influence attempts of others, and we look for guidance from others for appropriate behavior and values. Whether they are self-aware or not, students are constructing moral careers, and are taking cues from their moral ecology (which includes us as teachers) about how to construct those careers.

Though the skills and knowledge we teach are necessary for ethical action (Blasi, 1980; Keefer & Ashley, 2001), they are not sufficient. It is the sense of self that motivates moral action (Blasi, 1980), or the sense of the professional self that drives the ethical commitments and moral careers of computer professionals (Colby & Damon, 1992; Huff, 2008).

We need to think systematically about how we influence this sense of self, which values we want to teach, how to teach them, and how to measure their influence on our students’ moral careers. I think we can do this, and we can do it without abusing our power. Indeed, to not do it might itself be an abuse of power and an abdication of our responsibility in the moral ecology.

REFERENCES

Colby, A. & Damon, W. (1992). Some do care: Contemporary lives of moral commitment. New York, NY: Free Press.

Keefer, M.W., & Ashley, K.D. (2001). Case-based approaches to professional ethics: A systematic comparison of students’ and ethicists’ moral reasoning. Journal of Moral Education, 30(4), 377-398.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88, 1-45.

Callahan, D. (1980) Goals in the Teaching of Ethics, in: Callahan, D. & Bok, S. (eds) Teaching Ethics in Higher Education. Plenum, New York, pp. 61-74

Huff, C. W. (February, 2008). Good Computing: A Pedagogically Focused Model of Virtue in the Practice of Computing. Target paper for panel at the Association for Practical and Professional Ethics, San Antonio, TX.

Huff, C. W., & Martin, C. D. (1995). Computing Consequences: A framework for teaching ethical computing. Communications of the Association for Computing Machinery. 38(12), 75-84.

Huff, C. W. & Rogerson, S. (September, 2005). Craft and reform in moral exemplars in computing. Paper presented at ETHICOMP2005 in Linköping, Sweden.

Meighan, R. (1986). A sociology of educating. New York: Saunders College Publishers.

Privacy Enhancing Technologies: An Empirical Study into their Adoption and Usage in UK Organisations

AUTHOR
Richard Howley and Gilesh Pattni

ABSTRACT

Privacy Enhancing Technologies (PETs) have been widely promoted for more than a decade as offering technological safeguards for data privacy and security. Indeed, the development of a European system for data protection resulting from the 1995 European Directive, and the UK’s 1998 Data Protection Act, were predicated on the presumption that PETs would feature prominently in providing privacy enhancements. Given this background, it is somewhat surprising that the UK Information Commissioner recently announced that, since the security breach at HM Revenue and Customs in November 2007, almost 100 data breaches by public, private and third sector organisations have been reported. Some of the most notable of these occurred in financial institutions, government agencies and UK National Health Service organisations and involved the loss of unencrypted laptops, computer disks and memory sticks. Several high-profile losses occurred while unencrypted data was in transit from one location to another; precisely the circumstances that PETs were expected to protect us from. Clearly, there are UK organisations that are either not using PETs at all or not using them effectively to protect our data. Given the proposition that PETs would play a significant role in protecting data, the authors of this paper were somewhat surprised to witness significant data breaches occurring with what appeared to be alarming regularity. Notwithstanding the methodological impurities surrounding the use of the term ‘regularity’, it was noted that a large number of data breaches were being reported to the British public and that many of these breaches appeared to be of a type that the application of a basic set of PETs would secure against. This is the context that gave rise to the research reported in this paper.
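To illustrate the kind of basic safeguard at issue, the following is a minimal sketch, in Python, of encrypting a file before it is carried or sent off-site. It is not drawn from the paper: it assumes the third-party cryptography package, and the file names and key handling are illustrative only.

```python
# Minimal sketch of a basic PET: encrypting data before it leaves the
# organisation on a laptop, disk or memory stick. Assumes the third-party
# 'cryptography' package; file names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice the key lives in a key store,
cipher = Fernet(key)                   # never alongside the data it protects

with open("customer_records.csv", "rb") as source:
    plaintext = source.read()

token = cipher.encrypt(plaintext)      # authenticated symmetric encryption

with open("customer_records.enc", "wb") as target:
    target.write(token)                # only the ciphertext travels off-site

# At the destination, decryption requires the key:
# plaintext = Fernet(key).decrypt(token)
```

Under this sort of arrangement, a storage device lost in transit discloses nothing usable without the key, which is precisely the protection the breaches described above lacked.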

This research reviews the literature that exists in the area of PETs, focusing on their types, uses and levels of adoption. The findings of the literature review are reported and show the ways in which a variety of PETs contribute to data privacy and security, where and when they can be used, and the organisational context of PETs. Significantly, no literature was found on current levels of PET adoption, and it was this omission that led to the development of a research instrument to explore further the nature and extent of PET usage and adoption in the UK.

The research instrument, a questionnaire, was designed and piloted before being used with respondent groups. The questionnaire sought insights into three related aspects of PET usage:

  • Uses of PETs, including a consideration of which PETs are used and why.
  • The privacy context of PETs, focusing on policies and procedures.
  • Evaluation of PETs as providing effective privacy protection.

The analysis of the data in each of these categories is supported by an analysis of the overall respondent profile of those contributing to this research.

The main findings emerging include:

  • The respondent group consists mainly of IT managers in large organisations; interestingly, very few replies came from government agencies or financial institutions.
  • The majority of organisations taking part in this research reported their use of PETs and a profile is offered showing which PETs are used and the degree of their adoption.
  • In some organisations there is uncertainty as to what PETs actually are and what their benefits are. Representatives from other organisations, however, were able to list the benefits they feel PETs provide; these are presented and reviewed.
  • The obstacles to PET adoption are identified and explored. Clearly, this is an important issue if wider adoption and usage is seen as desirable.
  • Amongst those organisations that use PETs, their application to business processes and/or data is not uniform. The implications of this apparent ‘optionality’ are considered in more detail in the full paper.
  • The usage of removable storage devices is frequently regulated and controlled and these controls are identified and their effectiveness explored.
  • The range of protection procedures applied to laptops being taken off-site is identified and evaluated with regard to any tensions that may exist between business process efficiency and data privacy and protection.
  • Procedures for updating PETs and/or adopting new PETs are reported.

This research is a timely contribution to the body of privacy literature. It also serves as a basis for assessing the contribution of PETs in safeguarding our data privacy and security. In a world in which citizens are increasingly processed and/or identified by digital representations, the technology identified as a key provider of privacy has to be known, understood and applied. This research concludes that much more needs to be done before UK organisations can fully benefit from PETs and before their data subjects can rest assured that they are fully protected.

The Person as Risk, The Person at Risk

AUTHOR
Jeroen van den Hoven and Noëmi Manders-Huits

ABSTRACT

The use of computer supported modeling techniques, computerized databases and statistical methods in fields such as law enforcement, forensic science, policing, taxation, preventive medicine, insurance, and marketing greatly promotes the construal of persons “as risks”.

In the “persons as risk” discourse, persons are characterized in terms of probabilities: probabilities that they will commit crimes (security), that they will like commercial products (marketing), that they are prone to accidents (safety), that they are likely to exhibit certain types of unhealthy behavior (preventive medicine), or that they constitute moral hazards for insurance companies (insurance).

In the first part of our paper we present the historical background to this view by discussing two results of the work of Ian Hacking. First, a thesis of historical ontology: Hacking has argued that “people can and have been made up”, sorted and stereotyped – e.g. the homosexual, the criminal, the repeat offender, the bad credit risk – on the basis of historically rooted classifications and concepts. Second, Hacking and others have extensively documented the emergence and prevalence of thinking in terms of probabilities about a broad range of phenomena over the last two centuries.

These two developments in conjunction, we argue, have given rise to a view of human beings which tends to conceive of them in terms of classifications, statistical categories, profiles and probabilistic models. In the field of identity management and profiling, identities and persons are construed as dynamic collections of personal data. Individuals are routinely and increasingly treated on the basis of probabilistic representations. The treatment they receive, the things they are entitled to, their rights, accountabilities, and the opportunities they are given as well as the limitations that are imposed upon them are shaped by the way their identities are construed and used.
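To make the abstract point concrete, the following is a small, purely hypothetical sketch (in Python) of how such a probabilistic construal reduces a person to a handful of recorded attributes and a single score. The attributes and weights are invented for illustration and are not taken from any real profiling system or from the paper.

```python
import math

# Hypothetical "person as risk": a profile is a few recorded attributes,
# and the person is summarised as one probability-like score.
def risk_score(profile, weights, bias=-2.0):
    z = bias + sum(w * profile.get(name, 0) for name, w in weights.items())
    return 1 / (1 + math.exp(-z))      # logistic transform into the range 0..1

profile = {"missed_payments": 2, "address_changes": 3, "age_under_25": 1}
weights = {"missed_payments": 0.8, "address_changes": 0.3, "age_under_25": 0.5}

print(round(risk_score(profile, weights), 2))   # e.g. 0.73: the person, as a number
```

Everything such a model “knows” about the person is contained in the profile; the reasons, aspirations and self-presentations discussed below have no place in it.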

In the second part of the paper we argue that the statistical and probabilistic construal of persons is fundamentally incomplete, in a way that gives rise to questions about the moral justification and the limits of its use and application. First, such construals do not accommodate conceptions of the moral person as acting on moral reasons; second, they fail to accommodate a person’s self-presentations.

We confront the IT-induced shift to a view of persons “as risks” with an idea – following Bernard Williams – that we have termed ‘moral identification’. Persons need to be able to ‘morally identify’ to some extent with the ways in which they are represented by others, and they legitimately desire to be identified by others as such, i.e. as identifying themselves with identity ideals in particular ways. Persons have aspirations, higher-order evaluations and attitudes, and they see the things they do in a certain light. Representations of these aspects of persons are missing when they are represented as statistical elements, liabilities and risks in databases and computer models.

In order to examine the moral problems raised by conceiving of persons as risks, the technologies that enable and support the creation and management of representations need to be understood in two different ways. On the one hand, they are used for the descriptive mapping of (user) identities and matching characteristics, on the basis of which profiles are created. These characterizations are extremely useful for scientific purposes and support epidemiological, demographic and social science research. On the other hand, these characterizations are also used for practical purposes, as a grid of categorical profiles into which data subjects are classified and by which they are constrained in particular ways. Identities or representations of persons are fossilized – carved in stone, as it were – and may lead to erroneous and morally objectionable classifications in marketing, welfare, criminal justice, preventive medicine, insurance and finance, and to unjustified constraints on the actions and agency of persons.

We provide several suggestions for value sensitive design of profiling technology to accommodate these problems.

Googling the Future: The Singularity of Ray Kurzweil

AUTHOR
David Sanford Horner

ABSTRACT

Stories of social, cultural and economic futures underwritten by the latest advances in technology are a familiar trope (Seidensticker, 2006; Selin, 2007). In this paper I will extend my recent work, in which I have argued that the only rational response to the claims of ‘futurism’ should be one of profound scepticism (Horner 2005; Horner 2007a; Horner 2007b). It might be said that some claims are better than others: a populist futurism may be easily brushed aside, but more serious, evidentially based work surely must be taken more seriously. However, what I hope to show is that the problems that beset forecasting are not simply matters of inadequate technique and poor evidence; the enterprise is conceptually and logically flawed, and this itself has important ethical implications. This seems a miserable conclusion, given that foresight has been presented as a principal means by which we might deal with the threats of uncertain futures. To illustrate this argument I analyse in some detail Ray Kurzweil’s The Singularity is Near: when humans transcend biology (2005). The book is striking in the reach and depth of its projections (taking us well beyond Web 2.0!) in envisaging a future in which information technologies have developed exponentially to create the conditions for humanity to transcend its biological limitations. Kurzweil describes this as ‘the Singularity’: “…It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian or dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself” (Kurzweil, 2005, p.7). He makes the remarkable claim that: “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality” (Kurzweil, 2005, p.9). It is important to note that Kurzweil says that this ‘will’ happen; there is not even a cautionary ‘may’. The book is suffused with the sense of historical inevitability famously criticized by Isaiah Berlin (1954), the ethical implication of which is a profound loss of human freedom. What underlies this vision is an idea with a long pedigree: that history conforms to natural (or supernatural) laws which in themselves constitute the basis for knowledge about future states of the world. In The Singularity is Near, Kurzweil indeed presents ‘a theory of technological evolution’ as justification for the shape of future human society. In the manner of Karl Marx or Herbert Spencer, he rejects a so-called linear view of historical development in favour of a vast historical canvas of six epochs driven, in a law-like manner (‘the law of accelerating returns’), by the exponential growth of information and technology. I argue that this view is flawed in at least two fundamental respects. First, it disregards the notion of limiting factors, which apply even in the case of the growth of science and technology (Barrow, 1999; Edgerton, 2006). Second, it mistakes phenomena that may be temporary, local and limited for a metaphysical principle (Seidensticker, 2006, pp. 63-79). But the problems raised here are also ethical. The danger of this kind of futurism is that it radically devalues human choice and our collective ability to shape technological futures (Flew, 1967).
Kurzweil’s account is remarkable in its blindness to the long history of technological foresight failing to deliver on its promises (Cole et al. 1974). I argue the brutal case that social foresight and technological forecasting are essentially fraudulent activities, which at best are temporarily delusive and at worst may constitute a waste of valuable human and material resources. Following Edgerton’s (2006) account, we need an ethics of ‘technology-in-use’ rather than a hypostatization of so-called technological laws of development. I conclude more hopefully with a brief indication of where we might look for methods of dealing with uncertainty that do not depend on undependable and indefensible knowledge claims about future states of the world.

REFERENCES

Barrow, J.D., 1999. Impossibility: the limits of science and the science of limits. London: Vintage.

Cole, H.S.D., Freeman, C., Jahoda, M., and Pavitt, K.L.R., 1974. Thinking about the future: a critique of the Limits to Growth. London: Chatto and Windus.

Berlin, I., 1954. Historical inevitability. Oxford: Oxford University Press.

Edgerton, D., 2006. The shock of the old: technology and global history since 1900. London: Profile Books.

Flew, A. 1967. Evolutionary Ethics. London: Macmillan.

Horner, D.S., 2005. Anticipating ethical challenges: is there a coming era of nanotechnology? Ethics and Information Technology, 7, pp. 127-138.

Horner, D.S., 2007a. Forecasting ethics and the ethics of forecasting: the case of nanotechnology. In: T.W. Bynum, K. Murata, and S. Rogerson, eds. Glocalisation: Bridging the Global Nature of Information and Communication Technology and the Local Nature of Human Beings. ETHICOMP 2007, Vol. 1. Meiji University, Tokyo, Japan, 27-29 March 2007. Tokyo: Global e-SCM Research Centre, Meiji University, pp. 257-267.

Horner, D.S., 2007b. Digital futures: promising ethics and the ethics of promising. In: L. Hinman et al. eds. Proceedings of CEPE 2007: The 7th International Conference of Computer Ethics: Philosophical Enquiry. University of San Diego, July 12 – 14, 2007. Enschede, The Netherlands: Center for Telematics and Information Technology, pp. 194-204.

Kurzweil, R., 2005. The singularity is near: when humans transcend biology. London: Duckworth.

Selin, C., 2007. Expectations and emergence of nanotechnology. Science, Technology and Human Values. 32 (2) March, pp. 196 – 220.

Seidensticker, B., 2006. Futurehype: the myths of technology change. San Francisco: Berrett-Koehler.