Management of Cybercrime in Electronic Government: E-Awareness Training Model Implemented in India.

AUTHOR
Shalini Kesar, Ph.D.

ABSTRACT

Introduction

The main motivation for this paper comes from a previous presentation at ETHICOMP2010, where key challenges in the area of cybercrime and Electronic Government (EGov) in India were highlighted. It was seen that one of the main challenges faced by government officials at the local level was their lack of awareness about the management of cybercrime in general. It was also found that the general perception among officials was that the implementation of technical controls can minimize the risks associated with cybercrime. Contrary to this mindset, recent research indicates that technical controls alone will not help in the management of cybercrime. In addition to technical controls, equal importance has to be given to the formal and social issues associated with Information and Communication Technologies (ICT). With this in mind, this paper draws on the model of Kritzinger and von Solms (2010) to develop an E-Awareness Training Model for the EGov context in India. It specifically aims to understand how including technical, formal and social issues within this model can help facilitate changes in the perceptions of government officials about the management of cybercrime.

Cybercrime and EGov in India continue to be two important topics. The “push” to use ICT in high-impact EGov projects in India still aims to “transform the corporation’s commitment to be citizen centric, provide cost-effective services and enhance governance through improved access to accurate information and transparent and responsive democratic institutions”. On the other hand, the dependency on and use of ICT brings new and dangerous cybercrime-related risks. The vulnerabilities that arise when using ICT in the context of EGov, if not addressed, will continue to be of significant concern.

Against this backdrop, this paper focuses on one particular EGov project initiated within the Municipal Corporation (MC) in the western part of India. The Municipal Corporation is one of the largest and leading Urban Local Governance Bodies. When it comes to safeguarding information, most of the reports on EGov projects within the MC discuss key components of the technical architecture, which mostly centers on technical controls. Hence, it is hoped that the initial findings of this paper will not only exemplify the importance of the technical, formal and social issues of ICT but will also contribute towards the Indian EGov agenda by reforming the project rationale in the context of the management of cybercrime.

Current Situation in EGov and Cybercrime in India

Overall, the government of India’s vision is to provide an “economically vibrant and sustainable city with diverse opportunities and rich culture; where all citizens enjoy a safe environment with good connectivity”. The National eGovernance Plan in general aims, for example, to bring services closer to the locality and to ensure efficiency, transparency and reliability of services. The MC’s agenda includes promoting people-centric administration, reducing delays and ensuring promptness in the delivery of services. Existing EGov MC projects in the western part of India have always been at the forefront of ICT enablement of services and departments to render faster and more efficient services to citizens.

On the other hand, statistics indicate that cybercrime in India is on a continual rise. For example, under the Information Technology Act, a total of 420 cases such as hacking computer systems or forging digital signatures were reported in 2009 (Indian Gazette, 2011), up from only 142 reported cases in 2006. Other global reports, such as Crime Online, reflect that the growth of cybercrime in countries such as India is of particular concern.

Method

The E-Awareness Model proposed by Kritzinger and von Solms (2010) suggests two key components: an E-Awareness Portal and Regulating Services. The main function of the portal is to provide up-to-date content regarding cybercrime risks within the local-level EGov environment (see figure below). This addresses and enhances the awareness of government officials who are responsible for implementing EGov services. Recent work by Naavi in India strengthens the argument presented in this paper.
[Figure 1]
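To make the structure of the model concrete, the following is a minimal, hypothetical sketch in Python of how the two components could interact. The class and method names are our own illustration rather than part of the Kritzinger and von Solms (2010) model; it assumes, in the spirit of their awareness-enforcement idea, that the regulating component withholds access to EGov services until an official has completed the current versions of all mandatory awareness modules.

from dataclasses import dataclass, field

@dataclass
class AwarenessModule:
    """One unit of up-to-date cybercrime-awareness content held on the portal."""
    topic: str              # e.g. "phishing", "handling of citizen data"
    version: int            # bumped whenever the content is refreshed
    mandatory: bool = True

@dataclass
class EAwarenessPortal:
    """Hypothetical sketch: the portal holds current awareness content, and the
    regulating check gates an official's access to EGov services until all
    mandatory modules have been completed in their latest version."""
    modules: list = field(default_factory=list)
    completed: dict = field(default_factory=dict)   # official_id -> {topic: version}

    def record_completion(self, official_id: str, module: AwarenessModule) -> None:
        self.completed.setdefault(official_id, {})[module.topic] = module.version

    def may_access_services(self, official_id: str) -> bool:
        done = self.completed.get(official_id, {})
        return all(done.get(m.topic) == m.version
                   for m in self.modules if m.mandatory)

portal = EAwarenessPortal(modules=[AwarenessModule("phishing", 2),
                                   AwarenessModule("password hygiene", 1)])
portal.record_completion("official-42", portal.modules[0])
print(portal.may_access_services("official-42"))    # False until every module is done

The point of the sketch is simply that awareness content and its enforcement are modelled together: refreshing a module (bumping its version) automatically makes the corresponding training due again.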

E-Awareness Training Model

Conclusion

Issues such as the social, organizational and technological factors and problems pertaining to Indian EGov are beginning to be recognized. Yet specific studies on the management of cybercrime in this context remain largely neglected.

REFERENCES

Kesar, S. “Has the Indian Government really thought about management of information systems security?”, ETHICOMP2010, Universitat Rovira i Virgili, Tarragona, Spain (2010).

Kritzinger, E. and von Solms, S. H. “Cyber security for home users: A new way of protection through awareness enforcement”. Computers & Security, 29 (8): 840-847 (2010).

Social Computing and the Fourth Revolution: Inforgs at the Barricades.

AUTHOR
David Sanford Horner

ABSTRACT

In a previous Ethicomp paper I criticised the continual resort to the language of ‘revolution’ to characterise the social and ethical impacts of the latest developments in information and communication technology (Horner, 2010). I argued that it may be worthwhile re-examining the apparently canonical assumption that ethical concerns are necessarily about radical novelty. In this paper I want to extend the discussion by examining the foundations of one specific and ‘revolutionary’ interpretation of the implications of social computing. I refer to the radical and influential account given by Luciano Floridi (Floridi, 2010). He argues that we are currently experiencing a Fourth Scientific and Technological Revolution which is transforming not only our view of the world but also our view of ourselves. Social computing is implicated as one of the symptoms of this transformation. Floridi puts ‘information’ and the concept of the ‘infosphere’ at the core of his analysis. He wants nothing less than for us to accept and conform our morality to the idea that ‘…the infosphere is Being considered informationally’ (Floridi, 2008, p.200). In this paper I want to show how, in arriving at his system, he makes what seem to me to be some fundamental philosophical errors, and to trace the consequences of these for his ethical system.

In the first section of the paper I will try to give a coherent picture of Floridi’s argument. This includes an account of what he means by the Fourth Revolution. This is particularly important given that he introduces some significant neologisms such as the terms ‘inforgs’ and ‘the infosphere’. More particularly, in the context of thinking about social computing, he develops the idea of ‘life in the infosphere’. He writes that: “The increasing informatization of artefacts and of whole (social) environments and life activities suggests that soon it will be difficult to understand what life was like in pre-informational times (to someone who was born in 2000, the world will always have been wireless, for example) and in the near future, the very distinction between online and offline will disappear.” (Floridi, 2010, p.16) So the ‘inforgs’ are at the informational barricades. He is after nothing less than ‘the reconceptualization of our metaphysics in informational terms’. Of course, from this informational base we get to Floridi’s very special interpretation of what we might mean by ‘information ethics’.

Now in the second section of the paper I do want to suggest that there is something very puzzling about all this. For example, the way in which Floridi inflates the meaning of ‘infosphere’ to include just about everything. I want to suggest that the root cause of the puzzlement is to do with how Floridi talks about, and deploys, the word ‘information’; there is something very profoundly wrong with his ‘conceptual plumbing’. A paradox here is that on the one hand Floridi recognises in the introduction to Information: a very short introduction (2010) that work on ‘the concept of information’ is still at a ‘lamentable stage’ but then goes on to map the concept in a highly misleading way. He tends to talk about information as though it was stuff; as though it was the name of something. Firstly, I want to follow Mary Midgley’s clue about this kind of reductionist talk. It’s not really very helpful if I want to put my cup of tea down on a table if you tell me that tables are just bits of information in the infosphere. Information is just not a third kind of stuff at all. “It is an abstraction from them. Invoking such an extra stuff is as idle as any earlier talk of phlogiston or animal spirits or occult forces.” (Midgley, 2005, pp.66-67) Secondly, and probably even more importantly, there are just some mistakes about how we use language. I develop this point by reference to J. L. Austin’s analysis of ‘the meaning of a word’ (Austin, 1970). In his paper Austin shows how we get into a muddle by asking about ‘the meaning of a word’, particularly when we consider words like ‘real’, ‘good’ and so forth. Information, it seems to me, falls into this category. As Austin remarks, “Even those who see pretty clearly that ‘concepts’, ‘abstract ideas’, and so on are fictitious entities, which we owe in part to asking questions about ‘the meaning of a word’, nevertheless themselves think that there is something which is ‘the meaning of a word’.” (Austin, 1970, p.60)

In the third section of the paper I draw out the implications for ethical analysis and show why all this is significant for ‘an ethics of social computing’. In this section then there will be a reflection on two recent cases where the ethical aspects of social computing were raised in important and acute forms. The point I wish to bring out is that Floridi’s analysis seems beside the point in coming to grips with a moral understanding of these actual cases. Nothing seems to be gained and in fact a lot is lost if we try to translate these cases into Floridi’s special ethical vocabulary. The first case concerns the murderer Raoul Moat. The social media figured in his crimes in that he issued threats on Facebook before committing the crimes and then several Facebook sites appeared in his support following the crimes and during the subsequent manhunt. In the second case, that of the murder of Joanna Yeates, social networking was used by her friends when Joanna first went missing to try and elicit leads on what had happened to her. It seems clear to me that we can perfectly well describe, understand and judge these cases in the moral language with which we are all familiar. I criticise Floridi’s system precisely because of the scope and strength of its claims. I suggest that by looking at where Floridi goes wrong we can get a better sense of what it means to go right in information and computer ethics.

REFERENCES

AUSTIN, J.L., 1970. The meaning of a word. In: J.L. Austin, Philosophical Papers. 2nd ed. Oxford: Oxford University Press.

FLORIDI, L., 2008. Information Ethics: a reappraisal. Ethics and Information Technology. 10, pp.189-204.

FLORIDI, L., 2010. Information: A very short introduction. Oxford: Oxford University Press.

HORNER, D. S., 2010. Metaphors in Orbit: revolution, logical malleability, generativity and the future of the Internet. In: Mario Arias-Oliva, et al., eds. Ethicomp 2010, Proceedings of the Eleventh International Conference, The ‘backwards, forwards and sideways’ changes of ICT. 14-16 April 2010. Universitat Rovira i Virgili, Tarragona, pp. 301-308.

MIDGLEY, M., 2005. The Myths We Live By. London: Routledge.

The problems with security and privacy in eGovernment – Case: Biometric Passports in Finland

AUTHOR
Olli I. Heimo, Antti Hakkala and Kai K. Kimppa

ABSTRACT

In this paper we discuss the problems that arise from the widespread adoption of biometric passports as travel documents all around the world. This development has implications in both international and domestic contexts. The use of biometrics is not yet internationally standardized, and this can be seen in the ICAO[1] biometric passport standard[2], where inefficient compromises have been made. Side-effects of biometric passport adoption can be seen across nations in discussions about centralized biometric databases. As biometric passports are only about 10 years old[3] – not mature as far as technologies go – and have no clear analogy in the physical world, the related ethical questions are harder to find, examine and analyze, and the consequences of the transition from regular to biometrically enhanced passports remain largely unclear.

These consequences can be divided into direct and collateral ones. Among the direct consequences is lower security at borders due to inefficiency or errors in the system design. This can happen if 1) corners are cut in critical phases of the design process due to tight schedules and/or budget, 2) the security implementation is inadequate, or 3) the work processes in border security are understood incorrectly. Another direct consequence can be the erosion of document security. Although the data contained on the biometric passport chip is protected by several different methods, these security features have their own vulnerabilities[4,5,6]. This threat is visibly realized with automatic passport controls. If trust is placed solely in the technology, we might face a problem similar to the Munich taxi driver case: it was found that ABS brake systems did not reduce accidents, but increased close calls, as the drivers trusted the new brakes to compensate for careless driving[7]. Similarly ill-placed trust in technology can be seen if the professional skills and knowledge of a border official are replaced by automated systems without careful consideration. Collateral consequences can include identity theft[8] and the erosion of people’s privacy[9,10].
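As an illustration of why such chip-level protections have inherent limits, the sketch below outlines the key derivation used in ICAO Basic Access Control, where the keys protecting the chip contents are computed from data printed in the passport’s machine-readable zone (MRZ). It is a simplified, non-authoritative sketch: the 3DES parity adjustment and the subsequent challenge-response protocol are omitted, and the sample values are made up purely to show the MRZ field format.

import hashlib

def mrz_check_digit(field: str) -> str:
    # ICAO 7-3-1 check digit over digits, letters (A=10 .. Z=35) and '<' fillers (0).
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord('A') + 10
        else:
            value = 0
        total += value * weights[i % 3]
    return str(total % 10)

def bac_keys(document_number: str, birth_date: str, expiry_date: str) -> dict:
    # The BAC key seed is a SHA-1 hash of MRZ fields plus their check digits,
    # so the effective key entropy is bounded by what is printed on the document.
    mrz_info = (document_number + mrz_check_digit(document_number)
                + birth_date + mrz_check_digit(birth_date)
                + expiry_date + mrz_check_digit(expiry_date))
    k_seed = hashlib.sha1(mrz_info.encode('ascii')).digest()[:16]

    def derive(counter: int) -> bytes:
        # Encryption key (counter=1) and MAC key (counter=2) are both derived
        # from the seed; the 3DES parity-bit adjustment is left out of this sketch.
        return hashlib.sha1(k_seed + counter.to_bytes(4, 'big')).digest()[:16]

    return {'k_enc': derive(1), 'k_mac': derive(2)}

# Made-up sample values: a nine-character document number and two YYMMDD dates
# bound the space an attacker who can guess or observe the MRZ needs to search.
print(bac_keys("L898902C3", "740812", "120415"))

Because everything in this derivation is printed on (or guessable from) the document itself, the protection is only as strong as an attacker’s uncertainty about those fields, which is one reason the literature cited above treats these security features as having their own vulnerabilities.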

In Finland, the introduction of biometric passports took place in the first phase of the passport reform in 2006. At this time it was already planned that the second phase would incorporate fingerprints into the Finnish passport, in accordance with EC Regulation No. 2252/2004[11]. In 2009, in the second phase of the Finnish passport reform, the Parliament decided that the fingerprints gathered from passport applicants would be stored in a national fingerprint registry – an addition which the EC Regulation does not require[12]. During the legislative process, the first step towards opening the registry to the police was the authorization to use it for identifying the deceased. After this was adopted by the ministry in 2008, the political debate on opening the registry started when police commissioner Markku Salminen and his successor Mikko Paatero both requested full access to the registry for serious and serial crime investigators[13,14]. These controversial demands were dismissed by the Parliament in 2009.

The discussion resurfaced in summer 2010, when Paatero renewed his claim[15]. This time the Minister of Internal Affairs expressed a seemingly positive attitude towards the police commissioner’s request[16]. After the discussion on opening the registry for forensic use gained a lot of attention in the media, all talk of the use of the national fingerprint registry was suspended, pending the next parliamentary elections in spring 2011[17,18,19]. There is no guarantee that the use of the fingerprint registry would not be extended beyond serious crime investigation as well. This classic “function creep” is a prime example of the erosion of privacy.

With the need for security after 9/11 and the terrorist attacks that followed it, the international consensus on the need to identify incoming travelers has never been stronger; for example, in Finland the Ministry of Internal Affairs promotes the biometric passport as a means to protect its citizens from international terrorism, illegal immigrants and international criminals[20]. Recent scientific advancements in information technology and biometrics have created the possibility to fulfill this demand.

It is easy to understand the motivations behind the authorities’ interest in such centralized databases: solving serious crimes would be easier; however, this would cause inequality between those who possess a biometric passport and those who do not. If a national – or even international – database of fingerprints or other biometrics is used, it would probably increase biometric spoofing by criminals; it is somewhat easy to copy and paste fingerprints[21] or leave a crime scene filled with human hair[22], for example. This could cause a serious amount of extra work for the police.

A common argument in the Finnish public discussion – from citizens and politicians alike – is that no harm comes to law-abiding citizens just because mere fingerprints are found at a crime scene[23,24]. In an international context, an example of such a situation can be found in the investigation of the 2004 Madrid bombings, where an innocent American citizen was erroneously identified by the FBI as an accomplice in the attack, based on fingerprints found in forensic investigations. The Spanish police later connected the fingerprints to an Algerian citizen, and the FBI was forced to admit they had made a mistake[24]. Although an extreme example, this incident shows that, especially in the high-profile cases to which serious crimes often belong, the pressure to produce results in the investigation can result in innocents being marked as suspects with little to no actual evidence.

Some of the problems underlying the biometric passport control system can easily be found in other critical eGovernment and eHealth systems. These include detection of problems only after adoption[26,27,28,29], extra costs[30] and extended delivery time of the whole system[31]. Some, but not all, of these problems can be mitigated or even eliminated outright if the mistakes made in previous large-scale projects of this kind are examined. The worst-case scenario for biometric passport misuse has not yet happened, but any sensible policy on biometric identification prepares for the day when it does; this is the aim of this paper.

REFERENCES

[1] International Civil Aviation Organization – http://www.icao.int

[2] ICAO MRTD documentation, http://www2.icao.int/en/MRTD/Pages/Downloads.aspx

[3] International Civil Aviation Organization (2006), Machine Readable Travel Documents, ICAO/Doc 9303 vol. 1, http://www2.icao.int/en/MRTD/Downloads/Doc%209303/Doc%209303%20English/Doc%209303%20Part%201%20Vol%201.pdf

[4] Serge Vaudenay, “E-Passport Threats,” IEEE Security & Privacy, vol. 5, no. 6, pp. 61-64, Nov.-Dec. 2007.

[5] Jaap-Henrik Hoepman, Engelbert Hubbers, Bart Jacobs, Martin Oostdijk, and Ronny Wichers Schreur, “Crossing Borders: Security and Privacy Issues of the European e-Passport”, Advances in Information and Computer Security, Lecture Notes in Computer Science, vol. 4266/2006, pages 152-167, Springer Berlin / Heidelberg, 2006.

[6] Gaurav S. Kc and Paul A. Karger, “Security and Privacy Issues in Machine Readable Travel Documents (MRTDs)”, IBM Technical Report RC 23575, 2005.

[7] Wilde, Gerald J.S. (1994), Target Risk: Dealing with the danger of death, disease and damage in everyday decisions, First edition 1994, http://psyc.queensu.ca/target/

[8] Alan Ramos, Weina Scott, William Scott, Doug Lloyd, Katherine O’Leary, and Jim Waldo. 2009. A threat analysis of RFID passports. Communications of the ACM 52, 12 (December 2009), 38-42.

[9] Ari Juels, David Molnar, and David Wagner, “Security and Privacy Issues in E-passports,” Security and Privacy for Emerging Areas in Communications Networks, International Conference on, pp. 74-88, First International Conference on Security and Privacy for Emerging Areas in Communications Networks (SECURECOMM’05), 2005

[10] Ben Schouten and Bart Jacobs, Biometrics and their use in e-passports, Image and Vision Computing, Volume 27, Issue 3, Special Issue on Multimodal Biometrics – Multimodal Biometrics Special Issue, 2 February 2009, Pages 305-312.

[11] The Council of the European Union, Council Regulation (EC) No 2252/2004, 13.12.2004, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:385:0001:0006:EN:PDF

[12] Finnish Social Insurance Institution, Law service – Hallituksen esitys laiksi passilain ja eräiden siihen liittyvien lakien muuttamisesta [Government’s proposal for changing passport act and certain other related laws], 9.6.2009, http://www.edilex.fi/kela/fi/mt/havm20090009

[13] Helsingin Sanomat, 22.2.2008, 1st edition, Poliisi haluaa passien sormenjäljet rikostutkijoille [Police request passport fingerprints to criminal investigation]

[14] Helsingin Sanomat, 27.11.2008, 1st edition, Rikostutkijat eivät saa vielä passien sormenjälkiä käyttöönsä [Criminal investigators do not acquire passport fingerprints yet]

[15] Yle [Finnish public service broadcaster] – Kotimaa – Poliisi haluaa suomalaisten sormenjäljet rikostutkintaansa [Police requests Finnish fingerprints to criminal investigation], 02.08.2010 at 06:03, updated 03.08.2010 at 09:06 http://www.yle.fi/uutiset/kotimaa/2010/08/poliisi_haluaa_suomalaisten_sormenjaljet_rikostutkintaansa_1870808.html

[16] Tietokone 16.8.2010, Poliisi saattaa saada passien sormenjäljet [Police may acquire the passport fingerprints], http://www.tietokone.fi/uutiset/poliisi_saattaa_saada_passien_sormenjaljet

[17] C.f. 14

[18] C.f. 15

[19] STT/Helsingin Sanomat, 15.8.2010, Sunnuntaisuomalainen: Passien sormenjälkirekisteri voi avautua poliisille [Fingerprint registry may be opened to the police] http://www.hs.fi/kotimaa/artikkeli/Sunnuntaisuomalainen+Passien+sormenj%C3%A4lkirekisteri+voi+avautua+poliisille/1135259348892

[20] Sisäasiainministeriö [The Ministry of Internal Affairs] – Miksi tarvitaan biometrinen passi? [Why biometric passport is needed?] Sisäasiainministeriö, 2010. http://www.intermin.fi/intermin/hankkeet/biometria/home.nsf/pages/BE9BF3243D995FF5C2256EB7003B014B?opendocument

[21] Tsutomu Matsumoto, Hiroyuki Matsumoto, Koji Yamada, and Satoshi Hoshino. Impact of artificial gummy fingers on fingerprint systems. Proceedings of SPIE Vol. 4677, Optical Security and Counterfeit Deterrence Techniques IV, 2002.

[22] Gillam, Lee and Salmasi Anna Vartapetiance (2008), A Database For Fighting Crimes That Haven’t Been Committed Yet, Ethicomp 2008, Mantua, Italy 24.-26.9.2008.

[23] Sunnuntaisuomalainen 15.08.2010, Passipoliisit, p. 14

[24] Otakantaa.fi, Finnish Ministry of Justice, [An open electronic forum provided by the government for polling citizen opinions about new legislation], http://otakantaa.fi

[25] Michael Cherry and Edward Imwinkelried (2006), Cautionary Note About Fingerprint Analysis and Reliance on Digital Technology. Judicature, Volume 89, Issue 6, May-June 2006, Pages 334-338, http://www.ajs.org/ajs/publications/Judicature_PDFs/896/Cherry_896.pdf

[26] Mercuri, Rebecca (2001), Electronic Vote Tabulation Checks and Balances, Ph.D. Thesis, University of Pennsylvania 2001

[27] William M. Fleischman (2010) Electronic Voting Systems and the Therac-25: What have we learned? Ethicomp 2010, Tarragona, Spain 14.-16.4.2010.

[28] Heimo, Olli I, Fairweather, N. Ben & Kimppa, Kai K. (2010), The Finnish eVoting Experiment: What Went Wrong?, Ethicomp 2010, Tarragona, Spain 14.-16.4.2010.

[29] Larsen E & Elligsen G. 2010. Facing the Lernaean Hydra: The Nature of Large-Scale Integration Projects in Healthcare. Proceedings of the First Scandinavian Conference of Information Systems, edited by Kautz K. & Nielsen P., SCIS 2010. Rebild, Denmark, August 2010.

[30] C.f. 26, 28, 29

[31] C.f. 26, 28, 29

Maintaining an ethical balance in the curriculum design of games-based degrees

AUTHOR
M.P. Jacob Habgood

ABSTRACT

Mainstream gaming studios in the UK generate global sales of around £1.7 billion a year from an industry which employs around 9,000 people in skilled game development roles (Kilpatrick, 2010). It is primarily the financial success and popularity of this industry which has driven the rise of games-based degrees in higher education. Nonetheless, games-based degrees are regularly criticised by members of the games industry as not being fit for purpose (e.g. French, 2008). Most recently they have come under specific scrutiny from an NESTA-backed education review headed by Ian Livingstone, the President of Eidos (Livingstone and Hope, 2011). This review set out to assess the ability of the education system to address skills shortages in the UK video games and visual effects industries, and it delivers a damning appraisal of the status quo. The report goes on to make a range of recommendations for improving the relevance of primary, secondary, further and higher education to the skills required by the video games and visual effects industries.

Sheffield Hallam University runs both undergraduate and postgraduate degree courses in games software development, and is in the enviable position of already meeting many of the report’s recommendations for these courses. The courses either already have, or are in the process of seeking, industry accreditation and enjoy significant industry links, including full-time lecturing staff who have come from the games industry itself. Students are taught how to use industry-standard software and get the opportunity to work in inter-disciplinary teams using gaming hardware. The course even has its own student-resourced game studio developing commercial games for the PlayStation minis platform. Nevertheless, this paper will argue that the perspective provided by the Livingstone report fails to acknowledge the complex ethical considerations of designing a curriculum for games-based degrees.

Games-based degrees have an intrinsic appeal which naturally attracts students with a wide range of abilities and motivations for studying the degree. Many students enrolling on SHU’s games courses do so because they aspire to work in the mainstream video games industry, and this provides much of the appeal of the course. However, students often arrive with significant misconceptions about the different roles and skillsets required to work in this industry. It is inevitable that not all of them will excel at the wide range of technical abilities demanded of them on the course, and only the cream of each cohort will stand a realistic chance of being employed in the mainstream games industry. The remainder will need to apply the skills they have learned on their course to other industries, and it would be unethical to ignore the career paths of these students in the curriculum decisions made for the course.

Based on the Livingstone report, the industry’s solution to this would be to have a very limited number of industry-accredited “centres of excellence”, thus reducing the ‘surplus’ of graduates who are not capable of meeting the technical demands of such courses. However, this perspective seems to ignore the natural process of self-discovery which is a key part of the experience of higher education. Even the most competent students may find their interests evolve or change over the course of their studies. In particular, the realisation that working in the games industry requires a higher level of technical competence, demands more unsocial working hours and pays less than other software industries is potentially enough to make even the most talented students reconsider their career aspirations.

This paper will provide a thorough review of the recommendations to higher education provided by the Livingstone report, using the SHU Games Software Development course as a case study. It will describe how we are meeting these recommendations and highlight the fine ethical balance required in ensuring that the interests of the whole student body are served. It will also examine some of the recommendations to primary and secondary education made by the report. It will consider the ethical implications of a curriculum which puts a greater emphasis on Computer Science education and uses game development as a means of encouraging school students to study STEM subjects. Some practical observations based on previous research experience of teaching game development at primary and secondary level will be discussed as part of the ethical debate (Habgood et al., 2005).

REFERENCES

FRENCH, M. (2008) Sony’s Macdonald calls for educational Centres of Excellence. Develop Online. Hertford, Intent Media.

HABGOOD, M. P. J., AINSWORTH, S. & BENFORD, S. (2005) The educational content of digital games made by children. 2005 conference on Computer Aided Learning. Bristol, UK.

KILPATRICK, L. (2010) Business Sectors: Video and Computer games. London, Department for Business Innovation and Skills.

LIVINGSTONE, I. & HOPE, A. (2011) Next Gen. Transforming the UK into the world’s leading talent hub for the video games and visual effects industries. Bristol, NESTA.

Moral Responsibility for Computing Artifacts: “The Rules” and Issues of Trust

AUTHOR
FS Grodzinsky, K Miller and MJ Wolf

ABSTRACT

“The Rules” is a collaborative document (started in March 2010) that states principles for responsibility when a computing artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules. The Rules are available at https://edocs.uis.edu/kmill2/www/TheRules/.

Unlike most codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in the production and use of computing artifacts. In this paper, we use The Rules to examine issues of trust.

Based on the theories of Floridi and Sanders (2001) and Floridi (2008), Grodzinsky, Miller and Wolf have used levels of abstraction to examine ethical issues created by computing technology (see Grodzinsky et al. 2008 and Wolf et al. 2011). They used three levels of abstraction in that analysis: LoA1, the users’ perspective; LoA2, the developers’ perspective; and LoAS, the perspective of society at large. Their analysis of quantum computing and cloud computing focused on computing professionals at LoA2 delivering functionality to users at LoA1 (Wolf et al. 2011). Their emphasis was on the professionals being worthy of the trust of users in that delivery.

Our analysis of The Rules differs from the earlier analyses of quantum and cloud computing. The Rules are not a computing paradigm; they are a paradigm for thinking about the impact of computing artifacts. The emphasis in The Rules is different from a technical computing project: both users and professionals are invited to acknowledge their responsibilities in the production and use of computing artifacts. Yet there are some aspects of the earlier analyses, especially in the area of trust, that are relevant to The Rules. In quantum computing, although the implementers of quantum algorithms will not likely meet most of the users of those algorithms, nor communicate with them, the trust relationship will be forged through the medium of the quantum algorithms. The whole point of cloud computing is that the people who maintain the computing resources of cloud users are remote from the users of those resources. Humans are clearly crucial in the sociotechnical systems of cloud computing. But most of the relationships will be based on e-trust, not on face-to-face interactions. Trust issues are complex in these new computing paradigms, and it is our assertion that The Rules can inform a discussion of these issues.

The first part of this paper presents The Rules. The Rules document currently includes five rules that are intended to serve “as a normative guide for people who design, develop, deploy, evaluate or use computing artifacts.” Next we briefly examine a model of trust and the relationship between The Rules and society through the lens of trust. In other words, we will examine how computing artifacts, and the sociotechnical system of which they are a part, serve as a medium through which trust relationships are played out. Then we shall examine each rule vis-à-vis the sociotechnical system and trust. The existence and proliferation of computing artifacts and the growing sophistication of sociotechnical systems do not insulate users and developers from the need to trust and the obligation to be trustworthy. Instead, we are convinced that the power and complexity of these systems require us to be more dependent on trust relationships, not less. In the last section of the paper we illustrate this claim by applying The Rules to the paradigms of quantum and cloud computing, especially as they relate to issues of trust between developers and users within sociotechnical systems.

REFERENCES

Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18:303-329. doi:10.1007/s11023-008-9113-7.

Floridi, L. and J.W. Sanders (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology 3:55–66.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2008) The ethics of designing artificial agents. Ethics and Information Technology, 10, 2-3, (September, 2008), DOI: 10.1007/s10676-008-9163-9.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2011) Developing artificial agents worthy of trust: Would you buy a car from this artificial agent? Forthcoming in Ethics and Information Technology.

Joy, Bill (2000) Why the future doesn’t need us. Wired, 8(4), 2000.

Nissenbaum, Helen (2007) Computing and accountability. In J. Weckert, ed. Computer Ethics. Aldershot UK: Ashgate, pp. 273-80. Reprinted from Communications of the ACM 37(1994):37-40.

Taddeo, M. (2008) Modeling trust in artificial agents, a first step toward the analysis of e-trust. In Sixth European Conference of Computing and Philosophy, University for Science and Technology, Montpellier, France, 16-18 June.

Taddeo, M. (2009) Defining trust and e-trust: from old theories to new problems. International Journal of Technology and Human Interaction 5, 2, April-June 2009.

Weizenbaum, Joseph (1984). Computer Power and Human Reason: From Judgment to Calculation. New York: Penguin Books.

Wolf, M.J., Grodzinsky, F. and Miller, K. (2010) Artificial agents, cloud computing, and quantum computing: Applying Floridi’s Method of levels of abstraction. To appear in Luciano Floridi’s Philosophy of Technology: Critical Reflections, H. Demir, ed. Springer, forthcoming in 2011.

Listening as a tool for democracy in the age of Social Computing

AUTHOR
Krystyna Górniak-Kocikowska

ABSTRACT

The evolution of computer technology is amazing and breathtaking. Barely thirty years ago, computers were perceived mainly as ‘number crunchers’; scholarly papers (Moor, 1985) were written to argue that these devices have a much broader potential. The development was so rapid that there was a problem with finding an adequate name for the new technology – from computer or digital technology to information technology to information and communication technology (Górniak-Kocikowska, 2005). These changing names reflected the direction in which computer-based technology was evolving. The term social computing, used as one of the focal terms for the ETHICOMP 2011 conference, marks an additional step in this evolution. It indicates that presently the various applications of computer technology take center stage in characterizing the technology itself, social computing being merely the most noticeable among them.

The recent popularity of social computing also brings a wide range of new problems, theoretical and practical alike. The social impact of social computing is possibly the most important among them. This paper will focus on one of the problems in the social impact area, namely, the problem of verbal communication, which is the core of social computing. Within the scope of verbal communication, the focus will be chiefly on the problem of listening.

In every meaningful and purposeful form of communication there are two main ‘players’, whether individual or collective: the sender and the recipient. (Often, they switch roles from sender(s) to recipient(s) and vice versa.) In verbal communication, the sender is usually known as a ‘speaker’, whereas the recipient is known as a ‘listener’, even when the communication has a written, not an oral, form. Usually, the speaker is seen as an active participant in the communication process, whereas the listener is seen as a passive one. This distinction applies mostly to the external (visible and audible) characteristics of the communication process. In terms of internal characteristics, especially regarding thought processes, the ‘listener’ can be as active as the speaker or even more so. This, however, rarely has a discernible impact on the process of communication at the time when this process is taking place.

Despite the existence of two processes (‘speaking’ and ‘listening’) and two participants (‘speaker’ and ‘listener’) in the phenomenon of verbal communication, the interest of western scholars in ‘speaking’ far exceeds their interest in ‘listening.’ Corradi Fiumara maintains that this neglect of listening is the result of the dominance of logos and logical thinking in the western philosophical tradition. She further claims that logical thinking is “primary anchored to saying-without-listening.” (Corradi Fiumara, 1990, 3)

In the speaking-centered, not listening-centered, western intellectual tradition, the primary purpose of communication is frequently the speaker’s victory and domination rather than mutual understanding and/or existential insight. In the logos-centered paradigm, the ‘speaker’s’ objective is usually to ‘prove,’ to ‘convince,’ to ‘make one understand,’ to ‘make one follow the speaker’ (the ‘speaker’s’ words, and sometimes also deeds). The ‘listener’ is supposed to pay attention, to remember, to follow the ‘speaker,’ and so on. Phrases like ‘listen to me’ more often than not mean ‘obey me.’ Besides establishing the position of the ‘listener’ as a subordinate one, such phrases also indicate that the role ascribed to the ‘listener’ in the communication process is a passive one. Consequently, fulfilling someone’s orders swiftly and accurately, or acting by taking into account facts one has been informed about, is often seen as proof of effective listening. But this is just one kind of listening, and it is not the most important one in the context of the social impact of social computing. Therefore, one of the issues raised in this paper will be the problem of ‘the will to listen’ (without which any meaningful communication is all but impossible). ‘The will to listen’ means that one has to have ‘the will to think’ first; ‘the will to obey’ or ‘the will to follow in someone’s footsteps’ can ensue – or not.

One of the most prominent philosophers interested in the problem of listening, especially in the context of democracy and education, was John Dewey. Leonard J. Waks (2009) claims that the core of Dewey’s theory on listening is the distinction between “one-way or straight-line listening” (dominant in both traditional schools and undemocratic societies) and “transactional listening-in-conversation,” which “lies at the heart of democracy.”

Today, various academic disciplines, especially psychology, education, medicine, and marketing, pay a significant amount of attention to the issue of listening. They have all developed their own theories regarding this problem and approach it from their own particular perspectives. Even so, and even with the existence of professional organizations, e.g., The International Listening Association, and a multitude of on- and off-line publications, including specialized scholarly journals, there still seems to be insufficient investigation of the problem of listening as an act of communication, in particular in the context of social computing, which is now a global phenomenon. Global social computing can contribute to profound changes in the way humankind deals with its own problems and with the problems of its environment. Therefore, advancing the understanding of listening and modifying our current approach to it should be one of our most urgent tasks.

REFERENCES

Corradi Fiumara, Gemma (1990), The Other Side of Language: A philosophy of listening, transl. by Charles Lambert, Routledge.

Górniak-Kocikowska, Krystyna (2005), “Problem z nazwaniem nowego globalnego spoleczenstwa” [Problems with the naming of the new global society], Osoba w Spoleczenstwie Informacyjnym, ETHOS, John Paul II Institute Catholic University of Lublin, John Paul II Foundation Rome, Vol. 69-70, 77-99.

International Listening Association (last accessed on January 29, 2011), http://www.listen.org/

Moor, James H. (1985). “What is computer ethics?” Metaphilosophy, 16(4), pp. 266-275.

Waks, Leonard J. (2009), Hearing is a participation: John Dewey on listening, friendship and participation in democratic society, Manuscript.