Social Computing and the Fourth Revolution: Inforgs at the Barricades.

AUTHOR
David Sanford Horner

ABSTRACT

In a previous Ethicomp paper I criticised the continual resort to the language of ‘revolution’ to characterise the social and ethical impacts of the latest developments in information and communication technology (Horner, 2010). I argued that it may be worthwhile re-examining the apparently canonical assumption that ethical concerns are necessarily about radical novelty. In this paper I want to extend the discussion by examining the foundations of one specific and ‘revolutionary’ interpretation of the implications of social computing. I refer to the radical and influential account given by Luciano Floridi (Floridi, 2010). He argues that we are currently experiencing a Fourth Scientific and Technological Revolution which is transforming not only our view of the world but also our view of ourselves. Social computing is implicated as one of the symptoms of this transformation. Floridi puts ‘information’ and the concept of the ‘infosphere’ at the core of his analysis. He wants nothing less than for us to accept and conform our morality to the idea that ‘…the infosphere is Being considered informationally’ (Floridi, 2008, p.200). In this paper I want to show how, in arriving at his system, he makes what seem to me to be some fundamental philosophical errors, and to trace the consequences of these for his ethical system.

In the first section of the paper I will try to give a coherent picture of Floridi’s argument. This includes an account of what he means by the Fourth Revolution. This is particularly important given that he introduces some significant neologisms such as the terms ‘inforgs’ and ‘the infosphere’. More particularly, in the context of thinking about social computing, he develops the idea of ‘life in the infosphere’. He writes that: “The increasing informatization of artefacts and of whole (social) environments and life activities suggests that soon it will be difficult to understand what life was like in pre-informational times (to someone who was born in 2000, the world will always have been wireless, for example) and in the near future, the very distinction between online and offline will disappear.” (Floridi, 2010, p.16) So the ‘inforgs’ are at the informational barricades. He is after nothing less than ‘the reconceptualization of our metaphysics in informational terms’. Of course, from this informational base we get to Floridi’s very special interpretation of what we might mean by ‘information ethics’.

Now in the second section of the paper I do want to suggest that there is something very puzzling about all this. For example, the way in which Floridi inflates the meaning of ‘infosphere’ to include just about everything. I want to suggest that the root cause of the puzzlement is to do with how Floridi talks about, and deploys, the word ‘information’; there is something very profoundly wrong with his ‘conceptual plumbing’. A paradox here is that on the one hand Floridi recognises in the introduction to Information: a very short introduction (2010) that work on ‘the concept of information’ is still at a ‘lamentable stage’, but then goes on to map the concept in a highly misleading way. He tends to talk about information as though it were stuff, as though it were the name of something. Firstly, I want to follow Mary Midgley’s clue about this kind of reductionist talk. It’s not really very helpful, if I want to put my cup of tea down on a table, to be told that tables are just bits of information in the infosphere. Information is just not a third kind of stuff at all. “It is an abstraction from them. Invoking such an extra stuff is as idle as any earlier talk of phlogiston or animal spirits or occult forces.” (Midgley, 2005, pp.66-67) Secondly, and probably even more importantly, there are just some mistakes about how we use language. I develop this point by reference to J. L. Austin’s analysis of ‘the meaning of a word’ (Austin, 1970). In his paper Austin shows how we get into a muddle by asking about ‘the meaning of a word’, particularly when we consider words like ‘real’, ‘good’ and so forth. Information, it seems to me, falls into this category. As Austin remarks, “Even those who see pretty clearly that ‘concepts’, ‘abstract ideas’, and so on are fictitious entities, which we owe in part to asking questions about ‘the meaning of a word’, nevertheless themselves think that there is something which is ‘the meaning of a word’.” (Austin, 1970, p.60)

In the third section of the paper I draw out the implications for ethical analysis and show why all this is significant for ‘an ethics of social computing’. In this section, then, there will be a reflection on two recent cases where the ethical aspects of social computing were raised in important and acute forms. The point I wish to bring out is that Floridi’s analysis seems beside the point in coming to grips with a moral understanding of these actual cases. Nothing seems to be gained, and in fact a lot is lost, if we try to translate these cases into Floridi’s special ethical vocabulary. The first case concerns the murderer Raoul Moat. Social media figured in his crimes in that he issued threats on Facebook before committing them, and several Facebook sites then appeared in his support following the crimes and during the subsequent manhunt. In the second case, that of the murder of Joanna Yeates, social networking was used by her friends when Joanna first went missing to try to elicit leads on what had happened to her. It seems clear to me that we can perfectly well describe, understand and judge these cases in the moral language with which we are all familiar. I criticise Floridi’s system precisely because of the scope and strength of its claims. I suggest that by looking at where Floridi goes wrong we can get a better sense of what it means to go right in information and computer ethics.

REFERENCES

AUSTIN, J.L., 1970. The meaning of a word. In: J.L. Austin, Philosophical Papers. 2nd ed. Oxford: Oxford University Press.

FLORIDI, L., 2008. Information Ethics: a reappraisal. Ethics and Information Technology. 10, pp.189-204.

FLORIDI, L., 2010. Information: A very short introduction. Oxford: Oxford University Press.

HORNER, D. S., 2010. Metaphors in Orbit: revolution, logical malleability, generativity and the future of the Internet. In: Mario Arias-Oliva, et al., eds. Ethicomp 2010, Proceedings of the Eleventh International Conference, The ‘backwards, forwards and sideways’ changes of ICT. 14-16 April 2010. Universitat Rovira i Virgili, Tarragona, pp.301-308.

MIDGLEY, M., 2005. The Myths We Live By. London: Routledge.

The problems with security and privacy in eGovernment – Case: Biometric Passports in Finland

AUTHOR
Olli I. Heimo, Antti Hakkala and Kai K. Kimppa

ABSTRACT

In this paper we discuss the problems that arise from the widespread adoption of biometric passports as travel documents all around the world. This development has implications in both international and domestic contexts. The use of biometrics is not yet internationally standardized, and this can be seen in the ICAO[1] biometric passport standard[2], where inefficient compromises have been made. Side-effects of biometric passport adoption can be seen across nations in discussions about centralized biometric databases. As biometric passports are only about 10 years old[3] – not mature as far as technologies go – and have no clear analogue in the physical world, the related ethical questions are harder to find, examine and analyze, and the consequences of the transition from regular to biometrically enhanced passports are as yet unclear.

These consequences can be divided into direct and collateral. Among the direct consequences is lower security at borders due to inefficiency or errors in system design. This can happen if 1) corners are cut in critical phases of the design process due to tight schedules and/or budgets, 2) the security implementation is inadequate, or 3) the work processes in border security are understood incorrectly. Another direct consequence can be the erosion of document security. Although the data contained on the biometric passport chip is protected by several different methods, these security features have their own vulnerabilities[4,5,6]. This threat is visibly realized with automated passport controls. If trust is placed solely in the technology, we might face a problem similar to the Munich taxi driver case: it was found that ABS brake systems did not reduce accidents but increased close calls, as drivers trusted the new brakes to compensate for careless driving[7]. Similar ill-placed trust in technology can be seen if the professional skills and knowledge of a border official are replaced by automated systems without careful consideration. Collateral consequences can include identity theft[8] and the erosion of people's privacy[9,10].

In Finland, the introduction of biometric passports took place in the first phase of the passport reform in 2006. At that time it was already planned that the second phase would incorporate fingerprints into the Finnish passport, in accordance with EC Regulation No. 2252/2004[11]. In 2009, in the second phase of the Finnish passport reform, the Parliament decided that the fingerprints gathered from passport applicants would be stored in a national fingerprint registry – an addition which the EC Regulation does not require[12]. During the legislative process, the first step towards opening the registry to the police was the authorization to use it for identifying the deceased. After this was adopted by the ministry in 2008, the political debate over opening the registry started when police commissioner Markku Salminen and his successor Mikko Paatero both requested full access to the registry for serious and serial crime investigators[13,14]. These controversial demands were dismissed by the Parliament in 2009.

The discussion resurfaced in the summer of 2010, when Paatero renewed his request[15]. This time the Minister of Internal Affairs took a seemingly positive attitude towards the police commissioner’s request[16]. After the discussion on opening the registry for forensic use gained a lot of attention in the media, all talk of the use of the national fingerprint registry was suspended, pending the next parliamentary elections in spring 2011[17,18,19]. There is no guarantee that the use of the fingerprint registry would not be extended beyond serious crime investigation as well. This classic “function creep” is a prime example of the erosion of privacy.

With the need for security after 9/11 and the terrorist attacks that followed it, the international consensus on the need to identify incoming travelers has never been stronger; in Finland, for example, the Ministry of Internal Affairs promotes the biometric passport as protecting citizens from international terrorism, illegal immigration and international crime[20]. Recent scientific advances in information technology and biometrics have created the possibility of fulfilling this demand.

It is easy to understand the motivations behind the authorities’ interest in such centralized databases: solving serious crimes would be easier; however, this would cause inequality between those who possess a biometric passport and those who do not. If a national – or even international – database of fingerprints or other biometrics is used, it would probably increase biometric spoofing by criminals; it is somewhat easy to copy and paste fingerprints[21] or to leave a crime scene strewn with human hair[22], for example. This could cause a serious amount of extra work for the police.

A common argument in the Finnish public discussion – from citizens and politicians alike – is that no harm comes to law-abiding citizens just because mere fingerprints are found at a crime scene[23,24]. In an international context, an example of such a situation can be found in the investigation of the 2004 Madrid bombings, where an innocent American citizen was erroneously identified by the FBI as an accomplice in the attack, based on fingerprints found in the forensic investigation. The Spanish police later connected the fingerprints to an Algerian citizen, and the FBI was forced to admit they had made a mistake[25]. Although an extreme example, this incident shows that, especially in the high-profile cases to which serious crimes often belong, the pressure to produce results in the investigation can result in innocent people being marked as suspects with little to no actual evidence.
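To make the base-rate point behind this example concrete, the sketch below (not from the paper; the false-match rate and registry size are purely hypothetical values chosen for illustration) shows why even a very accurate matcher, when searched against a population-scale registry, can be expected to implicate innocent people:

```python
# Illustrative sketch (hypothetical numbers): why searching one latent print
# against a national biometric registry can flag innocent people even when
# the matcher's per-comparison false-match rate (FMR) is very low.

def expected_false_matches(false_match_rate: float, registry_size: int) -> float:
    """Expected number of innocent registry entries that 'match' a single latent print."""
    return false_match_rate * registry_size

def prob_at_least_one_false_match(false_match_rate: float, registry_size: int) -> float:
    """Probability that at least one innocent entry matches the latent print."""
    return 1.0 - (1.0 - false_match_rate) ** registry_size

if __name__ == "__main__":
    registry = 5_000_000            # hypothetical size of a national fingerprint registry
    for fmr in (1e-3, 1e-5, 1e-6):  # hypothetical per-comparison false-match rates
        print(f"FMR {fmr:.0e}: "
              f"expected false matches = {expected_false_matches(fmr, registry):,.1f}, "
              f"P(>=1 false match) = {prob_at_least_one_false_match(fmr, registry):.3f}")
```

Under these assumed figures, even a one-in-a-million false-match rate yields several expected false hits per search, which is the mechanism by which "mere fingerprints" can turn law-abiding citizens into suspects.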

Some of the problems underlying the biometric passport control system can easily be found in other critical eGovernment and eHealth systems. These include the detection of problems only after adoption[26,27,28,29], extra costs[30] and extended delivery times for the whole system[31]. Some, but not all, of these problems can be mitigated or even eliminated outright if the mistakes made in previous large-scale projects of this kind are examined. The worst-case scenario for biometric passport misuse has not yet happened, but any sensible policy on biometric identification prepares for the day when it does; this is the aim of this paper.

REFERENCES

[1] International Civil Aviation Organization – http://www.icao.int

[2] ICAO MRTD documentation, http://www2.icao.int/en/MRTD/Pages/Downloads.aspx

[3] International Civil Aviation Organization (2006), Machine Readable Travel Documents, ICAO/Doc 9303 vol. 1, http://www2.icao.int/en/MRTD/Downloads/Doc%209303/Doc%209303%20English/Doc%209303%20Part%201%20Vol%201.pdf

[4] Serge Vaudenay, “E-Passport Threats,” IEEE Security & Privacy, vol.5, no.6, pp.61-64, Nov.-Dec. 2007

[5] Jaap-Henrik Hoepman, Engelbert Hubbers, Bart Jacobs, Martin Oostdijk, and Ronny Wichers Schreur, “Crossing Borders: Security and Privacy Issues of the European e-Passport”, Advances in Information and Computer Security, Lecture Notes in Computer Science, vol. 4266/2006, pages 152-167, Springer Berlin / Heidelberg, 2006.

[6] Gaurav S. Kc and Paul A. Karger, “Security and Privacy Issues in Machine Readable Travel Documents (MRTDs)”, IBM Technical Report RC 23575, 2005.

[7] Wilde, Gerald J.S. (1994), Target Risk: Dealing with the danger of death, disease and damage in everyday decisions, First edition 1994, http://psyc.queensu.ca/target/

[8] Alan Ramos, Weina Scott, William Scott, Doug Lloyd, Katherine O’Leary, and Jim Waldo. 2009. A threat analysis of RFID passports. Communications of the ACM 52, 12 (December 2009), 38-42.

[9] Ari Juels, David Molnar, and David Wagner, “Security and Privacy Issues in E-passports,” Security and Privacy for Emerging Areas in Communications Networks, International Conference on, pp. 74-88, First International Conference on Security and Privacy for Emerging Areas in Communications Networks (SECURECOMM’05), 2005

[10] Ben Schouten and Bart Jacobs, Biometrics and their use in e-passports, Image and Vision Computing, Volume 27, Issue 3, Special Issue on Multimodal Biometrics – Multimodal Biometrics Special Issue, 2 February 2009, Pages 305-312.

[11] The Council of the European Union, Council Regulation (EC) No 2252/2004, 13.12.2004, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:385:0001:0006:EN:PDF

[12] Finnish Social Insurance Institution, Law service – Hallituksen esitys laiksi passilain ja eräiden siihen liittyvien lakien muuttamisesta [Government’s proposal for changing passport act and certain other related laws], 9.6.2009, http://www.edilex.fi/kela/fi/mt/havm20090009

[13] Helsingin Sanomat, 22.2.2008, 1st edition, Poliisi haluaa passien sormenjäljet rikostutkijoille [Police request passport fingerprints to criminal investigation]

[14] Helsingin Sanomat, 27.11.2008, 1st edition, Rikostutkijat eivät saa vielä passien sormenjälkiä käyttöönsä [Criminal investigators do not acquire passport fingerprints yet]

[15] Yle [Finnish public service broadcaster] – Kotimaa – Poliisi haluaa suomalaisten sormenjäljet rikostutkintaansa [Police requests Finnish fingerprints to criminal investigation], 02.08.2010 at 06:03, updated 03.08.2010 at 09:06 http://www.yle.fi/uutiset/kotimaa/2010/08/poliisi_haluaa_suomalaisten_sormenjaljet_rikostutkintaansa_1870808.html

[16] Tietokone 16.8.2010, Poliisi saattaa saada passien sormenjäljet [Police may acquire the passport fingerprints], http://www.tietokone.fi/uutiset/poliisi_saattaa_saada_passien_sormenjaljet

[17] Cf. [14]

[18] Cf. [15]

[19] STT/Helsingin Sanomat, 15.8.2010, Sunnuntaisuomalainen: Passien sormenjälkirekisteri voi avautua poliisille [Fingerprint registry may be opened to the police] http://www.hs.fi/kotimaa/artikkeli/Sunnuntaisuomalainen+Passien+sormenj%C3%A4lkirekisteri+voi+avautua+poliisille/1135259348892

[20] Sisäasiainministeriö [The Ministry of Internal Affairs] – Miksi tarvitaan biometrinen passi? [Why biometric passport is needed?] Sisäasiainministeriö, 2010. http://www.intermin.fi/intermin/hankkeet/biometria/home.nsf/pages/BE9BF3243D995FF5C2256EB7003B014B?opendocument

[21] Tsutomu Matsumoto, Hiroyuki Matsumoto, Koji Yamada, and Satoshi Hoshino. Impact of artificial gummy fingers on fingerprint systems. Proceedings of SPIE Vol. 4677, Optical Security and Counterfeit Deterrence Techniques IV, 2002.

[22] Gillam, Lee and Vartapetiance Salmasi, Anna (2008), A Database For Fighting Crimes That Haven’t Been Committed Yet, Ethicomp 2008, Mantua, Italy, 24.-26.9.2008.

[23] Sunnuntaisuomalainen 15.08.2010, Passipoliisit, p. 14

[24] Otakantaa.fi, Finnish Ministry of Justice, [An open electronic forum provided by the government for polling citizen opinions about new legislation], http://otakantaa.fi

[25] Michael Cherry and Edward Imwinkelried (2006), Cautionary Note About Fingerprint Analysis and Reliance on Digital Technology. Judicature, Volume 89, Issue 6, May-June 2006, pp. 334-338, http://www.ajs.org/ajs/publications/Judicature_PDFs/896/Cherry_896.pdf

[26] Mercuri, Rebecca (2001), Electronic Vote Tabulation Checks and Balances, Ph.D. Thesis, University of Pennsylvania 2001

[27] William M. Fleischman (2010) Electronic Voting Systems and the Therac-25: What have we learned? Ethicomp 2010, Tarragona, Spain 14.-16.4.2010.

[28] Heimo, Olli I, Fairweather, N. Ben & Kimppa, Kai K. (2010), The Finnish eVoting Experiment: What Went Wrong?, Ethicomp 2010, Tarragona, Spain 14.-16.4.2010.

[29] Larsen E & Elligsen G. 2010. Facing the Lernaean Hydra: The Nature of Large-Scale Integration Projects in Healthcare. Proceedings of the First Scandinavian Conference of Information Systems, edited by Kautz K. & Nielsen P., SCIS 2010. Rebild, Denmark, August 2010.

[30] Cf. [26], [28], [29]

[31] Cf. [26], [28], [29]

Maintaining an ethical balance in the curriculum design of games-based degrees

AUTHOR
M.P. Jacob Habgood

ABSTRACT

Mainstream gaming studios in the UK generate global sales of around £1.7 billion a year from an industry which employs around 9,000 people in skilled game development roles (Kilpatrick, 2010). It is primarily the financial success and popularity of this industry which has driven the rise of games-based degrees in higher education. Nonetheless, games-based degrees are regularly criticised by members of the games industry as not being fit for purpose (e.g. French, 2008). Most recently they have come under specific scrutiny from a NESTA-backed education review headed by Ian Livingstone, the President of Eidos (Livingstone and Hope, 2011). This review set out to assess the ability of the education system to address skills shortages in the UK video games and visual effects industries, and it delivers a damning appraisal of the status quo. The report goes on to make a range of recommendations for improving the relevance of primary, secondary, further and higher education to the skills required by the video games and visual effects industries.

Sheffield Hallam University runs both undergraduate and postgraduate degree courses in games software development, and is in the enviable position of already meeting many of the report’s recommendations for these courses. They either already have, or are in the process of seeking, industry accreditation, and enjoy significant industry links, including full-time lecturing staff who have come from the games industry itself. Students are taught how to use industry-standard software and get the opportunity to work in inter-disciplinary teams using gaming hardware. The course even has its own student-resourced game studio developing commercial games for the PlayStation minis platform. Nevertheless, this paper will argue that the perspective provided by the Livingstone report fails to acknowledge the complex ethical considerations of designing a curriculum for games-based degrees.

Games-based degrees have an intrinsic appeal which naturally attracts students with a wide range of abilities and motivations for studying the degree. Many students enrolling on SHU’s games courses do so because they aspire to work in the mainstream video games industry, and this provides much of the appeal of the course. However, students often arrive with significant misconceptions about the different roles and skillsets required to work in this industry. It is inevitable that not all of them will excel at the wide range of technical abilities demanded of them on the course, and only the cream of each cohort will stand a realistic chance of being employed in the mainstream games industry. The remainder will need to apply the skills they have learned on their course to other industries, and it would be unethical to ignore the career paths of these students in the curriculum decisions made for the course.

Based on the Livingstone report, the industry’s solution to this would be to have a very limited number of industry-accredited “centres of excellence”, thus reducing the ‘surplus’ of graduates who are not capable of meeting the technical demands of such courses. However, this perspective seems to ignore the natural process of self-discovery which is a key part of the experience of higher education. Even the most competent students may find their interests evolve or change over the course of their studies. In particular, the realisation that working in the games industry requires a higher level of technical competence, demands more unsocial working hours and pays less than other software industries is potentially enough to make even the most talented students reconsider their career aspirations.

This paper will provide a thorough review of the recommendations for higher education made by the Livingstone report, using the SHU Games Software Development course as a case study. It will describe how we are meeting these recommendations and highlight the fine ethical balance required to ensure that the interests of the whole student body are served. It will also examine some of the recommendations for primary and secondary education made by the report. It will consider the ethical implications of a curriculum which puts a greater emphasis on Computer Science education and uses game development as a means of encouraging school students to study STEM subjects. Some practical observations based on previous research experience of teaching game development at primary and secondary level will be discussed as part of the ethical debate (Habgood et al., 2005).

REFERENCES

FRENCH, M. (2008) Sony’s Macdonald calls for educational Centres of Excellence. Develop Online. Hertford, Intent Media.

HABGOOD, M. P. J., AINSWORTH, S. & BENFORD, S. (2005) The educational content of digital games made by children. 2005 conference on Computer Aided Learning. Bristol, UK.

KILPATRICK, L. (2010) Business Sectors: Video and Computer games. London, Department for Business Innovation and Skills.

LIVINGSTONE, I. & HOPE, A. (2011) Next Gen. Transforming the UK into the world’s leading talent hub for the video games and visual effects industries. Bristol, NESTA.

Moral Responsibility for Computing Artifacts: “The Rules” and Issues of Trust

AUTHOR
FS Grodzinsky, K Miller and MJ Wolf

ABSTRACT

“The Rules” are a collaborative document (started in March 2010) that states principles for responsibility when a computing artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules. The Rules are available at https://edocs.uis.edu/kmill2/www/TheRules/.

Unlike most codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in the production and use of computing artifacts. In this paper, we use The Rules to examine issues of trust.

Based on the theories of Floridi and Sanders (2001) and Floridi (2008), Grodzinsky, Miller and Wolf have used levels of abstraction to examine ethical issues created by computing technology (see Grodzinsky et al. 2008 and Wolf et al. 2011). They used three levels of abstraction in that analysis: LoA1, the users’ perspective; LoA2, the developers’ perspective; and LoAS, the perspective of society at large. Their analysis of quantum computing and cloud computing focused on computing professionals at LoA2 delivering functionality to users at LoA1 (Wolf et al. 2011). Their emphasis was on the professionals being worthy of the trust of users in that delivery.

Our analysis of The Rules differs from the earlier analyses of quantum and cloud computing. The Rules are not a computing paradigm; they are a paradigm for thinking about the impact of computing artifacts. The emphasis in The Rules is different from a technical computing project: both users and professionals are invited to acknowledge their responsibilities in the production and use of computing artifacts. Yet there are some aspects of the earlier analyses, especially in the area of trust, that are relevant to The Rules. In quantum computing, although the implementers of quantum algorithms will not likely meet most of the users of those algorithms, nor communicate with them, the trust relationship will be forged through the medium of the quantum algorithms. The whole point of cloud computing is that the people who maintain the computing resources of cloud users are remote from the users of those resources. Humans are clearly crucial in the sociotechnical systems of cloud computing. But most of the relationships will be based on e-trust, not on face-to-face interactions. Trust issues are complex in these new computing paradigms, and it is our assertion that The Rules can inform a discussion of these issues.

The first part of this paper presents The Rules. The Rules document currently includes five rules that are intended to serve “as a normative guide for people who design, develop, deploy, evaluate or use computing artifacts.” Next we briefly examine a model of trust and the relationship between The Rules and society through the lens of trust. In other words, we will examine how computing artifacts, and the sociotechnical system of which they are a part, serve as a medium through which trust relationships are played out. Then, we shall examine each rule vis-à-vis the sociotechnical system and trust. The existence and proliferation of computing artifacts and the growing sophistication of sociotechnical systems do not insulate users and developers from the need to trust and the obligation to be trustworthy. Instead, we are convinced that the power and complexity of these systems require us to be more dependent on trust relationships, not less. In the last section of the paper we illustrate this last statement by applying The Rules to the paradigms of quantum and cloud computing, especially as they relate to issues of trust between developers and users within sociotechnical systems.

REFERENCES

Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18:303-329. doi:10.1007/s11023-008-9113-7.

Floridi, L. and J.W. Sanders (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology 3:55–66.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2008) The ethics of designing artificial agents. Ethics and Information Technology, 10, 2-3, (September, 2008), DOI: 10.1007/s10676-008-9163-9.

Grodzinsky, F. S., Miller, K. and Wolf, M. J. (2011) Developing artificial agents worthy of trust: Would you buy a car from this artificial agent? Forthcoming in Ethics and Information Technology.

Joy, Bill (2000) Why the future doesn’t need us. Wired (8), no. (4) 2000.

Nissenbaum, Helen (2007) Computing and accountability. in J. Weckert, ed. Computer Ethics. Aldershot UK: Ashgate, pp. 273-80. Reprinted from Communications of the ACM 37(1994):37-40.

Taddeo, M. (2008) Modeling trust in artificial agents, a first step toward the analysis of e-trust. In Sixth European Conference of Computing and Philosophy, University for Science and Technology, Montpelier, France, 16-18 June.

Taddeo, M. (2009) Defining trust and e-trust: from old theories to new problems. International Journal of Technology and Human Interaction 5, 2, April-June 2009.

Weizenbaum, Joseph (1984). Computer Power and Human Reason:From Judgment to Calculation. New York: Penguin Books.

Wolf, M.J., Grodzinsky, F. and Miller, K. (2010) Artificial agents, cloud computing, and quantum computing: Applying Floridi’s Method of levels of abstraction. To appear in Luciano Floridi’s Philosophy of Technology: Critical Reflections, H. Demir, ed. Springer, forthcoming in 2011.

Listening as a tool for democracy in the age of Social Computing

AUTHOR
Krystyna Górniak-Kocikowska

ABSTRACT

The evolution of computer technology is amazing and breathtaking. Barely thirty years ago, computers were perceived mainly as ‘number crunchers;’ scholarly papers (Moor, 1985) were written to argue that these devices have a much broader potential. The development was so rapid that there was a problem with finding an adequate name for the new technology – from computer or digital technology to information technology to information and communication technology (Górniak-Kocikowska, 2005). These changing names reflected the direction in which computer-based technology was evolving. The term social computing, used as one of the focal terms for the ETHICOMP 2011 conference, points to an additional step in this evolution. It indicates that the various applications of computer technology now take center stage in characterizing the technology itself, with social computing being merely the most noticeable among them.

The recent popularity of social computing also brings a wide range of new problems, theoretical and practical alike. The social impact of social computing is possibly the most important among them. This paper will focus on one of the problems in the social impact area, namely, the problem of verbal communication, which is the core of social computing. Within the scope of verbal communication, the focus will be chiefly on the problem of listening.

In every meaningful and purposeful form of communication there are two main ‘players’, whether individual or collective: the sender and the recipient. (Often, they switch roles from sender(s) to recipient(s) and vice versa.) In verbal communication, the sender is usually known as a ‘speaker,’ whereas the recipient is known as a ‘listener,’ even when the communication has a written, not an oral, form. Usually, the speaker is seen as an active participant in the communication process, whereas the listener is seen as a passive one. This distinction applies mostly to the external (visible and audible) characteristics of the communication process. In terms of internal characteristics, especially regarding thought processes, the ‘listener’ can be as active as the speaker or even more so. This, however, rarely has a discernible impact on the process of communication at the time when this process is taking place.

Despite the existence of two processes (‘speaking’ and ‘listening’) and two participants (‘speaker’ and ‘listener’) in the phenomenon of verbal communication, the interest of western scholars in ‘speaking’ far exceeds their interest in ‘listening.’ Corradi Fiumara maintains that this neglect of listening is the result of the dominance of logos and logical thinking in the western philosophical tradition. She further claims that logical thinking is “primary anchored to saying-without-listening.” (Corradi Fiumara, 1990, 3)

In the speaking-centered, not listening-centered western intellectual tradition, the primary purpose of communication is frequently the speaker’s victory and domination rather than mutual understanding and/or existential insight. In the logos-centered paradigm, the ‘speaker’s’ objective is usually to ‘prove,’ to ‘convince,’ to ‘make one understand,’ to ‘make one follow the speaker’ (the ‘speaker’s’ words, and sometimes also deeds). The ‘listener’ is supposed to pay attention, to remember, to follow the ‘speaker,’ and so on. Phrases like ‘listen to me’ more often than not mean ‘obey me.’ Besides establishing the position of the ‘listener’ as a subordinate one, such phrases also indicate that the role ascribed to the ‘listener’ in the communication process is a passive one. Consequently, fulfilling someone’s orders swiftly and accurately, or acting by taking into account facts one has been informed about, is often seen as proof of effective listening. But this is just one kind of listening, and it is not the most important one in the context of the social impact of social computing. Therefore, one of the issues raised in this paper will be the problem of ‘the will to listen’ (without which any meaningful communication is all but impossible). ‘The will to listen’ means that one has to have ‘the will to think’ first; ‘the will to obey’ or ‘the will to follow in the speaker’s footsteps’ can ensue – or not.

One of the most prominent philosophers interested in the problem of listening, especially in the context of democracy and education, was John Dewey. Leonard J. Waks (2009) claims that the core of Dewey’s theory on listening is the distinction between “one-way or straight-line listening” (dominant in both traditional schools and undemocratic societies) and “transactional listening-in-conversation,” which “lies at the heart of democracy.”

Today, various academic disciplines, especially psychology, education, medicine, and marketing, pay a significant amount of attention to the issue of listening. They have all developed their own theories regarding this problem and approach it from their own particular perspectives. Even so, and even with the existence of professional organizations, e.g., The International Listening Association, and a multitude of on- and off-line publications, including specialized scholarly journals, there still seems to be insufficient investigation of the problem of listening as an act of communication, in particular in the context of social computing, which is now a global phenomenon. Global social computing can contribute to profound changes in the way humankind deals with its own problems and with the problems of its environment. Therefore, advancing the understanding of listening and modifying our current approach to it should be one of our most urgent tasks.

REFERENCES

Corradi Fiumara, Gemma (1990), The Other Side of Language: A philosophy of listening, transl. by Charles Lambert, Routledge.

Górniak-Kocikowska, Krystyna (2005), “Problem z nazwaniem nowego globalnego spoleczenstwa” [Problems with the naming of the new global society], Osoba w Spoleczenstwie Informacyjnym, ETHOS, John Paul II Institute Catholic University of Lublin, John Paul II Foundation Rome, Vol. 69-70, 77-99.

International Listening Association (last accessed on January 29, 2011), http://www.listen.org/

Moor, James H. (1985). “What is computer ethics?” Metaphilosophy, 16 (4), pp. 266-275.

Waks, Leonard J. (2009), Hearing is a participation: John Dewey on listening, friendship and participation in democratic society, Manuscript.

Tweeting is a beautiful sound, but not in my backyard: Employer Rights and the ethical issues of a tweet-free environment for business.

AUTHOR
Don Gotterbarn

ABSTRACT

The suburbs of the United States once welcomed Canada geese for providing a daily encounter with nature and as symbols of a protected environment. As their numbers have increased, so has their destruction of the environment: soil erosion from grass removal, pathogens carried by droppings, and aggressive behavior toward humans. Accordingly, the suburban attraction to these animals has changed to a desire to be rid of them, or at least to significantly thin and control their ranks and diminish their negative effects on the environment. There are even websites dedicated to achieving these ends (GoGeese.Com, GooseBusters, etc.). One of the problems is that these animals are protected by environmental law in the United States.

Effective communication is important to any business, and businesses encouraged computer communication until the quantity and kind of communication began to impact productivity. In response to this difficulty many corporations developed computer use policies. These policies were primarily focused on email and Internet usage while at work. They range from almost draconian restrictions which prohibit any email and Internet use in the workplace to policies which encourage personal use of corporate computers during official breaks.

These policies were justified in a variety of ways, including claims that employees should not attend to personal tasks during working hours, that not devoting salaried time to the company violates the employment agreement, and that the organization owns the computers and whatever is on them, so they should not be used for personal communications. Some of these claims have been upheld by court cases and have been used to justify inspection and restriction/censorship of employee email on corporate machines. One problem with using laws to judge these standards is that technology moves faster than the law; there is a gap between the speed of technological development and changes in the law to help manage the technology. Law always lags behind, and we still try to apply old doctrines to new technologies and social changes. Inappropriate employee communication was easily controlled both by employer computer use policy and by mechanical restrictions on the computers.

Previous corporate computer use policies were about the use of computers at work and were based on at least two presumptions: that there was a financial agreement between employer and employee, and that the computers in question were corporate assets.

Improvements in technology (wireless communication, miniaturization, etc.) and the change in our understanding of the ways we communicate, generally referred to as ‘social media’, have caused many new and significant problems for employers. These changes have contributed to blurring the lines between personal and corporate computer use. Our concepts are further muddied by employees bringing their own computers, in the form of smart phones and other devices, into the work place.

Both the technology and its usage patterns in social media require careful ethical evaluation. Among the problems are: a failure to see that the nature of the medium sometimes significantly distorts messages, the inappropriateness of transmitting from some locations, the equation of repetition with truth, a failure to understand the impact of messages beyond their on-screen representation, and the career impacts of widespread digital information.

The technological changes have facilitated radical changes in the acceptable use patterns of technology outside of the work place. Individuals are now in almost continuous contact through social media.

For the individual, the new standards of social media raise some ethical tensions. You are valued by the number of your tweets and the followers of your every tweet. The new technology has increased your audience; instead of gossip being a one-on-one conversation, you can now gossip with a bullhorn. Your worth is calculated in the number of ‘friends on your page’, and the more people who listen to you, or the higher the number of hits you have, the greater your currency. Your importance in social media is determined not by credentials, licenses, or experience but by popularity.

Oddly, this generates a tension between your ‘value’ – tweet count or number of friends – and the ‘veracity’ of what is said. Problems with the accuracy and impact of tweets are beginning to be recognized. The new media requires, and is developing, standards to evaluate content as opposed to the number of times it has been repeated. There are web sites and standards developed by journalists to help substantiate the content of tweets. The Canadian Association of Journalists has tweeting guidelines. There are recommendations for what and how to re-tweet.

Many people now use Twitter’s 140-character messaging without thinking about how shortening the message may cause the loss of significant information, as when the words “is indicated” were deleted from a re-tweet about the occurrence of a second Icelandic volcano. The instantaneous, exponential repetition of this tweet added to its credibility and caused a panic.

Sometimes it is inappropriate to tweet from certain locations, such as war zones. During the attacks in Mumbai, Twitter was so effective in providing up-to-date information that there was a concern that the tweeting would reveal critical information to the terrorists.

Unlike the effects of a single Canada goose, a human tweet can be re-tweeted, exponentially increasing its impact, be it negative or positive. Significant repetition increases the credibility of a claim. The original April Fool’s Day joke about President Obama’s birth location had its date removed, thus significantly changing its information content, and was re-circulated. Its near-infinite recirculation added so much to its credibility that significant numbers of people still believe that he was not born in the USA. No hard evidence, like a birth certificate, has been strong enough to sway their belief in the repeated message.
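As a rough, hypothetical illustration of this fan-out effect (the follower count and re-tweet probability below are invented for the example, not figures from the paper), a simple branching-process sketch shows how reach can compound with each round of re-tweeting:

```python
# Illustrative sketch (hypothetical numbers): expected audience of a tweet
# when each viewer re-tweets to their own followers with some probability.

def expected_reach(followers_per_user: float, retweet_prob: float, hops: int) -> float:
    """Expected cumulative audience after a given number of re-tweet 'hops'."""
    reach = followers_per_user          # people who see the original tweet
    new_viewers = followers_per_user
    for _ in range(hops):
        # each new viewer re-tweets with some probability to their own followers
        new_viewers = new_viewers * retweet_prob * followers_per_user
        reach += new_viewers
    return reach

if __name__ == "__main__":
    for hops in (1, 3, 5):
        print(f"after {hops} hops: ~{expected_reach(200, 0.05, hops):,.0f} people")
```

With these assumed values the per-hop growth factor exceeds one, so each round multiplies the audience, which is precisely why a distorted or false message can accumulate apparent credibility far faster than any correction can.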

Digital dirt can derail an individual’s career. Ninety percent of search firm recruiters look online to find anything that helps draw a complete picture of a job candidate (http://www.huffingtonpost.com/robyn-greenspan/dont-let-digital-dirt-der_b_780643.html). The digital information may indicate ethics violations, falsified qualifications, felony convictions and sexual harassment complaints, among other things. Due to the increased comfort with information-sharing and living openly online, much of this information is posted by the job candidate. “The challenge to professionals is twofold: create a positive identity and suppress a negative one. In 2010, if there are no Internet references to your success and history of accomplishments, you don’t exist.” (http://www.allword-news.co.uk/2010/11/09/dont-let-digital-dirt-derail-your-career/) It is important to develop and protect your personal brand image.

Just as there is a gap between the speed of technological development and changes in the law to help manage the technology, employers are also in catch-up mode with technology. Corporations, like the law, lag behind technology and social change.

Corporations’ computer use policies are inadequate and inappropriate for social media. The basis for managing working time now has to include reference to the use of personal computing devices in the work place and to the protection of the corporate brand and image, but this must be done without introducing any new ethical problems.

Some problems arise in part because when individuals use social media there is a blurring of the distinction between public information and private information, and between work information and personal information. Notes on LinkedIn, MySpace and Facebook are a blend of private and public information. What matters most to individuals on Facebook is their personal brand, which must be maintained. People talk about what they do, but not who they work for. Sometimes the media will be used to attack particular employers. For instance, there is a claim that Wal-Mart “bullies disabled greeters” on WalmartSucks.org. These attacks on employers can be intensive and come from multiple directions: web pages, Twitter, Facebook and MySpace accounts.

Businesses must be concerned with their public image, their brand. Social media is a powerful tool to promote a company, attract new customers and recruit the best talent. Just as digital dirt can derail a person’s career, it can destroy a company. Employers are being attacked on social media. In order to stay in business it is important for them to protect their brand from being tarnished by a pile of tweets and to keep their corporate image clean. Unfortunately, if a brand is tarnished by a flock of tweets, no matter what evidence is provided it will be difficult to repair, because social media has shifted trust away from institutions. It is hard to clear a company name.

Employers’ computer use policies need to address the new problems generated by social media while minimizing ethical problems. Employers need to establish internal, work-related controls for social media to re-focus their employees, as they attempted to refocus them with computer use policies. They need to address the new issues the technology raises for industrial espionage from within the company. They also need to re-address the way in which employees represent, and comment on, the company on social media when not at work. One of the underlying problems with such policies is that when you place restrictions on the kinds of things employees can say, you also make it difficult for them to be an asset to your brand. There is a need to balance the positive and negative effects of the policy.

Recent attempts to develop such policies have been problematic in a number of ways. Employers want to get the benefits of collaboration, but users of social media don’t really draw the lines around the corporation. Corporate social media policy needs to address this tension.

In 2010 a company, AMRC, fired an employee who had made untoward remarks about her manager on Facebook. AMRC had a policy which stated that “Employees are prohibited from making disparaging, discriminatory or defamatory comments when discussing the Company or the employee’s superiors, co-workers and/or competitors.” In the USA, the National Labor Relations Board brought suit on behalf of the employee against AMRC to determine whether this policy violated employee rights and free speech standards. The National Labor Relations Board was concerned with employee rights and wrongful termination.

In an attempt to address employers’ rights and wrongful harm caused by their employees, some of these social media policies are examined, identifying some of their ethical problems and providing suggestions and strategies to reduce those problems.