Facebook: Providing a Service to Members or a Platform to Advertisers?

AUTHOR
Stephen Lilley, Ph.D., Frances S. Grodzinsky, Ph.D. and Andra Gumbus, Ed.D.

ABSTRACT

INTRODUCTION

Facebook has over 500 million members (22% of all Internet users), who collectively spend over 500 billion minutes a month on the site and each share roughly 70 pieces of content a month (Rosen, 2010). With such staggering numbers it is not surprising that Facebook has become a rich venue for advertisers. Critics charge that advertising revenue is generated by exploiting social actors’ proclivity to form and maintain social ties. Facebook counters that we as a society are loosening our standards around privacy and becoming more transparent in our lives as we get used to sharing information about ourselves. Facebook sees itself as “utilitarians that give users the best technology for sharing and get out of their way” (The Economist 1/30/2010). Last year Facebook introduced additional privacy settings that allow users to opt out of sharing information with advertisers. Facebook’s ethical argument is more persuasive if we assume that users agree with transparency as a social goal, know that their information is shared with advertisers, accept this, or make use of privacy settings to restrict it. We conducted a survey of 349 Facebook users to determine the extent to which users’ attitudes and practices are consistent with this ideal. This paper reports on our findings.

ADVERTISING VIA FACEBOOK

Fan pages maintained by corporations are attracting more consumers than corporate web sites, making social networking the leading platform for relationship marketing for many brands (Neff, 2010). The use of Facebook as an advertising platform will only increase. The company expects to bring in revenues of $1.4 billion in 2010, similar to where Google was in 2003 (Stone, 2010). It is important to note that advertisers get this “earned media” benefit for free (Stone, 2010). Mining Facebook data can also help advertisers find potential customers through predictive targeting, which allows advertisers to figure out, for example, that the audience for a certain song may also like their ice cream. In 2009 Facebook added an ad tool called “learned targeting,” which allows ads to be pitched to the friends of a product’s existing fans. This is a marketer’s dream because it is based upon reality, not intuition. “It’s all about getting, largely through stealth means, a consumer to endorse a product or a brand and to communicate that to their network of friends” (Stone, 2010).
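To illustrate the mechanism behind friend-of-fan (“learned”) targeting described above, the following is a minimal sketch that selects an ad audience from a toy social graph. The data structures and the function name are hypothetical illustrations for this abstract, not Facebook’s actual systems or API.

```python
# Illustrative sketch only: a toy social graph and a friend-of-fan audience
# selection, in the spirit of the "learned targeting" tool described above.
# The data structures and the function name are hypothetical, not Facebook's API.
from typing import Dict, Set


def friends_of_fans(fans: Set[str], friendships: Dict[str, Set[str]]) -> Set[str]:
    """Return users who are not fans themselves but have at least one friend who is."""
    audience: Set[str] = set()
    for fan in fans:
        audience |= friendships.get(fan, set())   # add all of this fan's friends
    return audience - fans                        # exclude users who are already fans


# Toy example: alice likes the brand; her friends bob and carol become the ad audience.
friendships = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}
print(friends_of_fans({"alice"}, friendships))    # {'bob', 'carol'}
```

In practice such targeting would operate over data and consent mechanisms far richer than this toy example; the sketch only shows the graph traversal that the prose describes.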

THE EVOLVING NORM OF PRIVACY ON SOCIAL NETWORKING SITES: IS PROFITING FROM FRIENDSHIP ETHICAL?

The social networking business model is based on sharing members’ information with advertisers and marketers. Privacy activists are concerned that users’ rights are superseded by the drive to make a profit. There is a fine balance between alerting users to privacy concerns and encouraging the sharing of data. Under criticism for a recent decision to make more profile data available to anyone on the Internet, Zuckerberg defended Facebook by positioning it as a reflection of a new social reality that accepts sharing and the shifting of the social norm away from privacy toward a more open society. He describes this enhanced openness in human interactions as the greatest transformative force of our generation (The Economist 1/30/2010).

METHODOLOGY

We pose the following research questions: To what extent are Facebook members attentive to data sharing policies? Do they favor such policies and the social ideal of transparency? We conducted our survey in December 2010 on the campus of a private university in the United States. Our purposive sampling of undergraduates yielded 372 students, of whom 349 (94% of the sample) had a Facebook account. On average, members had held an account for four years and used Facebook six days a week for three hours per day. Respondents were asked to self-report on 1) the importance of Facebook to their social activities, 2) the level of their consumer activity on Facebook, 3) their knowledge of Facebook’s advertiser data sharing practices and their attitude toward them, 4) their use of sharing restrictions and the groups targeted, and 5) their assessment of transparency benefits, reputation risks, and consumer risks.

FINDINGS

In general, members were uninterested in and inattentive to Facebook’s consumer/advertising features. Respondents were much more likely to indicate that Facebook is “very important” or “absolutely crucial” to sustaining friendships (56%) than for exposure to good things to buy (10%). Three Facebook provisions for sharing profile information with advertisers were described in the questionnaire, and approximately one half of the sample admitted not knowing about them. Seventy-two percent of Facebook users had not controlled what information of theirs is made available to companies that host applications, games, etc., when their friends use them. Of the twelve sharing restrictions listed on the questionnaire, respondents reported employing few, with twice as many used to restrict friends (an average of 2.4 restrictions) as to restrict advertisers/marketers (1.1).

Most members, however, expressed opposition to Facebook’s data sharing policies. When prompted to provide their opinions about Facebook’s sharing of their data with advertisers, approximately 50% answered “strongly oppose” or “somewhat oppose,” 40% selected “don’t care,” and only 10% favored it. Asked whether it is right or wrong that, by selecting “Like” for an advertised product, playing games, etc., their Friends’ profile information is made available to host companies, 60% answered “somewhat wrong” or “very wrong,” 26% selected “neutral,” and 14% chose “somewhat right” or “very right.” Approximately 80% see a reputation risk in using Facebook, 65% see a consumer risk, and 60% believe that a potential transparency benefit is not worth exposure to these risks.

In the full paper, we provide a more detailed analysis of our findings and present our conclusions.

REFERENCES

Neff, Jack. What Happens When Facebook Trumps Your Brand Site? Advertising Age, Vol. 81, Issue 30, August 23, 2010, pp. 2-22.

Rosen, Jeffrey. The End of Forgetting. The New York Times, July 25, 2010, pp. 30-45.

Stone, Brad. Sell Your Friends: How Facebook Plans to Leverage Its 550 Million Users Into the Greatest Advertising Juggernaut Since Google. Bloomberg Businessweek, October 3, 2010.

The Economist. A World of Connections: A Special Report on Social Networking. January 30, 2010, pp. 1-20.

Social Media, the Power of Inference and the Case for Digital Interdependence

AUTHOR
Dr Ann Light

ABSTRACT

As a world, we have never had such effective tools for showing cause and effect, and for tracing the impact of activity in one place on social, environmental or economic change elsewhere. This ability to trace actions and manage attribution is particular to the applications we have become able to build in the last few years. We can now combine the power of computation to infer from datasets with digital networks as a platform for generating global data.

Several new kinds of potential are relevant here. The first is social, characterized by Web 2.0 tools. Anyone with the right access can now find and organize people with specified characteristics across time and space – be that globally or in our own locale – and engage in generating materials with them. The second kind involves combining information sources. This might mean using specialized sensors or tagged products to collect localized information which can then be assembled into global datasets. Searching and cross-referencing datasets to gather inferences is already providing insights into physical, social and economic challenges. For instance, inference is used to combat fraud by monitoring spending patterns; to match patients with suitable organ donors within a viable distance; and to serve adverts targeted at our particular characteristics and desires. Turned on the quantities of data that an actively data-gathering world could produce, our power of diagnosis would increase many times over. The Personal Genome Project is an example of this form of public collaboration (http://www.personalgenomes.org/), while examples of mass data gathering also exist, for example the search for extra-terrestrial life (http://www.seti.org/) and a global project to assess water quality (http://www.worldwatermuseum.com/).

The new capabilities bring with them new ethical challenges. We are being exhorted to live more responsibly with the fragile and diminishing resources of the Earth. If this is indeed a more ethical path (and this abstract will assume so) a number of new accountabilities come into being.

One type of accountability lies with designers. They have the potential to look beyond designing tools that appeal to individualistic values and to incorporate more social models of interaction into the systems they propose. This course of action is supported by the increasing trendiness of sharing, reusing and loaning rather than owning artefacts (see, for instance, www.ecomodo.com). They can use a synthesis of the forms of networking described above to provide applications which show interconnectedness more clearly and offer a means to interpret and evaluate impact. Smart meters have had a mixed reception (e.g. Price 2011), but careful design of applications can make monitoring into a social good rather than something perceived as an intrusion, a labour or a cost. While it is disingenuous to suggest that we can now produce certainties about complex phenomena by collecting more information about them and mining for patterns in the data, such activities would produce more adequate diagnostic tools, which could affect how we treat the health of both people and environment.

These tools would equip members of world society (Beck 2000) with the means to sample, test and report on their circumstances and what they see, or can trace or sense through electronic means. This is a very different use of the same functionality that data mining is enabling for major commercial interests (Bove and Mallett 2004), with its centralisation of information – and power. Thus, another challenge is to the individual consumer, to take an interest in the source of what is consumed, to acquaint themselves with the impact stories that are beginning to appear for products, to help in the logging of resources and the pooling of wisdom, to be an active member of society and to share in the reach of global research.

Last, a substantial challenge relates to governance and corporate social responsibility. The record of many countries and corporations as social and environmental custodians will inspire some cynicism. However, if we collect public data on the impact of political and economic choices and share the processed information with the people who have helped contribute to its gathering, there will be an implicit critique in the public domain of many existing policies and practices. There will be large numbers of invested people and a commensurate expectation that something happens based on the findings, i.e. that companies and governments act to improve their image and accountability. Whether motivated by ethics or expedience, any need for accountability would require change.

Given the conference theme of social computing, the paper will focus on the role of the application designer, developing the author’s earlier work on designing for interdependence (Light 2011). This recent work looked at characteristics of tools that would support world-society data gathering and the application of the political and economic pressure needed to act on the results, such as the need to design beyond any single ‘community’, to support negotiation across different groups, and to consider social justice and complex resource-sharing issues (Light 2011).

The ethical argument for greater consideration of interconnectedness is not new; it has been made repeatedly (Buckminster Fuller 1969, Papanek 1985, etc.). But it is timely to reconsider it because of the potential that networks offer, and to look at the ethical factors that arise from the greater potential to evaluate what responsible behaviour with resources might involve.

REFERENCES

Beck, U. What is Globalization? Polity Press, Cambridge, UK, 2000

Bove Jr, V. M. and Mallett, J. Collaborative knowledge building by smart sensors, BT Technology Journal 22 (4) 2004

Buckminster Fuller, R. Operating Manual for Spaceship Earth, Southern Illinois University Press, IL (1969)

Fogg, B.J. A Behavior Model for Persuasive Design, Persuasive’09, 2009

Light, A. Digital Interdependence and How We Design for it, Interactions, Mar/Apr (2011)

Papanek, V.J. Design for the real world: human ecology and social change (2nd edn), Academy, Chicago (1985)

Price, A. Smart Meters, Dumb Backlash, Good Technology, January 15, 2011: http://www.good.is/

Navigating the Ethical Minefield of Assessment in Higher Education

AUTHOR
Mike Leigh and Lucy Mathers

ABSTRACT

Context:

The ethical dimension in the assessment of student performance has been recognised for some time. Rowntree (1977), for example, states that effective assessment must be valid, reliable, feasible and fair, and that the interrelationships between these factors must be understood. In this context the concept of fairness is taken to mean that every student has an equal chance of getting a good result and that extraneous considerations cannot influence the final result. In order to ensure fairness, the design and marking of an assessment should avoid any bias from sources such as gender, ethnicity and disability. Within Higher Education (HE) this is of particular importance as lecturers have the dual role of being both teacher and arbiter (Macfarlane, 2004), in that the majority of assessment in HE does not involve public examinations. Much of the published work on assessment design and management has been focused strongly on pedagogy. For instance, Biggs (2007) considers the constructive alignment of learning with assessment and its effects on student motivation, whereas MacDonald (2008) discusses the role of assessment activities within a blended learning environment. Where the importance of fair assessment has been recognised, the ethical issues tend to be narrowly scoped and focused on specific concerns. For example, Fleming (1999) discusses bias in marking students’ written work; Wakeford (2003) looks at making assessment decisions more defensible in the face of student appeals; while others consider the role of the Internet in plagiarism in assessment (Thompson and Stobart, 2002; Hinman, 2008).

This study, however, first identifies a wide range of ethical aspects that may arise during the design and management of assessment activities. Second, it discusses the results of an investigation into the levels of awareness and the attitudes of both staff and students towards the ethical dimensions of assessment that have been identified. Finally, it draws on the analysis from the investigation to provide guidance to staff and students that promotes ethical actions in assessment activities.

Research Design:

A literature survey was undertaken which, together with the results pertaining to online assessment from previous investigations (Leigh, 2006; Mathers and Hay, 2010), was used to ascertain a wide range of ethical concerns pertaining to a broad set of assessment types in Higher Education. A questionnaire was used as an efficient method of ascertaining the views of a large number of students on the identified issues. Focus groups (Bryman, 2008) were used to further explore selected students’ awareness of the ethical issues identified from the questionnaire and their attitudes towards them. Participants for these groups were chosen so that their profiles represented a cross-section of the divergent backgrounds of the university’s students, including gender, age, ethnicity and particular learning requirements. A series of semi-structured interviews (Denscombe, 2007) was carried out in order to explore staff awareness of, and attitudes towards, the selected ethical concerns. These participants were chosen to capture the views of staff with differing levels of experience who have administered a broad range of assessment tools at various levels of study. Also included among the staff interviewed were the Faculty Disabilities Support Coordinator and the Academic Practices Officer responsible for dealing with plagiarism cases.

Preliminary Findings:

At the time of preparing this abstract, the focus groups and semi-structured interviews are not complete; however, from the work completed to date some comments can be made on the findings. For example, it is apparent that the students participating in this study have some interesting gaps in their awareness of ethical issues pertaining to assessment activities. Although they recognise some unethical practices, such as plagiarising source materials and cheating in examinations, many are unaware of other issues, such as equality of access to assessment mechanisms, unless they are directly affected themselves. Similar attitudes and levels of awareness were seen around issues of student behaviour when undertaking group assessment activities. However, in these cases students tended to have stronger opinions when they were affected by an issue.

It is also clear from the work to date that there is a close relationship between pedagogical and ethical issues in the successful implementation of assessment activities. This paper will identify those relationships and explore their implications for both staff and students. It will also provide guidance on dealing with ethical dilemmas that may be encountered in assessment design and management.

REFERENCES

Biggs, J. B. (2007) Teaching for Quality Learning at University 3rd Ed. (Maidenhead, Open University Press).

Bryman, A. (2008) Social Research Methods 3rd ed. (Oxford, Oxford University Press).

Denscombe, M. (2007) The Good Research Guide: for small-scale social research projects 3rd ed. (Maidenhead, Open University Press).

Fleming, N. D. (1999) Bias in Marking Students’ Written Work: Quality?, in Brown, S. and Glasner, A. Assessment Matters in Higher Education: Choosing and Using Diverse Approaches, (Maidenhead, Open University Press).

Hinman, L. M. (2008) Rethinking Plagiarism in a Virtual World, Proceedings of the tenth international ETHICOMP conference, Mantua, Italy, September 2008.

Leigh, M. (2010) “’Am I Bothered?’: Student Attitudes to some Ethical Implications of the use of Virtual Learning Environments”, ETHICOMP 2010, Rovira i Virgili University, Tarragona, Spain, 14-16 April 2010.

MacDonald, J. (2008) Blended Learning and Online Tutoring: planning, Learner Support and Activity Design, (Aldershot, Gower Publishing).

Macfarlane, B. (2004) Teaching with Integrity: the ethics of higher education practice (London, RoutledgeFalmer).

Mathers, L. and M. Hay (2010) Developing autonomous learning in students: creating and communicating an appropriate assessment strategy. New and aspiring programme leaders workshop series, De Montfort University, 24 November 2010.

Rowntree, D. (1977) Assessing students: how shall we know them? (London, RoutledgeFalmer).

Thompson, J. B. and Stobart, S. C. (2002) University Research, Plagiarism and the Internet: Problems and Possible Solutions, Proceedings of the sixth international ETHICOMP conference, Lisbon, Portugal, November 2002

Wakeford, R. (2003) Principles of student assessment, in Fry, H., Ketteridge, S. & Marshall, S. A Handbook for Teaching & Learning in Higher Education: enhancing academic practice (London, RoutledgeFalmer).

Open source produced encyclopedias: Towards a broader view of expertise

AUTHOR
Paul B. de Laat

ABSTRACT

Open source as a method of production has spread from software to other types of content, such as reference works. In previous work (de Laat 2011) six online encyclopedias of this kind were analyzed. The strongest clash of opinions concerned the role of experts and expertise in creating an encyclopedia. On one end of the scale we find Encyclopedia of Earth and Scholarpedia, which emphasize the crucial role of expertise; as a result, the creation and moderation of evolving articles is largely reserved to experts. On the other end we find Wikipedia, which de-emphasizes the role of experts; the editing process is open to anybody, without distinctions.

Wikipedia is currently more successful than all other encyclopedias of this kind: the total number of entries is in the millions (for the English version alone). More importantly, the quality of Wikipedian entries is astonishingly high. That, at least, would seem to be indicated by several investigations that compared Wikipedia to classic encyclopedias (Britannica, Encarta and Brockhaus) on the same selection of scientific subjects. Experts rated articles from these sources as being of about equal—though varying—quality. Wikipedia passed a kind of Turing test, in that experts were unable to reliably distinguish between Wikipedia and the other encyclopedias.

How is this astonishing finding to be explained? One possible explanation is to argue that although amateurs predominate, some proper experts have come along and made all the difference. This sounds rather unlikely, since reportedly very few experts participate at all. Another explanation argues that some of the laymen involved have become proper experts themselves while working on the entries involved. This would also seem to be far-fetched, as most expertises require considerable investments of time and energy for their acquisition. Therefore I want to explore a third kind of explanation: the conjecture that amateurs, by continuous discussion in wiki spaces, may acquire enough capabilities to produce reference articles of high quality. That is, they do not become true experts who are able to actually do the science involved—they only become ‘conversational’ partners of the experts involved. That is enough, however, for Wikipedian entries to pass the Turing test for quality described above.

Cognoscenti will recognize, of course, the very concept of ‘interactional expertise’ as coined by Collins and Evans (2007). They argue that between ubiquitous expertises (like popular understanding of science) and the specialist expertise that is capable of actually doing the science involved (‘contributory expertise’) another specialist expertise can be identified: interactional expertise. Its possessors can engage in intelligent conversation concerning the domain involved—without for that matter being able to contribute to it. Linguistic competences are developed, not practical ones. This kind of expertise is the province of research journalists, scientists involved in peer review, and the like.

To what extent is this a plausible conjecture for the success of Wikipedia? An argument in favour is that wiki software links a discussion page to each textual entry. Sometimes lengthy discussions take place on these pages. This is a clear indication of linguistic exchanges occurring that may contribute to developing interactional capabilities. On the other hand, nasty edit wars may erupt between competing factions; as a result, learning processes will suffer (cf. Sanger 2009). Also, real experts may simply stay away from an entry, ruling out most of the needed linguistic interactions. Finally, some expertises may just be too hard to become acquainted with.

This conjecture can usefully be connected with discussions about the quality of Wikipedian articles (for more details see de Laat 2011). On the one hand, we observe fierce debate inside Wikipedia about how to uphold quality in view of so-called vandalism. One démarche considered is a system of review: each and every edit is to be scrutinized for vandalism before insertion into the public version of the entry involved. Which criteria are to be used in designating reviewers? In the German Wikipedia (where the review system has been in operation for two years now), registered users are considered fit for the surveying job once they have been active for 60 days and have performed at least 300 edits. So a high edit count is the main criterion. I will argue that it is not so much intended to indicate a kind of expertise (e.g., editing expertise) as loyalty and dedication to the Wikipedian enterprise. Moral—not epistemological—qualities are gauged.

On the other hand, a burgeoning research stream in computer science is also targeting the quality problem. The leading approach is to construct computational metrics that purportedly measure the credibility of entries. A promising method is based on their revision histories (as available on Wikipedian servers) and focuses on the survival of individual edits over time. Each round of editing is seen as casting a vote upon the edits in sight. The more often edits remain intact, the more both the credibility of the text and the reputation of the author as a capable contributor rise (and vice versa). So the measure of author productivity suggested here is edit longevity. Possibly it will be used in future for appointing reviewers who judge quality proper. I will argue that such author reputation—targeting the epistemic qualities of authors—indicates precisely the Collins-and-Evans mid-category of interactional experts. We cannot disentangle, of course, whether we are dealing with interactional or possibly contributory experts—they will exhibit, so to speak, the same linguistic behaviour (cf. transitivity of expertises).
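To make the edit-longevity idea concrete, the following is a minimal sketch of such a metric over a simplified revision history, written in Python. The data model (revisions as sets of attributed edits) and the scoring rule are illustrative assumptions, not the specific algorithms used in the research literature.

```python
# Minimal sketch of an edit-longevity measure over a simplified revision history.
# Assumptions (not from the source): each revision is a dict mapping a stable
# edit id to its author, and an edit "survives" a later revision if that revision
# still contains it. Published metrics are considerably more sophisticated.
from collections import defaultdict
from typing import Dict, List, Tuple


def author_longevity(history: List[Dict[str, str]]) -> Dict[str, float]:
    """Credit each author with the number of later revisions in which their edits survive."""
    introduced: Dict[str, Tuple[str, int]] = {}   # edit id -> (author, revision where it first appears)
    for i, revision in enumerate(history):
        for edit_id, author in revision.items():
            introduced.setdefault(edit_id, (author, i))

    scores: Dict[str, float] = defaultdict(float)
    for edit_id, (author, start) in introduced.items():
        # every later revision that still contains the edit counts as a vote in its favour
        scores[author] += sum(1.0 for rev in history[start + 1:] if edit_id in rev)
    return dict(scores)


# Toy history: ann's edit "e1" survives two later rounds; bob's "e2" is reverted at once.
history = [
    {"e1": "ann", "e2": "bob"},
    {"e1": "ann"},
    {"e1": "ann", "e3": "cat"},
]
print(author_longevity(history))   # {'ann': 2.0, 'bob': 0.0, 'cat': 0.0}
```

Under this toy scoring, an author whose contributions persist across many revisions accumulates credit, which is the intuition behind using edit longevity as a proxy for author reputation.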

If this interactional view on Wikipedian policy is correct, it would reflect on the editorial policies used by some of the other online encyclopedias. Precisely the egalitarian approach seems suitable for nurturing competences. This is neglected with Scholarpedia and Encyclopedia of Earth: they only admit recognized (contributory) experts. As a result, interactional experts—whether nascent or accomplished—and their possible contributions are simply excluded. With Citizendium prospects are better: anyone is admitted. But it all depends on the leadership style of the ‘moderating’ (contributory) expert whether a suitable learning process comes to fruition or is nipped in the bud.

REFERENCES

H. Collins and R. Evans. 2007. Rethinking expertise. Chicago and London: The University of Chicago Press.

P.B. de Laat. 2011. Open source production of encyclopedias: Editorial policies at the intersection of organizational and epistemological trust. Social Epistemology (under consideration).

L.M. Sanger. 2009. The fate of expertise after Wikipedia. Episteme, 6(1): 52-73.

Governance challenges and technology innovation for social use*

AUTHOR
Aygen Kurt and Penny Duquenoy

ABSTRACT

This paper takes as its theme the consideration of ethics in technology development and takes as its foundation work undertaken in the European Union’s (EU) Science in Society EGAIS Project (Framework 7), in which the authors are partners. The Ethical Governance of Emerging Technologies (EGAIS) project investigates the ethics governance processes for European co-funded research and development projects. Our position in this paper will be to discuss some of the results from the project and then consider the implications for non-government funded developments, such as innovations that occur independently or within small organisations. We will base our argument on innovation theory (and the discourse at the European level) as a means to understand the development picture, and draw out similarities and differences between the more regulated domains of funded research and the increasingly common innovative developments that occur outside any regulatory framework.

This abstract begins with a short overview of the area, offers a flavour of some of the results from the EGAIS project, and progresses to examples of contemporary development cultures. We raise some questions that are pertinent to ethical responsibility and new technologies.

Innovation has been at the heart of European funded research and policy discourse, more explicitly so since the beginning of the Framework Programmes in the 1980s. Today it continues to be vital and, more importantly, the ethical aspects of technological innovation and “responsible innovation” are becoming essential elements of the funding mechanisms. The European Commission (EC) has recently launched a consultation on the future of research and innovation funding in Europe with a focus on increasing research funding for innovation and spurring its economic impact on society (European Commission, 2011). To pave the way for this approach, late last year the EC published a communication paper about the launch of a European Innovation Union, setting the rationale and targets to improve Europe’s ability to drive innovation in products, services and processes to tackle major challenges facing society today (European Commission, 2010).

In this picture, embedding ethical thinking into Europe’s technology development culture and raising awareness of the ethical implications of new technologies and innovations is a difficult task. On the one hand, the innovation-driven techno-economic paradigm is embedded in the EU’s research policy discourse; on the other hand, integrating ethical thinking into innovation design processes might hinder the pace of innovation and competition. However, within the boundaries of an emerging European knowledge system (Stein 2004), the enhancement of innovation and the rise of competitiveness can go hand in hand with social, ethical, legal and cultural considerations as long as they are sourced, produced and consumed within the system.

In the EGAIS project, we have coordinated the analysis of a number of EU-funded technology development projects to understand whether the ethical and social aspects of the technologies being produced were recognised and, if so, how the project partners resolved the issues and what the governance arrangements were. In this project we were looking for instances of projects taking a perspective broad enough to recognise social and ethical impacts, to move beyond the technological point of view, to give thought to ethical principles (not just law, e.g. in the case of privacy), and to use processes that allow these considerations to emerge, be discussed, and be resolved. Few projects demonstrated these characteristics. (The submitted paper will give more detail.)

As mentioned earlier, the projects we studied are all examples of funded research which, in one way or another, contribute to and utilize the accumulated knowledge of the European Research Area, which is defined by the EU’s own policy discourse. However, if we look at other areas of innovation which are not funded through EU-sourced mechanisms but which still have an impact on European social life, the key challenge appears to be how an ethically embedded technology development culture will inform our practices and how ethical norms in this context would be defined. Social media, and open source systems embedded in a social media context, are good examples which do not fit within the boundaries of the funded-research model.

Linus Torvalds, the creator of Linux, for instance, refers to three key motivations that drive progress (in relation to technology): survival, social ties and entertainment (in Himanen 2001). However, when it comes to designing open source software, the programmer’s key and only motivation would be entertainment. Torvalds’s notion of entertainment involves the desire to know, curiosity, innovation and enjoyment of the challenge. If this challenge and enjoyment did not exist, and the technology were no longer entertaining, then something like Linux would not have come about.

In this respect, Himanen (2001) suggests that the work ethic of the open source innovator, namely the hacker (in his words), is guided by various values: passion, freedom, social worth, openness, activity and caring, with a focus on ‘concern for others as an end in itself and a desire to rid the network society of the survival mentality that so easily results from its logic’, and creativity.

In such a “beyond the borders of the system” approach, the questions we need to ask would be:

  • Who are we expecting ethical responsibility from in technology development?
  • How can the technologist/innovator/entrepreneur be capacitated in a competitive and supply-driven knowledge economy?
  • How can technology regimes be created (and what should they be composed of) for ethically responsible technologies to flourish?
  • What could be embedded within the research, innovation and knowledge systems for society to cope with the ethical aspects of technologies produced outside of the system?

*The research leading to these results has received funding from the European Union’s Seventh Framework Programme FP7/2007-2013 under grant agreement n° SIS8-CT-2009-230291

REFERENCES

European Commission, (2010). Communication From the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Europe 2020 Flagship Initiative: Innovation Union. Brussels, 6.10.2010. COM(2010) 546 final.

European Commission, (2011). Green Paper: From challenges to opportunities: towards a common strategic framework for EU research and innovation funding. Brussels, 9.2.2011. COM(2011) 48.

Himanen, P. (2001). The Hacker Ethic. Random House: New York

Stein, J. A. (2004) ‘Is there a European Knowledge System?’, in S. Borrás (guest ed.), special issue of Science and Public Policy on Towards a European System of Innovation?, Vol. 31, No. 6, pp. 435-447.

Social Engineering: the Psychological Attack on Information Security

AUTHOR
Subrahmaniam Krishnan-Harihara, Vasanthi Nagappan and Prof Andrew Basden

ABSTRACT

It is widely recognized that technical security by itself does not offer sufficient protection against security breaches. Of all the different kinds of security breaches, social engineering is perhaps the one against which technical security is least effective, because this form of attack depends on the manipulation of people. Social engineering is the human approach to violating security. It involves obtaining sensitive information or access rights to assets through deception or impersonation. It is also probably the most difficult type of breach to deal with. Although social engineering may not be as widely known as other security breaches, it can have very serious consequences for an organization. Social engineering takes advantage of human error or carelessness, and even the genuine human desire to be helpful and trusting. It is a popular form of attack because there are no technical barriers to overcome and because it may yield valuable information which can then be used for other breaches. In many cases, the victim of social engineering does not realize that he/she has been manipulated.

While each social engineering attack is unique, the commonality is the pattern. Most social engineering attacks follow a four-step process of information gathering, developing a relationship, exploitation and execution (Mitnick & Simon 2002). In the first step, several techniques may be used by the social engineer to gain knowledge about the intended target. This may include information such as phone numbers, birthdates, designations or the company’s organizational chart, gathered from public sources such as phone books, web pages, etc. This is followed by building a rapport with the victim, which over time can develop into a relationship of mutual trust and friendship. Developing trust is crucial because it can facilitate the exchange of favours, or the attacker can abuse the trust for the purpose of carrying out a breach. Obtaining the sensitive information or access required is the third step in the process, and once that has happened, the attacker can carry out the final step of actually perpetrating the attack. In some cases the process iterates through further cycles, or the attack itself may involve several cycles.

The underlying process of social engineering is, therefore, thoroughly psychological because it is mainly about creating illusory pretexts, wherein the victim believes that the pretext under which the attacker approaches him/her is genuine and that the attacker genuinely requires the information that can be furnished only by the victim. The genuineness of the entire situation is established by the attacker through a display of confidence, authority, the right credentials and good communication. By empowering the victim with praise and by presenting himself/herself as a trustworthy and legitimate person, the attacker can then proceed to gain all the information required in a confident manner.

Social engineering is a psychological process by which an individual can gain information from another (target) individual. In a social engineering attack, the attacker often relies on mental imagery and cues rather than direct, logical arguments to trigger the target into revealing the required information or performing the required activity. Because of the intense mental process through which this is done, the target individual often feels compelled to comply with the attacker. Success for the attacker depends on making this feeling strong enough that the intended victim is persuaded to forego established procedures. A social engineer preys on certain qualities of human nature, all of which have a psychological basis: the desire to be helpful, the tendency to trust people, the fear of getting into trouble, the willingness to cut corners, the fear of job loss or personal embarrassment, and the desire for prestige. It is by exploiting these qualities that the attacker secures the release of information (Turner 2005; Peltier 2006).

Social engineering, as a security attack, needs to be given adequate attention because of its ability to take advantage of the human weaknesses of trust and helpfulness. A successful social engineering attack can lead to other serious offences such as identity theft and industrial espionage, not only at the organizational level but also at the individual level. This paper aims to study this human element of security because it is the area most prone to attack, as opposed to the technical means of providing security. It is evident that to understand social engineering, the psychological process must be studied. This paper will, therefore, attempt to explore the psychological element of social engineering. In doing so, it will seek to identify the causes of social engineering and what organisations could do to counter it. A qualitative analysis of data collected for the study will be presented to evaluate the level of awareness about social engineering. Finally, recommendations for building awareness about social engineering will be provided, based on a review of current psychological research into the subject. The authors expect that this paper will be valuable to information security professionals seeking to build effective security programmes covering both technical and non-technical aspects of information security.

REFERENCES

Mitnick K D and Simon W L (2002) ‘The Art of Deception’ Wiley Publishing, Indianapolis, Indiana, USA

Peltier T R (2006) ‘Social Engineering: Concepts and Solutions’ Information Security and Risk Management, EDPACS 33(8), pp. 1-13

Turner T (2005) ‘Social Engineering: Can Organizations Win the Battle?’ available online http://www.infosecwriters.com/text_resources/pdf/Social_Engineering_Can_Organizations_Win.pdf [retrieved: 04 February, 2011]