Researching Social News – A novel forum for public discourse and information sharing.

AUTHOR
Richard Mills

ABSTRACT

Social News websites deploy voting systems whereby their community of users collectively ranks and sorts large volumes of content without explicit organisation or editorial control. The primary focus of this paper is the website reddit.com. On reddit, individual users can submit items of content (either links to web resources or text posts) and also vote (up or down) on the items submitted by other users. This voting system also applies to comments on posts. On any given day tens of thousands of users participate in content submission and voting, and the aggregate of this activity produces a ‘front page’ list of 25 items seen by a large number of visitors to the website. Through the voting interface, large numbers of users make a collective decision about which posts are displayed on the front page. Furthermore, each of these posts is accompanied by a democratically mediated discussion contributed to by thousands of individuals.

This paper will present research on reddit.com conducted using a variety of methods. As with any Social Media site, reddit’s web servers contain precise and accurate records of all activity taking place on the website. Reddit’s administrators have provided one month of voting data from their servers for the purposes of research (henceforth ‘back-end’ data). This back-end data allows questions relating to patterns of behaviour on the website to be addressed. However, the back-end data has been anonymised with respect to post identity, making it unsuitable if we wish to consider qualities of the content being ranked. Reddit (again in common with many Social Media applications) provides an Application Programming Interface (API) which can be used to extract and store data. This paper will present details of the data extraction process, and offer examples of research questions which are better suited to back-end and front-end data respectively.
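As an illustration of front-end data collection, the sketch below pulls one page of a subreddit listing through reddit’s public JSON interface and keeps the fields most relevant to ranking research. This is a minimal sketch under stated assumptions: the helper names and the particular fields retained are illustrative, not the actual extraction pipeline used in the study.

```python
import json
import urllib.request

def fetch_listing(subreddit="all", limit=25):
    # Reddit exposes any listing page as JSON by appending .json to its URL.
    # A descriptive User-Agent is expected for scripted access.
    url = f"https://www.reddit.com/r/{subreddit}/.json?limit={limit}"
    req = urllib.request.Request(
        url, headers={"User-Agent": "reddit-research-sketch/0.1"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_posts(listing):
    # Keep only the fields a ranking analysis would need; the posts in a
    # listing sit under data -> children -> data.
    return [
        {
            "id": c["data"]["id"],
            "score": c["data"]["score"],
            "num_comments": c["data"]["num_comments"],
            "created_utc": c["data"]["created_utc"],
        }
        for c in listing["data"]["children"]
    ]
```

Repeated snapshots of such listings, stored with timestamps, would yield the kind of front-end time-series data that questions about ranking and attention require.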

The interfaces deployed by Social News websites bear some similarity to those of Social Bookmarking websites. The major dissimilarities between Social News and Social Bookmarking are the degree to which Social News websites focus attention on the website’s front page, and the absence of tagging on Social News sites.

This paper will present the following analyses related to back-end and front-end data obtained directly from the website.

  • The hypothesis that Social News websites focus attention on their front page is supported by a Power Law (with exponential cut-off) distribution of votes between posts. This is seen as key to allowing a sense of website identity to emerge among its large user-base.
  • The lifespan of a post is analysed in detail, with results suggesting that early voting (within the first hour) is a strong predictor of the post’s ultimate success or failure.
  • The relative attention received (in terms of votes and also hits) by posts in different areas of the website is analysed, suggesting that the front page is indeed where most of reddit users’ attention is focused.
  • Patterns of user behaviour are studied, revealing that people use their functionally identical accounts to participate in diverse ways.
  • Several strategies are also identified whereby users seek to maximise the impact of their votes in determining what the front page will look like.
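The distribution named in the first bullet can be written down directly: a power law with exponential cut-off has density proportional to x^(−α)·exp(−x/λ), decaying as a pure power law for small vote counts and then falling off faster beyond the cut-off scale. A minimal sketch, with illustrative (not fitted) parameter values:

```python
import math

def powerlaw_cutoff(x, alpha=2.0, lam=100.0):
    # Unnormalised density of a power law with exponential cut-off:
    # power-law decay x**(-alpha) for x << lam, with an additional
    # exponential suppression beyond the cut-off scale lam.
    return x ** (-alpha) * math.exp(-x / lam)
```

Under such a distribution most posts receive very few votes while a small number (those reaching the front page) receive a disproportionately large share, which is consistent with attention being concentrated on the front page.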

Content analysis has also been employed in an effort to understand the broader significance of reddit with respect to public discourse on the Internet. The background to this is the literature on public discourse on the Internet (e.g. Benkler, 2006; Negroponte, 1995; Sunstein, 2002; Noam, 2003). Case studies are being conducted on reddit’s coverage of the Wikileaks diplomatic cables (November/December 2010) and the current Egyptian protests. The primary aim of these case studies is to determine how coverage of the stories on reddit differs from that of conventional news outlets. Preliminary results suggest that many posts on reddit link to conventional news outlets, with these articles being discussed through the commenting system. There are, however, also posts which speak to aspects of the story not covered by conventional news outlets, as well as text posts by users which espouse particular points of view, ask questions of the community, or call on reddit users/visitors to engage in some form of collective action related to the ongoing story. A large proportion of posts on these subjects address the conventional news coverage of the story directly; often these are highly critical. This suggests that Social News websites may be of particular importance to issues surrounding the convergence of old and new media (Jenkins, 2006).

The method employed in these case studies involves looking at the profile of submissions on the topic, and comparing this to the voting reception each submission receives. A secondary aim of these case studies is to establish whether reddit’s voting interface is capable of expressing conflicting points of view; for example, if I see a pro-Wikileaks post on the front page, does this mean I am unlikely to also see an anti-Wikileaks post appearing there? What proportion of posts about the subject are for or against the topic, and is this proportion reflected in the number of posts of each persuasion that reach the front page? These questions relate to issues of fragmentation and polarisation of public discourse on the Internet (e.g. Negroponte, 1995).

Finally, this paper will present the results of an ongoing experiment which looks at how well reddit’s voting interface performs on the task of sorting large volumes of content such that the ‘best’ content (as judged by reddit users) has the highest rank and will be seen by the most people. This experiment also looks at whether an item’s voting response on reddit affects reddit users’ perception of the content, and if so, how quickly this relationship is learned/un-learned.

REFERENCES

Y. Benkler. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, New Haven, 2006.

The Useful Relationship between Ethics and Systems Quality

AUTHOR
Craig McDonald

ABSTRACT

This paper examines the relationship between the concepts of ethics and of quality in the context of computer-based information systems. It argues that the relationship between the concepts is useful in that quality provides a concrete, accepted means for embedding ethics in professional practice and ethics provides a theoretical underpinning for quality and a means for its critique. The paper proceeds by briefly reviewing the concept of ethics and more extensively examines quality. It then teases out the relationship between the concepts and gives examples of the utility of that relationship in professional practice.

QUALITY: The term quality is used to express an assessment of the overall value of an object or activity. One account of quality considers the subjective experience people have of an object or activity. This experience might be described in answers to questions like “how do you rate the quality of this wine?”. Answers to such questions are, however, more descriptive of the state of the answerer than of the wine. An alternate account of quality is said to be objective. An expert, for example, might justify their assessment of a wine by referring to its particular properties – its colour, acid and balance – describing them as only wine experts can. But a description of properties is not a quality assessment. Quality is not a measure of some property of an object in the sense that 14% is a measure of alcohol content or slightly cloudy is a measure of wine clarity. Quality cannot be found by inspection. Quality, as an objective matter, is a construct which has an agreed meaning for a community and which allows the values of natural properties to be combined to yield an assessment of quality. The community decides what properties will be considered, what types of measurement will be made, what values will be placed on them and how they will be combined to produce an objective evaluation.

The above two accounts of quality distinguish between an objective, technical, community sense of the word and its subjective, experiential, individual sense. Quality is a strong, value-laden word; it is not always easy to separate the accounts in practice. Debate about the quality of a particular wine for example can easily shift back and forward between the two senses, confounding both.
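The community-agreed combination of properties described above can be sketched as a weighted aggregation of measured property values. The property names and weights here are illustrative assumptions, not a scheme proposed in the paper:

```python
def quality_score(properties, weights):
    # Combine measured property values into one quality figure using
    # community-agreed weights; both dicts must cover the same properties.
    assert set(properties) == set(weights)
    total = sum(weights.values())
    return sum(properties[k] * weights[k] for k in properties) / total

# e.g. a tasting panel that has agreed to weight balance twice as heavily:
# quality_score({"colour": 8, "acid": 6, "balance": 9},
#               {"colour": 1, "acid": 1, "balance": 2})
```

The point of the sketch is that the resulting score is meaningless outside the community that fixed the properties and weights, which is exactly the ‘objective’ sense of quality distinguished above.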

Quality Criteria: If we accept that quality is not inherent in an activity or artefact but is a concept to be derived from attributes that are inherent, then we need to consider and specify what it is that will count as quality in any particular artefact or activity. Rather than considering each individual situation from scratch, there are some general, overlapping quality ideas that can be brought to bear, including context awareness, evidence-based action, fitness for use, stakeholder impacts and traceability (for accountability). Notice that many criteria relate to the impact of an artefact or activity on those it affects.

ETHICS: There seem to be three main kinds of discussion about ethics and ICT. The first tackles particular issues, like workplace surveillance or copying software, and examines the ethical principles that might apply to them. Laws are framed from this kind of discussion, so it is critically important. The second looks at specific events that reveal unusual ethical aspects and dilemmas. The final kind of ethics and ICT discussion starts from first principles and sees issues and events as applications of ethical principle. This approach has given us a valuable practical tool for ethical evaluation – stakeholder analysis (Pouloudi 2000, Bowern et al. 2004).

RELATIONSHIP BETWEEN QUALITY AND ETHICS: So, it seems that the effects of our systems on our stakeholders are a core idea behind both ethics and quality. Systems developers are at the sharp end of these issues because their work has significant and widespread impacts on others. In quality assuring systems, are we not applying ethics?

Professional Ethics: Bittner and Hornecker (2002) argue that to have responsibility for an action (or for not taking an action) in some situation, a person needs to have an element of voluntariness, autonomy and foresight, and there needs to be a causal link between the action and the effect. But complex organisations and large systems diffuse and disguise responsibility. Perhaps quality assurance is a means to address this complexity.

A USEFUL RELATIONSHIP: The reason the relationship between ethics and quality is described as useful is twofold. Firstly, it has been found that ethics, as a way of thinking, is poorly embedded in professional computing practice and in education for professional practice (Lucas & Mason, 2008). So quality provides a means for the practical expression of ethics in systems development and its embedding in systems artefacts in a way that is meaningful to practitioners, measurable in practice and tangible in discussion.

The second useful aspect of the relationship is that ethics provides a rationale and a justification for quality, its specification and its assurance.

EXAMPLES OF THE RELATIONSHIP: The interaction of quality and ethics will be elaborated in an example from Software Engineering and another from Higher Education.

Denial of Service: Hacktivism and cyber extortion against the establishment.

AUTHOR
Roger William Masters

ABSTRACT

Denial of Service (DOS) attacks are described by Yar (2006) as attacks on a network computer or computers that disrupt normal operations to such an extent that legitimate users can no longer access their services. Whilst many technical papers have been written by computer scientists in order to combat DOS attacks, there have been few criminological studies on this subject. This study sets out to detail the history of DOS attacks around the world. It provides an examination of the underlying reasons such attacks have been carried out and examines the extent to which the forces of law and order have been successful in containing the threats and prosecuting the offenders.

In the UK, Internet usage has grown over the last 10 years at an extraordinary rate, to the point where in 2010 60% of the UK adult population accessed the Internet every day or almost every day. This is nearly double the estimate of 16.5 million in 2006. Partly fuelled by the relatively low start-up costs of e-business websites, and massive investment by the banking industry in secure e-payment facilities, the amount of trade and business conducted electronically has increased in line with this growth in Internet usage by the population at large. The number of adults in the UK who bought or ordered goods or services online within the last 12 months reached 31 million in 2010 (Office for National Statistics 2010). Similar growth rates are being seen in many developed countries around the world. Businesses are becoming increasingly reliant on the Internet as a source of turnover and income, and, as a result, continuous Internet availability is now an integral criterion for business success.

Outside the world of commerce, governments, the forces of law and order, NGOs, higher educational institutions and many other non-commercial organisations are also becoming increasingly reliant on the availability of the Internet. Any disruption to Internet services whereby legitimate users are unable to access an organisation’s website is therefore potentially disruptive, costly and could have significant ramifications both economically and reputationally.

Denial of Service (DOS) or Distributed Denial of Service (DDOS) attacks set out to achieve just this. DOS attacks on the private and public sectors are now common, and attacks have been used for a variety of criminal purposes. DOS attacks are not new, but remain difficult to combat. The first recorded large-scale attack was in August 1999 on a network used by students and staff at the University of Minnesota, resulting in the network being shut down for more than two days. In 2000, sites at Yahoo, Buy.com, CNN, Amazon and eBay were all affected by these attacks. In 2001 and 2002 Microsoft’s servers were disabled. A Russian crime gang used a DOS attack in an extortion attempt on UK gambling websites during the 2004 Grand National. In recent times, DOS attacks have been used to show ‘political’ support for WikiLeaks and have been aimed at MasterCard, Visa, PostFinance (a Swiss bank), Sarah Palin’s website, and the Swedish Government. Other unrelated attacks have been launched against institutions as diverse as the Church of Scientology and the website of the U.K. Intellectual Property Office. In some cases, companies have allegedly paid the criminals a ransom to reveal how they brought the systems down, because from the corporate perspective it can cost less to pay the criminals than to suffer the consequences of a DOS attack. Many companies have paid ransoms, and many have apparently suffered DOS attacks but not formally reported them to the authorities for fear of adverse publicity.

Those who seek to attack computerised systems in this way can be driven by a wide range of motives from a belief in freedom of access to information for all, to those who wish to perpetrate acts of vandalism, incapacitation, espionage or terrorism. Many of the recent high profile attacks have been in support of political and ideological goals. Government websites in both Tunisia and Egypt have been recently targeted by DOS attacks emanating from ‘hacktivists’ apparently in retaliation for censorship in both countries.

To date, relatively few arrests have been made around the world, and these have occurred primarily only in response to high-profile cases. A 15-year-old was amongst those recently arrested in the U.K. in connection with the WikiLeaks case, and the group claiming responsibility, Anonymous, is quoted as saying ‘You can easily arrest individuals, but you cannot arrest an ideology’.

Identifying, tracking down, arresting and successfully prosecuting those involved demands international police cooperation and a legal framework that facilitates successful conviction. In many cases it would seem that such conditions are far from being achieved. This paper sets out firstly to review the nature of the crimes committed and examines the impact of such attacks on organisations as well as the dynamics of both victims and offenders. It then focuses on the regulatory mechanisms developed to prevent such attacks and considers how the globalised international nature of the Internet has assisted or impaired such enforcement. It concludes with an assessment of the effectiveness of the enforcement and criminal law systems in combating such attacks, both nationally and internationally.

Does social computing help professional social networks? An experiment on web 2.0 to promote quality of working life.

AUTHOR
Emmanuel Martin

ABSTRACT

“There can be many positive effects of social computing, and the use of so-called ‘social media’. For example, it can be used beyond socialising to seek advice and professional development as well as offering new business uses. It creates a collective intelligence across society through interactive collaboration across fast communication networks.” (Ethicomp 2011 Call for papers)

Beyond “seeking new business uses”, social media are increasingly used by companies to build professional communities. Connecting employees to online social networks is supposed to help them share information more quickly and effectively, to tackle business issues. Members of these online communities can have the same job or, on the contrary, different jobs but a common professional concern. In the latter case, creating a social medium (to help the employees interact) tends to merge with interesting them in a common concern. What, then, is the exact role of the medium: a mere instrument, or an embodiment of social relations? Based on the account of an ongoing web 2.0 project inside a large energy company, this paper discusses the “positive effects of social computing” on two levels: does a web 2.0 device provide any benefit to an existing social network? Does the implementation of such devices transform the way employees work together? As a participant observer in this project, I provide ethnographic data on its development, using Latour and Callon’s “actor-network” theory to explain the making of a new collective actor (Latour, 1991; Callon, 1986).

Quality of Working Life (QWL) is a corporate policy which has been developed since 2007 at EDF; it is meant to address urgent issues of working conditions and work-related mental health. It relies on a few basic tenets, such as the need for a decentralized approach (solutions are to be found inside each business unit) and the need to make different occupations work together on the improvement of working conditions (managers, human resources, doctors, union representatives, health and safety departments and so on). Before quality of working life became a corporate policy, some of these employees had been willingly “enrolled” in informal networks designed to discuss (sometimes confidentially) and deal with sensitive work-related issues. The process relies on the involvement of key actors in networks which cross the usual boundaries (between departments, fields of expertise, and positions inside the company’s hierarchy). However, these networks are not institutionalized. The QWL approach therefore turns out to be both strong and weak because of its non-prescriptive and practical nature, very similar in its effects to new public management devices such as benchmarking (Didier, 2010).

An online social network is currently being designed and implemented to help this new professional community share information and best practices, and capitalize on knowledge. It requires the involvement of both top management and future users, and raises questions, enthusiasm and/or doubts among the stakeholders. Observing agents of the “real” network coping with a new online instrument (albeit one designed to help them work together) reveals how complex the effects of social computing on collaborative work can be. These effects are neither mere consequences of the technical device’s specifications, nor strictly determined by organizational strategies (Crozier & Friedberg, 1977). They can be understood as a series of enrolments and counter-enrolments: a construction of interests among actors who do not have, once and for all, the same strategy and the same preferences (Callon & Law, 1982). Developing a specific social medium changes the way people work together on QWL, not on account of the medium itself, but because it is embedded in an existing social (and organizational) context which, in turn, shapes the medium.

The second question remains unanswered at this stage of the project’s evolution: do people working on the improvement of quality of working life really benefit from their new technical environment? Does it create any kind of “collective intelligence”? I intend to investigate this last question by observing the online interactions between users from May 2011 (when the medium will be opened) to June 2011.

REFERENCES

Callon Michel (1986), “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay.” p. 196-233 in Power, Action and Belief: A New Sociology of Knowledge, edited by John Law. London: Routledge & Kegan Paul.

Callon Michel & John Law (1982), “On interests and their transformation: enrolments and counter-enrolments”, Social Studies of Science, 12, p. 615-625.

Crozier Michel & Erhard Friedberg (1977), L’acteur et le système. Paris: Le Seuil.

Didier Emmanuel (2010), “Benchmarking. L’utilisation du chiffre dans la gestion de l’Etat”, Mouvements, 63, p. 155-161.

Latour Bruno (1991), Nous n’avons jamais été modernes. Essai d’anthropologie symétrique. Paris: La Découverte.

Potential and problems of ICT outsourcing from advanced nations to the poorest countries: the case of Nepal

AUTHOR
Michiko Matsushita

ABSTRACT

Outsourcing may be the last remaining method of cost cutting for advanced nations suffering from recession. In these countries, many large manufacturing companies have moved their production bases abroad, to foreign countries where the cost of land and labour is much cheaper. Some countries in Southeast Asia, in particular, play the role of cost centre for the advanced countries.

The same shift of production has recently occurred in the ICT field. Some large US computer companies began manufacturing PC kits in Taiwan (Chinese Taipei) in the 1980s. Nowadays almost all PC components sold in advanced nations are produced in China, Thailand, Malaysia, Indonesia, Vietnam and elsewhere. Although outsourcing has become popular in ICT hardware, outsourcing of software appears to lag behind. Outsourcing knowledge industries is very difficult because of the scarce supply of highly educated computer engineers. Only China and India have succeeded in this area: both have large populations and world-class institutions of higher education, and their labour costs were until recently very low. But as the capitalist economy has been introduced and has spread, personnel costs have risen. This situation reduces the benefit of outsourcing for the advanced nations.

The advanced nations must therefore search for the next countries able to supply cheaper, highly skilled engineers.

On the other hand, the poorest countries of the world need opportunities for economic growth. The conventional development route that Japan and South Korea followed after World War II takes too long to catch up with the advanced countries: because social change is now so fast, manufacturing technology quickly becomes obsolete. Today, even in the main cities of the poorest countries, people can use the Internet and mobile phones. We believe these ICT media represent a major opportunity to develop those countries. Business between advanced nations and the poorest countries holds great potential, but also presents some problems.

We researched ICT companies in Nepal in 2003 and 2004. Nepal maintains a good relationship with India, its neighbour: Nepali people can enter India almost freely, and young Nepalis from the upper classes study at Indian universities in English. In urban areas there are many Internet cafés, where citizens and foreign travellers use the Internet cheaply. Mobile phones are very popular among young people in the cities. PCs are available from India or Singapore, and many people buy PCs assembled in Nepal from imported parts.

In Kathmandu, the capital of Nepal, there are several large software companies which have done business with Japanese and US companies. A characteristic of the Nepali IT companies is labour-intensive work. One company digitised Japanese road maps, using old printed maps and aerial photographs, in order to update data for car navigation systems. Another digitised Japanese water-pipe maps from old blueprints. The managers of these companies speak Japanese, and one company employs a Japanese member of staff to read and input the Japanese characters on the maps. There was also a company which transcribed medical charts from the voice recordings of a hospital in the US: after business hours, the voice recordings of doctors’ examinations are sent over the Internet to the company, and during the daytime in Kathmandu the operators transcribe them and return the text before the hospital opens the next day.

Even as ICT levels rise, some amount of labour-intensive ICT work will remain, and the cheap labour force of the poorest countries is very important to the advanced countries. On the other hand, the laws of these poorest nations are immature: in Nepal there is neither law nor guideline for personal information protection or data retention. If companies in advanced nations outsource ICT work to the poorest countries in order to cut labour costs, their management costs might increase.

This risk of ICT work is much the same as that of cloud computing. Cloud computing is the next key technology for cutting the management costs of large computer networks: the actual processing is as vague as a cloud, so network users need not concern themselves with the concrete operation of the computer systems. Cloud systems help user companies cut ICT costs, on the assumption that the cloud service providers are responsible and trustworthy. But how do we know what is inside the cloud? One day a cloud data centre may be located in Nepal.

For the poorest countries such as Nepal, a complete framework of laws is very important in order to win international ICT outsourcing projects and achieve economic growth. On the other hand, advanced countries such as the US, the EU and Japan need new and safe cost centres in the ICT field.

The Ethics of the Hackers

AUTHOR
Giulia López

ABSTRACT

Nowadays, many software programmes are being developed, giving an ever-increasing range of options to all kinds of users. Yet many of these users are not fully aware that they are being spied on by hackers. Ethical hacking, one of the reactions to this situation, tries to increase security protection by identifying and closing existing security leaks in systems owned by third parties.

Indeed, ethical hacking is one of the fastest-growing areas in the information security sector. Information security would be an easy task if one only needed to install an antivirus programme and build a firewall to prevent attacks. In truth, however, information security must be approached on several levels. To do this, measures preventing the unauthorised use, modification, misuse or denial of facts, data, knowledge and/or capabilities should be adopted, and a proactive approach to managing risk should be taken.

Good ethical hackers must be aware of the methodology the original hacker has developed in terms of identification, host or target scanning, gaining and maintaining access, and clearing tracks. An ethical hacker must have a deep knowledge of the different methods and tools a true hacker can use, not merely a superficial knowledge of the methodology the hacker has followed.
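As one concrete illustration of the scanning phase mentioned above, a minimal TCP connect scan checks which ports on a host accept connections. This is a sketch for lawful testing of one’s own systems only; the helper name and parameters are illustrative assumptions:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    # TCP connect scan: a port is reported open if a full TCP
    # connection to it succeeds (connect_ex returns 0 on success).
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real intruder’s toolkit goes far beyond this (stealth scans, service fingerprinting, and so on), which is precisely why the ethical hacker needs more than a superficial acquaintance with such tools.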

A safe user must be aware of at least some of these methods and tools, since many hackers attack those people who do not have enough knowledge of the different methods that can be used to spy on a system. The software developer must also be aware of such methods and tools, so that he can close all weak points in his programme, which should never be discoverable, even with the use of new tools. Whenever new tools are developed, we can expect that hackers will also develop new hacking strategies, but at least we must make every effort to prevent our software from being attacked by such tools. This should be the main target of ethical hackers.

A successful ethical hacker should have the skills and knowledge of the professional intruder in order to imitate his intrusion, and should also have enough elements to minimise the weak aspects of his software targeted by the hacker. Ethical hacking means analysing the potential threat to a given system or network by imitating the potential actions the enemy may carry out. The skills and attitudes an ethical hacker must have are described in this paper, as well as the ways in which the ethical hacker can protect his customers by finding and closing any security gaps in software.

The paper gives a full description of the entire process of ethical hacking, starting from the different skills and knowledge a good ethical hacker must have. Nowadays security efforts need to plan for the unplanned and to anticipate intrusions before they take place. All companies should be aware of the benefits of hiring a professional and trusted ethical hacker. The security of the entire company may be in his hands. Preventing unwanted threats is not only good for all types of companies; it is a basic requirement for any contemporary corporation. The future is already here, and ethical hacking is part of it.

Social hackers encourage activity amongst online groups, and are willing to break social norms to do so. They are, by their nature, controversial. They break norms because they want to live in a network of conceptual meeting spaces. They imply that groups are, of themselves, degenerate. The group mind is for them a real fiction. They want to live as people-in-general and seek to evoke structure which supports that.

Social hackers are disruptive to the extent that they act according to their own culture rather than that of the group. Their proper excuse is that they might participate in the group as people-in-general. Hence, the proper ethic for social hacking is to foster structure that supports individual activity framed by the group purpose. This means contributing to the coevolution of that culture through genuine activity. Any semi-automated social hacking should be sensitive to the coevolution of the group culture.

Ethics is a most profound and intensely individual responsibility. It is quite a challenge to clarify such principles for ourselves, and then formulate them generally, so that we might apply them as a network of social hackers leveraging software tools to engage online groups. Social hacking is a behavior open to all. We social hackers must seek and find a universal ethic, a social protocol by which we conduct our open conspiracy.