Constructing the Older User in Home-Based Ubiquitous Computing

AUTHOR
L. Jean Camp and Kalpana Shankar

ABSTRACT

There is a growing body of technology studies literature on the mutual shaping of technology and the “user” (Oudshoorn and Pinch 2003) and on how the designer mediates that process. There is also a great deal of interest in the creative abilities of users of new technologies to shape, adapt, and resist the design and use of technology in all its phases. Individuals bring their previous experiences, concerns, and anxieties to the process of evaluating and adopting new technologies for their own use; designers, of course, do the same in the design process. Designer and user cultures can potentially clash (Forsythe 1996). In the realm of technology for aging, or gerontechnology, where (usually) young male designers are in the business of designing for an aging, often female population, more research is needed (Oudshoorn, Rommes, and Stienstra 2004).

In this paper, we build upon and extend the work of Oudshoorn, Pinch, and others in privileging use and users by turning our attention to the mutual construction of aging and technology and the processes by which older adult users frame, adopt, adapt, and resist pervasive technology in the home. We present results from a four-year study of in-home ubiquitous computing, or ubicomp, for aging in place. Beginning with an interest in creating privacy-sensitive technologies with an eye to end-user control of data, our research developed a suite of prototypes to enable information control of home-based ubicomp by older adults and their family/informal caregivers. After an initial series of focus group evaluations in which older adults examined and critiqued these prototypes, we altered, rejected, or stabilized them. Still emphasizing end-user control of privacy, we created a touch screen control panel that would give the end user the ability to examine, control, and block the transmission of presence, motion, and related data generated by the prototypes. Using three conditions (a suite of prototypes with the control panel, a suite of prototypes without the control panel, and a “control group” that received a smart phone and a paper calendar), we implemented an eight-week in-situ study of use in the homes of eight elders. We collected brief daily interviews, in-depth weekly interviews, and quantitative information on use and non-use of the prototypes and of the control panel through which the research participants could interact with and manage them.

Drawing upon results from these studies, we discuss the mutual shaping of aging and technology in two interrelated ways. First, we reflect upon and critically examine our own design assumptions and our construction of a framework of risk and privacy in home-based computing, and how this framework reflected and was shaped by our views of aging in the home and of the nature of privacy. To give a brief example, much of the research on technologies in the homes of elders focuses on detecting anomalies in activities of daily living (ADLs). Several of the prototypes were designed to give subtle indications of ADLs, depending on where they were placed in the home. While elders did not object to this use, they would much rather use the technology to detect an emergency, such as a fall. However, some of the elders who had themselves been informal caregivers were appreciative of being able to “see” whether someone had gotten up in the morning without having to phone every day. This highlights the possible differences, and tensions, between designers, older adults, and potentially informal caregivers when choosing technologies for aging in place.

In the second section, we explore how the focus groups and in-situ studies challenged our framings by revealing the ways in which the older adults worked around our technologies and how they perceived privacy. For example, several of the prototypes we developed were “bidirectional” paired technologies, in which the older adult would have a reciprocal view into the lives of the people (family members or friends) who had the paired technology. While some elders enjoyed the reciprocal nature of these prototypes, which could give them insights into their children’s lives, several were uncomfortable with asking their children to permit this. The elders felt that they might intrude. However, when probed further, they admitted that they liked the idea but would not ask for it. This suggests that there is a delicate balance of power and negotiation that must be navigated to make these prototypes useful.

Lastly, we explore how non-use and resistance were expressed, primarily in the in-situ studies. The users’ framings of privacy (and how they shifted over the course of the project), the language of the control panel, and the perceived utility or non-utility of the various prototypes proved to be important considerations. We also consider the role of the various caregivers who received the paired technologies and their potential role in shaping use and non-use. We conclude by discussing the contributions of these findings to designing for values.

REFERENCES

Forsythe, D.E. (1996). New bottles, old wine: hidden cultural assumptions in a computerized explanation system for migraine sufferers. Medical Anthropology Quarterly, 10(4), 551-574.

Oudshoorn, N., Rommes, E., and Stienstra, M. (2004). Configuring the user as everybody: gender and design cultures in information and communication technologies. Science, Technology, and Human Values, 29(1), 30-63.

Oudshoorn, N. and Pinch, T. (2003). How Users Matter: The Co-Construction of Users and Technologies. Cambridge, MA: The MIT Press.

FACE RECOGNITION: PRIVACY ISSUES AND ENHANCING TECHNIQUES

AUTHOR
Alberto Cammozzo

ABSTRACT

Face recognition techniques and use

Face detection is used to automatically detect or isolate faces from the rest of a picture and, for videos, to track a given face or person across the flow of video frames. These algorithms only spot a face in a photo or video. They may be used to enhance privacy, for instance by blurring the faces of passers-by in pictures taken in public (as Google Street View does). The activist app SecureSmartCam automatically obfuscates photos taken at protests to protect the identity of the protesters. Face detection is also used in digital signage (video billboards) to display targeted ads appropriate to the age, sex, or mood of the people watching. Billboards can also recognize returning visitors in order to engage them in interaction.
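
As a rough illustration of how such privacy-protective blurring can be assembled from off-the-shelf components (the detector choice, parameters, and file names below are assumptions made for this sketch, not a description of any system mentioned above), a face detector can locate face regions and a blur can then be applied to each of them:

# Sketch: detect faces and blur them for privacy (assumes OpenCV is installed;
# "street_scene.jpg" is a placeholder input file).
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; the parameters are typical defaults, not tuned values.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Replace each detected face region with a heavily blurred version.
    region = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

cv2.imwrite("street_scene_blurred.jpg", image)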

Face matching automatically compares a given face with other images in some archive and selects those in which the same person is present. This technology relies on several sophisticated biometric techniques to match any face, even in a video stream, against a database of already known faces. It is often used by surveillance services in courthouses, stadiums, malls, transport infrastructures, and airports, sometimes combined with iris scanning or tracking. Combined with the wealth of publicly available pictures from social networking, matching poses privacy issues: from a single picture it is possible to link together the images belonging to a single person. A face matching search engine drawing on Flickr, Picasa, YouTube, and social networks’ repositories is now entirely feasible, as demonstrated by prototype software and by products planned for release. The privacy issues are huge: indiscriminate face matching would allow anyone to match a picture taken with a cellphone against the wealth of pictures available online: a stalker’s paradise. The “creepiness” of such a service has been acknowledged by Google’s executive Eric Schmidt. False positives are also worrying: what happens if you are mistaken for a fugitive criminal by one of the many law-enforcement cameras, or enter a casino and are recognized as a problem gambler?
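
A minimal sketch of the matching step, assuming face embeddings (fixed-length feature vectors) have already been extracted by some recognition model; the gallery, probe, and threshold below are placeholders, and the cosine-similarity comparison stands in for the more sophisticated biometric techniques referred to above:

# Sketch: 1:N face matching by comparing a probe embedding against a gallery.
# The embedding vectors below are placeholders; in practice they would come
# from a face-recognition model applied to the images.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: person label -> embedding vector.
gallery = {
    "person_A": np.random.rand(128),
    "person_B": np.random.rand(128),
}

probe = np.random.rand(128)   # embedding of the face to be matched
threshold = 0.8               # illustrative decision threshold

scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])

if best_score >= threshold:
    print(f"Probable match: {best_name} (score {best_score:.2f})")
else:
    print("No match above threshold")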

Face identification links pictorial data with identity data in order to identify someone. Automatic identification requires that the matched face already be linked with identity data in a database. Manual identification happens either through voluntary enrollment or when someone else “tags” a face. By manually tagging someone, you make her subsequent identification possible. Facebook and Picasa already implement automatic face matching of tagged faces, with significant privacy consequences.

Identity verification automatically performs matching and identification on a face that has previously been identified. Certain computer operating systems allow biometric identity verification in place of traditional credentials. Some firms and schools use face recognition for their time-attendance systems. This poses serious threats to privacy if biometric identification data leaks out of the identification systems, since many systems are interoperable: standardized facial biometric “signatures” allow identification even without the actual pictures. It is conceivable to build a global biometric face recognition database.
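
For contrast with the 1:N matching sketched earlier, identity verification is a 1:1 decision: a probe face is compared only against the enrolled template of the claimed identity and accepted if it is close enough. The embeddings, distance metric, and threshold below are illustrative assumptions:

# Sketch: 1:1 identity verification (is this face the enrolled user?).
import numpy as np

def verify(probe_embedding, enrolled_template, max_distance=0.6):
    """Accept the claimed identity if the probe is close enough to the
    enrolled template; the metric and threshold are illustrative only."""
    distance = float(np.linalg.norm(probe_embedding - enrolled_template))
    return distance <= max_distance

enrolled_template = np.random.rand(128)   # stored at enrollment time
probe = np.random.rand(128)               # captured at login or clock-in

print("access granted" if verify(probe, enrolled_template) else "access denied")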

Privacy issues

Major privacy issues linked to pictorial data and face recognition can be summarized as follows:

(1) unintended use: data collected for some purpose and within a given scope is used for some other purpose in a different scope, for instance footage from surveillance cameras in malls being used for marketing purposes;

(2) data retention: the time for which pictures (or information derived from matched faces) are retained should be appropriate to the purpose for which they were collected, and any information has to be deleted once it expires; for instance, digital signage systems should retain data only for a very limited time span, while time-attendance or security systems have different needs in order to reach their intended goals (a minimal retention sketch follows this list);

(3) context leakage: images taken in some social context of life (affective, family, workplace, in public) should not leak outside that domain. Following this principle, images taken in public places or public events should never be matched without explicit consent, since the public social context assumes near anonymity, especially in political or religious gatherings;

(4) information asymmetry: pictorial data may be used without the explicit consent of the person depicted, or even without her knowledge that the information has been collected for some purpose. I may have no hint that pictures of me have been taken in public places and uploaded to repositories; as long as the pictures remain anonymous my privacy is largely preserved, but if face matching is applied, privacy contexts break down. Someone may easily hold information about me that I do not know myself.
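
A minimal sketch of the purpose-specific retention rule from point (2); the retention periods below are illustrative assumptions, not recommendations:

# Sketch: purpose-specific retention, deleting records once they expire.
from datetime import datetime, timedelta

RETENTION = {
    "digital_signage": timedelta(minutes=5),   # very limited time span
    "time_attendance": timedelta(days=90),
    "security": timedelta(days=30),
}

def purge_expired(records, now=None):
    """Keep only records still within the retention period for their purpose.
    Each record is a dict with 'purpose' and 'captured_at' keys."""
    now = now or datetime.utcnow()
    return [
        r for r in records
        if now - r["captured_at"] <= RETENTION.get(r["purpose"], timedelta(0))
    ]

records = [
    {"purpose": "digital_signage", "captured_at": datetime.utcnow() - timedelta(hours=1)},
    {"purpose": "security", "captured_at": datetime.utcnow() - timedelta(days=2)},
]
print(purge_expired(records))   # signage record dropped, security record kept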

Privacy enhancing techniques

Even though matching is the major threat, research on privacy enhancing techniques for face recognition concentrates on identification. Possible approaches to enhancing privacy include splitting the matching and identification tasks (Erkin et al. 2009), partial de-identification of faces (Newton, Sweeney, and Malin 2005), and revocation capability (Boult 2006), in order to reinforce people’s trust. Some attempts have been made to develop opt-out techniques to protect privacy in public places: temporarily blinding CCTV cameras, wearing a pixelated hood, or applying special camouflage make-up. These and other obfuscation techniques (Brunton and Nissenbaum 2011), like posting “wrong” faces online, aim at rebalancing the information asymmetry.

REFERENCES

T. Boult, “Robust Distance Measures for Face-Recognition Supporting Revocable Biometric Tokens,” in Automatic Face and Gesture Recognition, IEEE International Conference on (Los Alamitos, CA, USA: IEEE Computer Society, 2006), 560-566.

Finn Brunton and Helen Nissenbaum, “Vernacular resistance to data collection and analysis: A political theory of obfuscation,” First Monday, May 2, 2011

Zekeriya Erkin et al., “Privacy-Preserving Face Recognition,” in Privacy Enhancing Technologies, ed. Ian Goldberg and Mikhail J. Atallah, vol. 5672 (Springer Berlin Heidelberg, 2009), 235-253.

Elaine M. Newton, Latanya Sweeney, and Bradley Malin, “Preserving Privacy by De-Identifying Face Images,” IEEE Transactions on Knowledge and Data Engineering 17, no. 2 (2005): 232-243.

Harry Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation (Springer, 2007).

Autonomy and Privacy in the context of social networking

AUTHOR
William Bülow and Misse Wester

ABSTRACT

The ethical issues raised by new developments in information technology are often framed in terms of privacy (van den Hoven 2008; Rössler 2005; Nissenbaum 1998). Privacy is held to be an important value in western liberal democracies; other values, such as democratic rights, liberty, dignity, and autonomy, are fundamental to most people, and having a private sphere is a necessary condition for being able to exercise these rights. That is, individuals ought to be able to control information about themselves and how it is used in order to lead autonomous lives.

Due to new developments in information technology, a large amount of personal data is stored by different actors in society. While the phenomenon of collecting personal data is not new, two things in particular have changed in the past decade or so: first, more information is being collected than ever before, and second, information is not just stored but is subjected to some sort of analysis (Lyon, 2006). Information about individuals is collected as they act in the normal course of their public lives. Information is shared in transactions with retailers, mail order companies, and medical care. Moreover, everyone who uses the internet or pays with a credit card gives up his or her privacy on a daily basis (Rössler 2005). However, in social networks, where personal information is released voluntarily, questions of autonomy are more complex, as the concept of privacy takes on a different dimension. Social networks are voluntary in the sense that users choose to reveal information about themselves, but at the same time they enable other users to share personal information with an unintended audience. These issues will be discussed in this paper. We argue that this other dimension raises new kinds of ethical problems and dilemmas in relation to autonomy and privacy interests, especially when the concept of privacy is extended to younger generations.

In order to clarify the ethical aspects of developments in information technology, it is important to identify how different sorts of information stored about individuals relate to the issue of privacy. The protection of informational privacy is held to be important because it is an intrinsic part of our self-understanding as autonomous agents to have control over our own self-presentation and self-determination (Rössler 2005); that is, over how we want to present and stage ourselves, to whom, and in what context. By controlling others’ access to information about ourselves, we simultaneously regulate the range of very diverse relations within which we live our lives. The threat to informational privacy posed by prevailing and emerging ICT, then, consists in its potential to reduce individuals’ ability to control information about themselves. In the case of social media, however, individuals choose to share information about themselves in a very active way. For example, Facebook has over 500 million users who share personal information with other users (http://www.facebook.com/press/info.php?statistics; accessed March 1, 2011). In 2010, about 30% of users were between 14 and 25 years of age, and this group is very active in sharing all kinds of personal information. As information released on the Internet is difficult to regain control over, this younger group might share information now that will later be problematic for their personal integrity. How is the concept of privacy being used to protect future needs?

In Sweden, the Data Inspection Board (DIB) introduced stricter requirements in 2008 for public schools installing surveillance cameras intended to increase the safety of students. The DIB states that cameras can be used in schools at night and over weekends, when school is not in session, but that permission for all other usage must be subject to close scrutiny. The underlying reasoning of the DIB is that the integrity of young individuals must be strictly observed, since they are not able to foresee the consequences of compromising their integrity (DIB decision 2008-10-01). Combining this view, that younger generations need protection from consequences they cannot foresee, with the increased sharing of personal information on social networks: where does that lead? The reasoning of the DIB resembles a common discussion about autonomy found in the philosophical literature, namely the one concerning paternalism: the claim that it is sometimes justified to interfere with persons’ behaviour against their will, defended and motivated by the claim that the person will be better off protected from potential harm (http://plato.stanford.edu/entries/paternalism/). While paternalism can be justified in some contexts, it may be questioned whether one really can or should hinder students from using social networks. Facebook is an important part of the everyday experience of students and is a basic tool for, and a mirror of, social interaction, personal identity, and network building among students (Debatin et al. 2009). However, information shared on Facebook can sometimes conflict with the future preferences and privacy interests of the students. Information which a person openly shares at a certain time in his life might be information to which he later wants to control access.

Based on this sort of reasoning, we will address the following questions: do we have a particular obligation to protect the future privacy interests of students now using social networks? How can such interests be protected? And how are these claims compatible with the claim that students should be able to interact and willingly share information about themselves on social networks? Clearly these problems are important to address in relation to the widespread use of social networks.

REFERENCES

Debatin, B. Lovejoy, J. P., Horn, A-K., Hughes, B. N. (2009). Facebook and Online Privacy: Attitudes, Behaviours, and Unintended Consequences, Journal of Computer-Mediated Communication, 15, 83-108

Datainspektionen (DIB) decision 2008-10-01; http://www.datainspektionen.se/Documents/beslut/2009-10-02-Bromma_gymnasium.pdf, available in Swedish.

Dworkin, G., Paternalism, The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/sum2010/entries/paternalism/; accessed 2011-03-03.

Lyon, D. (2006), Surveillance, power and everyday life, Oxford Handbook of Information and Communication Technologies, Oxford University Press.

Nissenbaum, H. (1998), Protecting Privacy in an Information Age: The Problem of Privacy in Public, Law and Philosophy, 17, 559-596

Rössler, B. (2005), The Value of Privacy, Cambridge, Polity Press.

van den Hoven, J. (2008), Information Technology, Privacy and the Protection of Personal Data, in. van den Hoven, J and Weckert, J (eds). Information Technology and Moral Philosophy, Cambridge, Cambridge University Press

Wester, M. & Sandin, P. (2010), Privacy and the public – perception and acceptance of various applications of ICT, in Arias-Oliva, M., Ward Bynum, T., Rogerson, S., and Torres-Coronas, T. (eds.), The “backwards, forwards and sideways” changes of ICT, 11th International Conference on the Social and Ethical Impacts of Information and Communication Technology (ETHICOMP), pp. 580-586.

THE TRAJECTORY TO THE “TECHNOLOGICAL SINGULARITY”

AUTHOR
Casey Burkhardt

ABSTRACT

The idea of the technological singularity – the moment at which intelligence embedded in silicon surpasses human intelligence – is a matter of great interest and fascination. To the mind of a layperson, it is at once a source of wonder and apprehension. To those adept in the areas of technology and artificial intelligence, it is almost irresistibly attractive. On the other hand, it is an idea that rests on several assumptions about the nature of human intelligence that are problematic and have long been a subject of debate.

This paper discusses the major proposals, originating mainly in the artificial intelligence community, concerning the nature of the technological singularity, its inevitability, and the stages of progress toward the event itself. Attention is given to the problems raised by the concept of the singularity and the controversy that has surrounded the charting of milestones on the path to its realization.

Defining the Technological Singularity

The technological singularity is best defined as a point in time when a combination of computer hardware and artificial intelligence algorithms matches or exceeds the computational ability of the human brain. In defining this event, great emphasis is placed on the importance of advances in computational potential as well as in artificial intelligence and modeling techniques. It is proposed that such an event would have a staggering effect on humanity to an extent that is difficult, if not impossible, to predict. When this point has been reached, the concept of “recursive self-improvement” would allow technology to improve upon its own level of intelligence at a perpetually accelerated pace.
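
As a toy numerical illustration of what recursive self-improvement implies (the figures are arbitrary assumptions, not drawn from the literature discussed here): if each generation adds an increment that grows with the capability already attained, the curve does not merely rise but accelerates, which is the intuition behind the “perpetually accelerated pace”:

# Toy illustration of recursive self-improvement: each generation's gain is
# proportional to the square of its current capability, so growth accelerates.
# All numbers are arbitrary assumptions for illustration only.
capability = 1.0        # 1.0 = rough parity with human-level intelligence
improvement_rate = 0.1  # fraction of capability reinvested per generation

for generation in range(1, 11):
    capability += improvement_rate * capability ** 2   # improvement feeds on itself
    print(f"generation {generation:2d}: capability {capability:8.2f}")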

Difficulties in Pinpointing the Singularity and Its Milestones

One of the largest challenges in defining the technological singularity is that it is not an immediately measurable, instantaneous event. (For the purpose of this abstract, however, let us refer to the singularity as an event, even though estimates of its occurrence are always expressed in terms of an interval of time.) Advances in both hardware and software must be coordinated in a manner that allows artificial intelligence to supersede human intellect. Thus, identifying and measuring the events leading to this point is a nontrivial task. In a series of articles and books, Ray Kurzweil has made a multitude (147 at last count) of predictions that provide some guidance for measuring progress toward the technological singularity. Although most of these estimates do not consist of steps taken explicitly or directly toward the event, they define advancements that are side effects of technological milestones along the way.

The Hardware Problem

In order to reach the technological singularity, humanity must be capable of producing computer hardware that can match or exceed the computational power of the human brain. Many feel that progress in nanotechnology will pave the way for this outcome. There are several projections as to the number of computations per second and the amount of memory required to reach this computational ability. Moore’s Law is often invoked in reference to the timeline for development of processors with the necessary capabilities, and Kurzweil has made several bold statements suggesting that this law applies beyond the domain of integrated circuitry into the realm of artificial intelligence.
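
A back-of-the-envelope sketch of the kind of projection such arguments rest on; every figure below is an illustrative assumption (a commonly cited estimate of roughly 10^16 operations per second for the brain, a starting point of 10^13 operations per second for available hardware, and a two-year doubling period), not a claim from the works discussed here:

# Sketch of a Moore's-Law-style projection under stated assumptions.
import math

brain_ops_per_sec = 1e16       # assumed brain-scale compute estimate
current_ops_per_sec = 1e13     # assumed current hardware capability
doubling_period_years = 2.0    # assumed doubling period

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
years_needed = doublings_needed * doubling_period_years

print(f"About {doublings_needed:.1f} doublings, i.e. roughly {years_needed:.0f} "
      f"years, under these assumptions")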

The Software Problem

Computer software is also a limiting factor in the eventuality of the technological singularity. In order to achieve superhuman intelligence as conceived in the definition of the singularity, efficient software capable of modeling and emulating every element of the human brain must be constructed and must operate properly. Kurzweil claims that while this is a significant challenge, it will be completed within a reasonable period of time. This is a view with which Vernor Vinge disagrees, citing scalability problems within the field of software engineering. The compatibility of the projected software with the targeted advanced hardware is also a matter of concern.

Reconciling a Miscellany of Predictions

Predictions as to the timing and nature of the technological singularity have been made by Vernor Vinge, Nick Bostrom, Hans Moravec, and Ray Kurzweil. These are evaluated and their merits and deficiencies considered. Several of these predictive models use similar metrics in their attempts to formulate a target time period for the event. In this section, differences in the predicted trajectory that may result from small variances in base assumptions, related to time-biased inaccuracies, are discussed. Recalculating the predictions with the best current figures may provide a more consistent set of singularity timeframe estimates, or it may reveal fundamental inconsistencies in the assumptions on which these estimates are predicated.

Some Discrepant Views of the Singularity

The possibility of an event like the technological singularity rests on the assumption that all human intelligence is reducible to computing power and that humanity will learn enough about the function of the human mind to “build one” in silicon. This is a view with which many thinkers, including reputable computer scientists like Joseph Weizenbaum, have taken strenuous issue. Thus, in Computer Power and Human Reason, he asks, “What is it about the computer that has brought the view of man as machine to a new level of plausibility? … Ultimately a line dividing human and machine intelligence must be drawn. If there is no such line, then advocates of computerized psychotherapy may be merely heralds of an age in which man has finally been recognized as nothing but a clock-work.” This section explores Weizenbaum’s question through a review of the chronology, elements, and participants in this controversy.

Conclusion

There is an understandable tension between enthusiastic projections of the advance of artificial intelligence techniques and the sober recognition of real limitations in our current understanding of human intelligence. This highlights the importance of making ethical and responsible choices, and of exercising care, in formulating further predictions based on advances in this area of computing. It is underscored by Weizenbaum’s contention that, “The computer professional … has an enormously important responsibility to be modest in his claims.” Failure to do so in this particular area has the potential to generate unrealistic expectations not only within the field but also, through sensational treatment by the media, in the population as a whole.

REFERENCES

Bostrom, N. 1998. How Long Before Superintelligence? International Journal of Futures Studies, Vol. 2, http://www.nickbostrom.com/superintelligence.html.

Kurzweil, R. 2010. How My Predictions Are Faring. Kurzweil Accelerating Intelligence. http://www.kurzweilai.net/predictions/download.php.

Kurzweil, R. 2000. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Group, New York.

Kurzweil, R. 2006. The Singularity is Near: When Humans Transcend Biology. Penguin Group, New York.

Minsky, M. 1994. Will robots inherit the earth? Scientific American 271(4): 108-11.

Moravec, H. 1998. When Will Computer Hardware Match the Human Brain? Journal of Transhumanism, Vol. 1, http://www.transhumanist.com/volume1/moravec.htm.

Vinge, V. 1993. Technological singularity. VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, http://www.frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html.

Weizenbaum, J. 1976. Computer Power and Human Reason. W. H. Freeman and Company, San Francisco.

Weizenbaum, J. 1972. On the Impact of the Computer on Society: How does one insult a machine? Science, Vol. 176: 609-14

Bots, Agents, and Other Human Subjects: Ethical Research Online

AUTHOR
Elizabeth A. Buchanan, Ph.D. and Scott Dexter, Ph.D.

ABSTRACT

This paper investigates an emergent divide in research ethics discourses surrounding the concept of “human” subjects in new forms of computer science and Internet-based research. Using a cross-disciplinary approach, the authors seek to present novel ways of thinking through and solving the applied ethics challenges facing researchers in the computer and information sciences.

The history of research ethics is rooted in biomedical and behavioral models, and subsequently expanded to include social science and humanities-based models of research. Research ethics are codified, in general, in national legislation and, in particular, in disciplinary norms. Sometimes such extant regulations and disciplinary norms are out of sync, as in the case of the Oral History Association, which successfully argued to be excluded from the purview of formal regulatory ethics boards in the United States and elsewhere. However, ethics boards across the world are conforming to stricter utilitarian models (Buchanan, 2010), often risking individual rights and justice in their practices. This movement may be the result of a number of factors, most recognizably a more legalistic and litigious environment for researchers and institutions. But we argue that this movement towards a stricter utilitarianism is also the result of emergent forms of research which minimize the “human” in research; this movement is characteristic of a “research ethics 2.0” (Buchanan, 2009).

Part of the challenge in refining ethics standards to properly account for research conducted on or within a network is that the “raw material” of research tends to be viewed by the researcher as “data objects” rather than “human subjects.” In some projects, say an effort to develop new network protocols for optimal real-time delivery of video, the data being studied is probably not sensibly construed as being produced by a human (though even here, the IP addresses of participating computers may be recorded and may be linked to humans: is this an issue of ethical concern?). Other projects may focus on segments of a network which are commonly viewed as “social”; such research may examine, for example, how such “spaces” are structured (e.g., the topology of social networks), or the nature of the transactions and interactions which arise there. In all these cases, data which may be connected to a human subject may be easily obtainable and/or necessary for the conduct of the research. Or, in bot research, the evidence of a “human” subject is minimal at best, as more CS research distances the “subject” from the researcher. Instead, a bot or agent is seeking or scraping “data,” and risk seems minimal. Thus, an ethics board will weigh the benefits of the research more liberally, if at all, and often conclude that the research will be advantageous to more people than it could possibly hurt. This stance undermines the concept of the human in digital and virtual realms, minimizing the extent to which such automated research can affect an individual’s autonomy, privacy, consent, and basic rights.

The emergence of research ethics 2.0 challenges the long-standing process of research, questioning what Forte (2004) has described as scientific takers and native givers. Within the discourse of research ethics 2.0, the accepted principles of human subjects research are interrogated. Pressing questions such as those listed below must be discussed with disciplinary specificity but also with the goal of cross-disciplinary best practices:

  • What are public spaces online and what rights do researchers and researched have in such spaces?
  • How is confidentiality, if anonymity is no longer an option, assured in such venues as MUDs, MMORPGs, and other digital worlds?
  • Are “agents” humans?
  • Can a bot research another bot ethically?
  • How, and should, informed consent be obtained (Lawson, 2004; Peden & Flashinski, 2004)?
  • Is deception online a norm or a harm?
  • What are harms in an online environment?

And, ultimately, what are the ethical obligations of researchers conducting CS or Internet-enabled research, and how do they fit into or diverge from extant human subjects models? Are alternative ethics review models possible, especially in light of emergent models of research, and how should they be constituted?

By examining specific cases of CS and Internet-based research, this paper aims to have a broad impact on applied ethics across disciplinary boundaries, on the real-world practices of researchers from a variety of disciplines, and on the practices and policies of ethics boards seeking to ensure human subjects protections in novel environments and research contexts.

Ethical Issues of Social Computing in Corporate Cloud Services

AUTHOR
Engin Bozdag and Jeroen van den Hoven

ABSTRACT

Almost 50 years ago, individual users at terminals communicated over telephone lines with a central hub where all the computing was done. The shift back to this model is currently under way. Data and programs are being swept up from desktop PCs and corporate server rooms and installed in “the computer cloud”. When you create a spreadsheet with Google Docs, major components of the software reside on unseen computers, whereabouts unknown, possibly scattered across continents (Hayes, 2008). This paradigm, known as Cloud Computing, allows users to “outsource” their data processing needs to a third party (Jaeger et al., 2008). Thus, the computing world is rapidly transforming towards developing software for millions to consume as a service, rather than to run on individual computers (see Buyya et al., 2009). Cloud computing changes the way software is designed, and it is becoming ubiquitous. Major IT providers such as Google, Microsoft, Sun, and IBM are all offering cloud services, and Intel has launched a vision for the next decade. Government agencies have recently started to use cloud applications, and it is expected that a significant part of all financial, economic, and logistic transactions will be performed (semi-)automatically “in the cloud” (Buyya et al., 2009).

Cloud Computing changes the way software is defined, developed, marketed, sold and used. In Cloud Computing, the computer is valued as a gateway to computing services and resources in distant places. It is no longer what is on your desk-(top) or on the server in the basement of your office that counts, but rather which services, facilities and resources you have access to. The software is no longer a digital commodity that you install on your local machine, but rather a service offered by providers that you access and share with other users.

The internet and the web have become full-fledged social environments; they facilitate and enhance known and traditional social phenomena by means of social software (e.g. social networking sites and collaborative software). In this way, a mesh of computer networks, social software, and interacting human persons has come into being. Most cloud services not only allow the user to store and process data on remote servers, but also allow the user to share this data. This leads to an intertwining of cloud services and social computing.

While social phenomena such as crowdsourcing, large-scale social computing projects, and cooperative computing are receiving attention, another form of social computing is emerging in cloud services that store large amounts of user data. Search engines and other social cloud services such as Facebook cannot rely solely on algorithms to analyze and modify this data, since algorithms may perform imperfectly or may work with bad data generated by spammers, abusers, and non-complying users. These services also use human reviewers to analyze, understand, filter, remove, modify, add, sort, and categorize the data where necessary. They use the analytic capacity of humans to make judgments when an algorithm cannot decide. For instance, a website scoring high in the search engine’s results could be engaged in bad practices, such as spam or violation of the search engine’s terms of service. This violation might go undetected by the algorithm but can be detected by a human controller, because of her epistemic and moral capabilities. However, the website in question may be a source of knowledge valuable to many people, creating a moral dilemma for the human controller, who is charged with supporting or supplementing the algorithms. If the human agent’s decision on the website is left fully to her own discretion, this may lead to biases in the search engine results, and the computation will differ per human agent, a problem that unassisted algorithms do not have.
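
A schematic sketch of this human-in-the-loop arrangement (all names, scores, and thresholds are illustrative assumptions, not a description of any provider’s actual system): the algorithm decides the clear-cut cases, uncertain cases are escalated to a human reviewer, and the reviewer’s choices are constrained to a fixed, policy-defined set of actions, which is one way of limiting the individual discretion discussed below.

# Sketch: route low-confidence algorithmic decisions to a human reviewer,
# whose actions are restricted to a policy-defined vocabulary.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"keep", "demote", "remove"}   # policy-defined vocabulary

@dataclass
class Item:
    item_id: str
    spam_score: float        # produced by some automated classifier

def route(item, low=0.2, high=0.8):
    """Decide automatically when confident; otherwise escalate to a human."""
    if item.spam_score >= high:
        return ("remove", "automatic")
    if item.spam_score <= low:
        return ("keep", "automatic")
    return (None, "needs_human_review")

def record_human_decision(item, action, reviewer_id):
    """Accept only policy-permitted actions; log the decision for auditing."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"{action!r} is not permitted by the review policy")
    return {"item": item.item_id, "action": action, "reviewer": reviewer_id}

item = Item("site-42", spam_score=0.55)
decision, path = route(item)
if path == "needs_human_review":
    print(record_human_decision(item, "demote", "reviewer-7"))
else:
    print(decision, path)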

In this paper, we discuss the risks involved in using people as processing units, much like computer processes or subroutines, to provide a cloud service. We argue that, while designing a social computing application, a cloud service provider should have clear policies to minimize this individual judgment, without compromising the functionality it adds to the overall performance of the social computing application.

REFERENCES

[Buyya et al., 2009] Buyya, R., Yeo, C., Venugopal, S., Broberg, J., and Brandic, I. (2009). Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6):599–616.

[Hayes, 2008] Hayes, B. (2008). Cloud Computing. Commun. ACM, 51(7):9–11.

[Jaeger et al., 2008] Jaeger, P., Lin, J., and Grimes, J. (2008). Cloud computing and information policy: Computing in a policy cloud? Journal of Information Technology & Politics, 5(3):269–283.