“internet” = “intimate white intranet”: The ethics of online sexual racism

AUTHOR
Nathaniel Adam Tobias Coleman

ABSTRACT

Sexual racism is a form of social segregation on the basis of race. Like all forms of social segregation, sexual racism has two faces: that of exclusion (= spatial segregation) and that of exploitation (= role segregation). On the one hand, sexual racism is manifested in the race-based denial of sexual affirmation or activity; on the other, sexual racism is manifested in the offer of sexual affirmation or activity, but only on racially subordinating terms.

Since social scientific analysis of data from websites that facilitate searches for sexual partners has concluded that black heterosexual women and black homosexual men who identify as bottom are the least sought-after online, I focus on the sexual racism that is perpetrated against members of these two social groups, and on the distinctive moral wrongs committed and the distinctive moral harms that obtain precisely because this sexual racism is perpetrated online.

When perpetrated against black heterosexual women, online sexual racism has the following content. On the one hand, it is manifested (a) in white heterosexual men’s reluctance to affirm black heterosexual women’s sexual appeal and (b) in white heterosexual men’s reluctance to engage in sexual activity with black heterosexual women. On the other hand, it is manifested (a) in white heterosexual men’s eagerness to affirm the sexual appeal only of black heterosexual women who have phenotypically whiter traits (in addition to their phenotypically blacker traits), and (b) in white heterosexual men’s eagerness to engage in sexual activity only covertly and only with hypersexualised, hyperaccessible, and superdisposable black heterosexual women.

When perpetrated against black homosexual men, online sexual racism has the following content. On the one hand, it is manifested (a) in white homosexual men’s reluctance to affirm black homosexual men’s sexual appeal and (b) in white homosexual men’s reluctance to engage in sexual activity with black homosexual men. On the other hand, it is manifested in white homosexual men’s eagerness to affirm the sexual appeal only of, and to engage in sexual activity only with, black male bodies that are (i) all brawn, (ii) with no brains, and (iii) not bottom. In other words, the black male body must conform to three criteria in order to qualify as a specimen of black male sexual attractiveness: (i) it must have the wherewithal to fuck furiously, (ii) it must not be distracted from fucking furiously, and (iii) it must fuck furiously.

Taken together, I argue that these two plights, of the black heterosexual female and the black homosexual male, constitute the morally wrongful white male online ambivalence to black femininity. In order to get access to the good of sexual affirmation and activity, black homosexual men and black heterosexual women are required to ‘sign up to’, or to ‘play along with’, the racially subordinating terms of white male sexual attraction to them. Where they can and do achieve this, it is morally harmful (a) because it inhibits their exercise of the freedom of sexual self-definition (a freedom that white males exercise online without constraint), (b) because it lends credibility to the racially subordinating terms, by evincing that blacks are accurately represented in those terms and that blacks enjoy, or at least are comfortable with, being represented in those racially subordinating terms, and (c) because it involves the participation of the black person in her or his own oppression, and in the oppression of blacks generally. By contrast, where blacks will not, cannot, or simply do not play along with the racially subordinating terms of white male attraction to them, there is an imbalance of power in any interracial sexual interaction those blacks enter, rendering them vulnerable: they may, for instance, be more willing to engage in sexual activity that may prove detrimental to their health.

For its part, the internet exacerbates this moral wrong in three unique, and hitherto little discussed, ways. First, the solitary nature of searching for sex online deprives those subject to a relentless barrage of exclusionary attitudes of the most basic mechanism for coping with racism: the ears and the embraces of empathetic others. Insofar as solidarity and mutual support among the excluded is much more available in offline spaces where people search for sex, the internet renders searching for sex uniquely harmful to the victim of sexual racism.

Second, because, online, entry to interpersonal interaction is anonymous and unilateral exit from it is easy, people can, and do, express their exclusionary views with greater candour and greater vehemence than they might in a face-to-face, or otherwise personalised, encounter. This increase in candour and vehemence leads advertisers and searchers to forget the moral importance of how things seem to others, especially the way in which impoliteness can amount to moral disrespect.

Third, the emphasis that is placed, both by website designers (who invite advertisers to specify their race, and the race of those they are willing to meet for sexual interaction, from a drop-down menu of conventional racial groupings) and by advertisers (who use text in their advertisements and private messages to express exclusionary racial preferences), on the physical body that sits behind the computer screen, increases the salience and significance of bodily capital in society. Bodily capital is the degree to which the body that a person inhabits corresponds with whatever ideals of beauty dominate in society.

As it becomes more acceptable to demand ever-greater concentrations of bodily capital from the persons whom we deign to encounter and with whom we deign to enter into intimate interpersonal interaction, companionate capital, something that no one person can accumulate by herself, but which must rather be realised in the interpersonal activity of jointly deliberating about, jointly agreeing on, and jointly executing shared goals over a significant period of time, ceases to be valued and ceases to be produced.

This is of great moral concern, since (a) companionship (quite independent of any concomitant sexual pleasure derived from the body of one’s companion) is necessary for having self-esteem, and thus for the pursuit of a conception of the good, and so for human flourishing, and (b) inter-group companionship between members of a group subordinate and stigmatised in society and members of a group dominant in that society is necessary for the complete destigmatisation of the subordinate and stigmatised social group.

REFERENCES

Anderson, Elizabeth S. 2010. The imperative of integration. Princeton, NJ: Princeton University Press.

Daniels, Jessie. 2009. Cyber racism: White supremacy online and the new attack on civil rights. Lanham, MD: Rowman and Littlefield.

Holt, Thomas J., & Kristie R. Blevins. 2007. Examining sex work from the client’s perspective: assessing johns using on-line data. Deviant Behavior 28:333-354.

Feliciano, Cynthia, Belinda Robnett, & Golnaz Komaie. 2009. Gendered racial exclusion among white internet daters. Social Science Research 38:39–54.

Green, Adam Isaiah. 2008. The social organization of desire: The sexual fields approach. Sociological Theory 26(1):25-50.

Grekin, Elly. 2009. Ethnic identity in an online world. MA thesis, Ohio State University.

Hakim, Catherine. 2010. Erotic capital. European Sociological Review 26(5):499–518.

Hitsch, Günter J., Ali Hortaçsu, & Dan Ariely. 2010. What makes you click?: Mate preferences in online dating. Quantitative Marketing and Economics 8(4): 393-427.

Hughes, Donna M. 2005. Race and prostitution in the United States. Available here: http://www.uri.edu/artsci/wms/hughes/pubtrfrep.htm.

Kang, Jerry. 2000. Cyber-race. Harvard Law Review 113:1130-1208.

Kudler, Benjamin Aaron. 2007. Confronting race and racism: Social identity in African American gay men. MA diss. Smith College School for Social Work, Northampton, Massachusetts.

Levitt, Steven D., & Stephen J. Dubner. 2005. Freakonomics: A rogue economist explores the hidden side of everything. William Morrow.

McKeown, Eamonn, Simon Nelson, Jane Anderson, Nicola Low, & Jonathan Elford. 2010. Disclosure, discrimination and desire: Experiences of Black and South Asian gay men in Britain. Culture, Health & Sexuality 12(7):843–885.

Nakamura, Lisa. 2002. Cybertypes: Race, ethnicity, and identity on the internet. Routledge.

Paul, Jay P, George Ayala, & Kyung-Hee Choi. 2010. Internet sex ads for MSM and partner selection criteria: The potency of race/ethnicity online. Journal of Sex Research. 47(6): 528–538.

Paulsen, Ronald. 2010. The practice of bodily differentiation: Overweight and internet dating on the market of intimacy. Sociologisk forskning. 41(1):5-28.

Phua, Voon Chin, & Gayle Kaufman. 2003. The crossroads of race and sexuality: Date selection among men in internet “personal” ads. Journal of Family Issues 24:981-994.

Plummer, Mary Dianne. 2007. Sexual racism in gay communities: Negotiating the ethnosexual marketplace. PhD dissertation, University of Washington.

Putnam, Robert D. 2000. Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.

Robinson, Russell K. 2008. Structural dimensions of romantic preferences. Fordham Law Review 76:2787-2819.

Robnett, Belinda, & Cynthia Feliciano. forthcoming. Patterns of racial-ethnic exclusion by internet daters. Social Forces.

Rudder, Christian. 2009a. How your race affects the messages you get. oktrends: Dating research from OkCupid. Available here: http://blog.okcupid.com/index.php/your-race-affects-whether-people-write-you-back/.

———. 2009b. Same-sex data for race vs. reply rates. oktrends: Dating research from OkCupid. Available here: http://blog.okcupid.com/index.php/same-sex-data-race-reply/.

———. 2009c. How races and religions match in online dating. oktrends: Dating research from OkCupid. Available here: http://blog.okcupid.com/index.php/how-races-and-religions-match-in-online-dating/#match-discussion.

Shrage, Laurie. 1992. Is sexual desire raced?: The social meaning of interracial prostitution. Journal of Social Philosophy, 23: 42–51.

Sweeney, Kathryn A., & Anne L. Borden. 2009. Crossing the line online: Racial preference of internet daters. Marriage & Family Review 45:740–760.

Yancey, George. 2009. Crossracial differences in the racial preferences of potential dating partners. The Sociological Quarterly 50:121–143.

Constructing the Older User in Home-Based Ubiquitous Computing

AUTHOR
L. Jean Camp and Kalpana Shankar

ABSTRACT

There is a growing body of technology studies literature on the mutual shaping of technology and the “user” (Oudshoorn and Pinch 2003) and how the designer mediates that process. Also, there is a great deal of interest in the creative abilities of users of new technologies to shape, adapt, and resist the design and use of technology in all its phases. Individuals bring their previous experiences, concerns, and anxieties to the process of evaluating and adopting new technologies for their own use; designers, of course, do the same to the design process. Designer and user cultures can potentially clash (Forsythe 1996). In the realm of technology for aging, or gerontology, where the (usually) young male designers are in the business of designing for an aging, often female population, more research is needed (Oudshoorn, Rommes, and Stienstra 2004).

In this paper, we build upon and extend the work of Oudshoorn, Pinch, and others in privileging use and users by turning our attention to the mutual construction of aging and technology and the processes by which older adult users frame, adopt, adapt, and resist pervasive technology in the home. We present results from a four-year study on in-home ubiquitous computing, or ubicomp, for aging in place. Beginning with an interest in creating privacy-sensitive technologies with an eye to end-user control of data, our research developed a suite of prototypes to enable information control of home-based ubicomp by older adults and their family/informal caregivers. After an initial series of focus group evaluations with older adults in which they examined and critiqued these prototypes, we altered, rejected, or stabilized them. Still emphasizing end-user control of privacy, we created a touch screen control panel that would give the end user the ability to examine, control, and block the transmission of presence, motion, and related data generated by the prototypes. With three conditions (a suite of prototypes with the control panel, a suite of prototypes without the control panel, and a “control group” that received a smart phone and a paper calendar), we implemented an eight-week in-situ study of use in the homes of eight elders. We collected brief daily interviews, in-depth weekly interviews, and quantitative information on use and non-use of the prototypes and of the control panel through which the research participants could interact with and manage the prototypes.

Drawing upon results from these studies, we discuss the mutual shaping of aging and technology in two interrelated ways. First, we reflect upon and critically examine our own design assumptions and our construction of a framework of risk and privacy in home-based computing, and how this framework reflected and was shaped by our views of aging in the home and the nature of privacy. To give a brief example, much of the research on technologies in the homes of elders focuses on detecting anomalies in activities of daily living (ADLs). Several of the prototypes were designed to give subtle indications of ADLs, depending on where they were placed in the home. While elders did not object to this use, they would much rather use the technology to detect an emergency, such as a fall. However, some of the elders who had themselves been informal caregivers were appreciative of being able to “see” whether someone had gotten up in the morning without having to phone every day. This highlights the possible differences, and tensions, between designers, older adults, and potentially informal caregivers when choosing technologies for aging in place.

In the second section, we explore how the focus groups and in-situ studies challenged our framings by revealing the ways in which the older adults worked around our technologies and how they perceived privacy. For example, several of the prototypes we developed were “bidirectional” paired technologies, where the older adult would have a reciprocal view into the lives of the people (family members or friends) who had the paired technology. While some elders enjoyed the reciprocal nature of these prototypes, which could give them insights into their children’s lives, several were uncomfortable with asking their children to permit this. The elders felt that they might intrude. However, when probed further, they admitted that, while they liked the idea, they would not ask about it. This suggests that there is a delicate balance of power and negotiation that must be navigated to make these prototypes useful.

Lastly, we explore how non-use and resistance were expressed, primarily in the in-situ studies. The users’ framings of privacy (and how they shifted over the course of the project), the language of the control panel, and the perceived utility or non-utility of the various prototypes proved to be important considerations. We also consider the role of the various caregivers who received the paired technologies and their potential role in shaping use and non-use. We conclude by discussing the contributions of these findings to designing for values.

REFERENCES

Forsythe, D.E. (1996). New bottles, old wine: hidden cultural assumptions in a computerized explanation system for migraine sufferers. Medical Anthropology Quarterly, 10,4, 551-574.

Oudshoorn, N., Rommes, E., and Stienstra, M. (2004). Configuring the user as everybody: gender and design cultures in information and communication technologies. Science, Technology, and Human Values, 29, 1, 30-63.

Oudshoorn, N. and Pinch, T. (2003). How Users Matter: The Co-Construction of Users and Technologies. Cambridge, MA: The MIT Press.

FACE RECOGNITION: PRIVACY ISSUES AND ENHANCING TECHNIQUES

AUTHOR
Alberto Cammozzo

ABSTRACT

Face recognition techniques and use

Face detection is used to automatically detect or isolate faces from the rest of the picture and, for videos, to track a given face or person in the flow of video frames. These algorithms only spot a face in a photo or video. They may be used to enhance privacy, for instance by blurring the faces of passers-by in pictures taken in public (as Google Street View does). The activist app SecureSmartCam automatically obfuscates photos taken at protests to protect the identity of the protestors. Face detection is also used in digital signage (video billboards) to display targeted ads appropriate to the age, sex or mood of the people watching. Billboards can also recognize returning visitors in order to engage them in interaction.
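To make this privacy-enhancing use of detection concrete, here is a minimal sketch (my illustration, not a system described in this abstract) that finds faces with OpenCV’s bundled Haar-cascade detector and blurs them before a picture is shared; the file names and detector parameters are illustrative assumptions.

    # Minimal sketch: detect faces and blur them before publication.
    # Assumes the opencv-python package; "street.jpg" is an illustrative file name.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("street.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Detect faces; scaleFactor and minNeighbors are typical illustrative values.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        # Heavy Gaussian blur makes the face unrecognizable while keeping the scene.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

    cv2.imwrite("street_blurred.jpg", image)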

Face matching automatically compares a given face with other images in some archive and selects those where the same person is present. This technology is based on several sophisticated biometric techniques that can match a face, even in a video stream, against a database of already known faces. It is often used by surveillance services in courthouses, stadiums, malls, transport infrastructures or airports, sometimes combined with iris scanning or tracking. Combined with the wealth of publicly available pictures from social networking, matching poses privacy issues: from a single picture it is possible to link together images belonging to a single person. A face matching search engine using Flickr, Picasa, YouTube and social networks’ repositories is now entirely feasible, as demonstrated by prototype software and products planned for release. The privacy issues are huge: indiscriminate face matching would allow anyone to match a picture taken with a cellphone against the wealth of pictures they can find online: a stalker’s paradise. The “creepiness” of such a service has been acknowledged by Google executive Eric Schmidt. False positives are also worrying: what happens if you are mistaken for a fugitive criminal by one of the many law-enforcement cameras, or enter a casino and are recognized as a problem gambler?
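As a rough sketch of what the matching step amounts to (under the assumption, common to such systems, that every face has already been reduced to a fixed-length biometric feature vector by some embedding model), matching a query face against an archive is essentially a nearest-neighbour search with a similarity threshold; the vectors and the 0.8 cut-off below are made up purely for illustration.

    # Sketch of face matching as nearest-neighbour search over feature vectors.
    # The vectors would come from a biometric embedding model (hypothetical here);
    # this only shows the comparison step that links photos of the same person.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match(query_vector, archive, threshold=0.8):
        """Return identifiers of archive photos whose vectors resemble the query.

        archive: dict mapping a photo identifier to its face feature vector.
        threshold: illustrative cut-off; real systems tune it to trade
        false positives against false negatives.
        """
        hits = []
        for photo_id, vector in archive.items():
            if cosine_similarity(query_vector, vector) >= threshold:
                hits.append(photo_id)
        return hits

    # Toy usage with made-up 4-dimensional "signatures".
    archive = {
        "beach_2009.jpg": np.array([0.9, 0.1, 0.3, 0.2]),
        "protest_2011.jpg": np.array([0.88, 0.12, 0.28, 0.22]),
        "stranger.jpg": np.array([0.1, 0.9, 0.7, 0.4]),
    }
    query = np.array([0.91, 0.09, 0.31, 0.19])
    print(match(query, archive))  # links the first two photos to the same person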

Face identification makes it possible to identify someone by linking pictorial data with identity data. Automatic identification requires that the matched face is already linked with identity data in a database. Manual identification happens either through voluntary enrollment or when someone else “tags” a picture. By manually tagging someone, you make her subsequent identification possible. Facebook and Picasa already implement automatic face matching of tagged faces, with significant privacy consequences.

Identity verification allows matching and identification to be performed automatically on a face that has previously been identified. Certain computer operating systems allow biometric identity verification in place of traditional credentials. Some firms and schools use face recognition for their time attendance systems. This poses serious threats to privacy if biometric identification data leaks out of the identification systems, since many systems are interoperable: standardized facial biometric “signatures” allow identification even without actual pictures. It is conceivable to plan a global biometric face recognition database.

Privacy issues

Major privacy issues linked to pictorial data and face recognition can be summarized as follows:

(1) unintended use: data collected for some purpose and in a given scope is used for some other purpose in a different scope, for instance surveillance cameras in malls used for marketing purposes;

(2) data retention: the time for which pictures (or information coming from matched faces) are retained should be appropriate for the purpose for which they are collected, and any information has to be deleted when it expires. For instance, digital signage systems should have a very limited retention time-span, while time attendance systems or security systems have different needs in order to reach their intended goals;

(3) context leakage: images taken in some social context of life (affective, family, workplace, in public) should not leak outside that domain. Following this principle, images taken in public places or public events should never be matched without explicit consent, since the public social context assumes near anonymity, especially in political or religious gatherings;

(4) information asymmetry: pictorial data may be used without explicit consent of the person depicted, or even without the knowledge that that information has been collected for some purpose. I may have no hint that there are pictures of me taken in public places and uploaded in repositories; as long as pictures remain anonymous my privacy is quite preserved, but if face matching is applied, this breaks privacy contexts. Someone may easily hold information about me I do not know myself.

Privacy enhancing techniques

Even if matching is the major threat, research on privacy enhancing techniques for face recognition concentrates on identification. Possible approaches to enhancing privacy include splitting the matching and identification tasks [Erkin et al., 2009], partial de-identification of faces [Newton, Sweeney, and Malin, 2005] and revocation capability [Boult, 2006], in order to reinforce people’s trust. Some attempts have been made to develop opt-out techniques to protect privacy in public places: temporarily blinding CCTV cameras, wearing a pixelated hood or special camouflage make-up. These and other obfuscation techniques [Brunton and Nissenbaum, 2011], like posting “wrong” faces online, aim at re-balancing the information asymmetry.
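By way of illustration only, the naive idea behind such obfuscation and de-identification can be sketched as coarse pixelation of a face region; actual de-identification schemes such as k-Same [Newton, Sweeney, and Malin, 2005] are considerably more sophisticated, and the block size below is an arbitrary assumption.

    # Sketch of a simple obfuscation step: pixelate an image region so that a
    # face is no longer matchable at full detail. Real de-identification schemes
    # give formal guarantees; this shows only the naive idea.
    import numpy as np

    def pixelate(region, block=16):
        """Replace each block x block tile with its mean colour."""
        h, w = region.shape[:2]
        out = region.copy()
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = region[y:y + block, x:x + block]
                out[y:y + block, x:x + block] = tile.mean(axis=(0, 1))
        return out

    # Toy usage on a random "image"; in practice the region would come from a
    # face detector, as in the detection sketch above.
    image = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
    image[32:96, 32:96] = pixelate(image[32:96, 32:96])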

REFERENCES

T. Boult, “Robust Distance Measures for Face-Recognition Supporting Revocable Biometric Tokens,” in Automatic Face and Gesture Recognition, IEEE International Conference on (Los Alamitos, CA, USA: IEEE Computer Society, 2006), 560-566.

Finn Brunton and Helen Nissenbaum, “Vernacular resistance to data collection and analysis: A political theory of obfuscation,” First Monday, May 2, 2011.

Zekeriya Erkin et al., “Privacy-Preserving Face Recognition,” in Privacy Enhancing Technologies, ed. Ian Goldberg and Mikhail J. Atallah, vol. 5672 (Springer Berlin Heidelberg, 2009), 235-253.

Elaine M. Newton, Latanya Sweeney, and Bradley Malin, “Preserving Privacy by De-Identifying Face Images,” IEEE Transactions on Knowledge and Data Engineering 17, no. 2 (2005): 232-243.

Harry Wechsler, Reliable face recognition methods: system design, implementation and evaluation(Springer, 2007).

Autonomy and Privacy in the context of social networking

AUTHOR
William Bülow and Misse Wester

ABSTRACT

The ethical issues raised by new developments in information technology are often framed in terms of privacy (Van den Hoven 2008; Rössler 2005; Nissenbaum 1998). Privacy is held to be an important value in western liberal democracies; other values, such as democratic rights, liberty, dignity and autonomy, are fundamental to most people, and having a private sphere is a necessary condition for being able to exercise these rights. That is, individuals ought to be able to control information about themselves and how it is being used in order to lead autonomous lives.

Due to new developments in information technology, a large amount of personal data is stored by different actors in society. While the phenomenon of collecting personal data is not new, two main things have changed in the past decade or so: first, more information is being collected than ever before and, secondly, information is not just stored but is subjected to some sort of analysis (Lyon, 2006). Information about individuals is collected as they act in the normal course of their public lives. Information is shared in transactions with retailers, mail order companies and medical care. Moreover, everyone who uses the internet or pays with a credit card gives up some of his or her privacy on a daily basis (Rössler 2005). However, in social networks, where personal information is released voluntarily, questions of autonomy are more complex, as the concept of privacy takes on a different dimension. Social networks are voluntary in the sense that users choose to reveal information about themselves, but at the same time they enable other users to share personal information with an unintended audience. These issues will be discussed in this paper. We argue that this other dimension raises new kinds of ethical problems and dilemmas in relation to autonomy and privacy interests, especially when the concept of privacy is extended to younger generations.

In order to clarify the ethical aspects of developments in information technology, it is important to identify how different sorts of information stored about individuals relate to the issue of privacy. The protection of informational privacy is held to be important because it is an intrinsic part of our self-understanding as autonomous agents to have control over our own self-presentation and self-determination (Rössler 2005). That is, how we want to present and stage ourselves, to whom and in what context. By controlling others’ access to information about ourselves, we are simultaneously regulating the range of very diverse relations within which we live our lives. The threat to informational privacy posed by prevailing and emerging ICT, then, consists in its potential to reduce individuals’ ability to control information about themselves. In the case of social media, however, individuals choose to share information about themselves in a very active way. For example, Facebook has over 500 million users that share personal information with other users (http://www.facebook.com/press/info.php?statistics; accessed on March 1st, 2011). In 2010 about 30% of users were between 14 and 25 years of age, and this group is very active in sharing all kinds of personal information. As information released on the Internet is difficult to regain control over, this younger group might share information now that will later be problematic for their personal integrity. How is the concept of privacy being used to protect future needs?

In Sweden, the Data Inspection Board (DIB) introduced stricter demands in 2008 on public schools installing surveillance cameras to increase the safety of their students. The DIB states that cameras can be used in schools at night and over weekends, when school is not in session, but permission for all other usage must be subject to close scrutiny. The underlying reasoning of the DIB is that the integrity of young individuals must be strictly observed, since they are not able to foresee the consequences of compromising their integrity (DIB decision 2008-10-01). Combining this view, that younger generations need protection from consequences they cannot foresee, with the increased sharing of personal information on social networks, where does that lead? The reasoning of the DIB resembles a common discussion about autonomy found in the philosophical literature: the one concerning paternalism. That is, the claim that it is sometimes justified to interfere with persons’ behaviour against their will, defended and motivated by the claim that the person will be better off protected from potential harm (http://plato.stanford.edu/entries/paternalism/). While paternalism can be justified in some contexts, it may be questioned whether one really can or should hinder students from using social networks. Facebook is an important part of the everyday experience of students and is a basic tool for, and a mirror of, social interaction, personal identity, and network building among students (Debatin et al. 2009). However, information shared on Facebook can sometimes conflict with the future preferences and privacy interests of the students. Information which a person openly shares at a certain time in his life might be information to which he later wants to control access.

Based on this sort of reasoning we will address the following questions: do we have a certain obligation to protect the future privacy interests of students now using social networks? How can such interests be protected? Also, how are these claims compatible with the claim that students should be able to interact and willingly share information about themselves on social networks? Clearly these problems are important to address in relation to the widespread use of social networks.

REFERENCES

Debatin, B. Lovejoy, J. P., Horn, A-K., Hughes, B. N. (2009). Facebook and Online Privacy: Attitudes, Behaviours, and Unintended Consequences, Journal of Computer-Mediated Communication, 15, 83-108

Datainspektionen (DIB) decision 2008-10-01; http://www.datainspektionen.se/Documents/beslut/2009-10-02-Bromma_gymnasium.pdf, available in Swedish

Dworkin, G., Paternalism, The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/sum2010/entries/paternalism/; accessed 2011-03-03.

Lyon, D. (2006), Surveillance, power and everyday life, Oxford Handbook of Information and Communication Technologies, Oxford University Press.

Nissenbaum, H. (1998), Protecting Privacy in an Information Age: The Problem of Privacy in Public, Law and Philosophy, 17, 559-596

Rössler, B. (2005), The Value of Privacy, Cambridge, Polity Press.

van den Hoven, J. (2008), Information Technology, Privacy and the Protection of Personal Data, in van den Hoven, J. and Weckert, J. (eds), Information Technology and Moral Philosophy, Cambridge, Cambridge University Press

Wester, M. & Sandin, P. (2010), Privacy and the public – perception and acceptance of various applications of ICT, in Arias-Olivia, M., Ward Bynum, T., Rogerson, S., Torres-Coronas, T. (eds), The “backwards, forwards and sideways” changes of ICT: 11th International Conference on the Social and Ethical Impacts of Information and Communication Technology (ETHICOMP), pp. 580-586.

THE TRAJECTORY TO THE “TECHNOLOGICAL SINGULARITY”

AUTHOR
Casey Burkhardt

ABSTRACT

The idea of the technological singularity – the moment at which intelligence embedded in silicon surpasses human intelligence – is a matter of great interest and fascination. To the mind of a layperson, it is at once a source of wonder and apprehension. To those adept in the areas of technology and artificial intelligence, it is almost irresistibly attractive. On the other hand, it is an idea that rests on several assumptions about the nature of human intelligence that are problematic and have long been the subject of debate.

This paper discusses the major proposals, originating mainly in the artificial intelligence community, concerning the nature of the technological singularity, its inevitability, and the stages of progress toward the event itself. Attention is given to the problems raised by the concept of the singularity and the controversy that has surrounded the charting of milestones on the path to its realization.

Defining the Technological Singularity

The technological singularity is best defined as a point in time when a combination of computer hardware and artificial intelligence algorithms match or exceed the computational ability of the human brain. In defining this event, great emphasis is placed on the importance of advances in computational potential as well as in artificial intelligence and modeling techniques. It is proposed that such an event would have a staggering effect on humanity to an extent that is difficult, if not impossible, to predict. When this point has been reached, the concept of “recursive self-improvement” would allow technology to improve upon its own level of intelligence at a perpetually accelerated pace.
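As a toy illustration (mine, not a model that any of the authors discussed here commit to), the intuition behind “recursive self-improvement” can be written as a differential equation in which the rate of improvement of machine intelligence I(t) grows with I itself:

    \frac{dI}{dt} = k\, I^{p}, \qquad I(0) = I_{0}, \quad k > 0.

    p = 1:\quad I(t) = I_{0}\, e^{kt} \quad \text{(exponential growth; no finite-time divergence)}

    p > 1:\quad I(t) = \bigl[\, I_{0}^{\,1-p} - (p-1)\,k\,t \,\bigr]^{-1/(p-1)},
    \qquad I(t) \to \infty \ \text{as}\ t \to t^{*} = \frac{I_{0}^{\,1-p}}{(p-1)\,k}.

Only if improvement feeds back on itself more than proportionally (p > 1) does the model produce a genuine finite-time “singularity”; with merely proportional feedback it yields steady exponential growth, which is one reason the appropriateness of the term itself is contested.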

Difficulties in Pinpointing the Singularity and Its Milestones

One of the largest challenges in defining the technological singularity is that it is not an immediately measurable, instantaneous event. (For the purpose of this abstract, however, let us refer to the singularity as an event, even though estimates of its occurrence are always expressed in terms of an interval of time.) Advances in both hardware and software must be coordinated in a manner that allows artificial intelligence to supersede human intellect. Thus, identifying and measuring the events leading to this point is a nontrivial task. In a series of articles and books, Ray Kurzweil has made a multitude (147 at last count) of predictions that provide some guidance for measuring progress toward the technological singularity. Although most of these predictions do not concern steps taken explicitly or directly toward the event, they define advancements that are side effects of technological milestones along the way.

The Hardware Problem

In order to reach the technological singularity, humanity must be capable of producing computer hardware that can match or exceed the computational power of the human brain. Many feel that progress in nanotechnology will pave the way for this outcome. There are several projections as to the number of computations per second and the amount of memory required to reach this computational ability. Moore’s Law is often invoked in reference to the timeline for developing processors with the necessary capabilities, and Kurzweil has made several bold statements suggesting that this law is applicable beyond the domain of integrated circuitry, into the realm of artificial intelligence.
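For concreteness, the arithmetic behind such timeline projections can be sketched as follows; the figures in this sketch are deliberately rough, illustrative assumptions of mine, not estimates made in the works discussed here.

    # Back-of-the-envelope Moore's Law projection of when hardware throughput
    # might match an assumed brain-equivalent figure. All numbers below are
    # illustrative assumptions: published estimates of the brain's throughput
    # span several orders of magnitude, and the doubling period is contested.
    import math

    brain_ops_per_sec = 1e16       # assumed brain-equivalent operations per second
    current_ops_per_sec = 1e13     # assumed throughput of an affordable machine today
    doubling_period_years = 2.0    # classic Moore's Law doubling time

    doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
    years_needed = doublings_needed * doubling_period_years

    print(f"{doublings_needed:.1f} doublings, roughly {years_needed:.0f} years")
    # With these inputs: about 10 doublings, i.e. roughly 20 years. Shifting any
    # one assumption by a factor of 10 moves the date by several years, which is
    # one reason the published predictions diverge so widely.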

The Software Problem

Computer software is also a limiting factor in the eventuality of the technological singularity. In order to achieve superhuman intelligence as conceived in the definition of the singularity, efficient software capable of modeling and emulating every element of the human brain must be constructed and must operate properly. Kurzweil claims that while this is a significant challenge, it will be completed within a reasonable period of time. This is a view with which Vernor Vinge disagrees, citing scalability problems within the field of software engineering. The compatibility of the projected software with the targeted advanced hardware is also a matter of concern.

Reconciling a Miscellany of Predictions

Predictions as to the timing and nature of the technological singularity have been made by Vernor Vinge, Nick Bostrom, Hans Moravec, and Ray Kurzweil. These are evaluated and their merits and deficiencies considered. Several of these predictive models of the technological singularity use similar metrics in their attempts at formulating a target time period for the event. In this section, differences in the predicted trajectories that may result from small variances in base assumptions, related to time-biased inaccuracies, are discussed. Recalculating the predictions with the best current figures may provide a more consistent set of singularity timeframe estimates, or it may reveal fundamental inconsistencies in the assumptions on which these estimates are predicated.

Some Discrepant Views of the Singularity

The possibility of an event like the technological singularity rests on the assumption that all human intelligence is reducible to computing power and that humanity will learn enough about the function of the human mind to “build one” in silicon. This is a view with which many thinkers, including reputable computer scientists like Joseph Weizenbaum, have taken strenuous issue. Thus, in Computer Power and Human Reason, he asks, “What is it about the computer that has brought the view of man as machine to a new level of plausibility? … Ultimately a line dividing human and machine intelligence must be drawn. If there is no such line, then advocates of computerized psychotherapy may be merely heralds of an age in which man has finally been recognized as nothing but a clock-work.” This section explores Weizenbaum’s question through a review of the chronology, elements, and participants in this controversy.

Conclusion

There is an understandable tension between enthusiastic projections of the advance of the techniques of artificial intelligence and the sober recognition of real limitations in our current understanding of human intelligence. This highlights the importance of making ethical and responsible choices, and of exercising care, in formulating further predictions based on advances in this area of computing. This is underscored by Weizenbaum’s contention that, “The computer professional … has an enormously important responsibility to be modest in his claims.” Failure to do so in this particular area of interest has the potential to generate unrealistic expectations not only within the field but also, through sensational treatment by the media, in the population as a whole.

REFERENCES

Bostrom, N. 1998. How Long Before Superintelligence? International Journal of Futures Studies, Vol. 2, http://www.nickbostrom.com/superintelligence.html.

Kurzweil, R. 2010. How My Predictions Are Faring. Kurzweil Accelerating Intelligence. http://www.kurzweilai.net/predictions/download.php.

Kurzweil, R. 2000. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Group, New York.

Kurzweil, R. 2006. The Singularity is Near: When Humans Transcend Biology. Penguin Group, New York.

Minsky, M. 1994. Will robots inherit the earth? Scientific American 271(4): 108-11.

Moravec, H. 1998. When Will Computer Hardware Match the Human Brain? Journal of Transhumanism, Vol. 1, http://www.transhumanist.com/volume1/moravec.htm.

Vinge, V. 1993. Technological singularity. VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, http://www.frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html.

Weizenbaum, J. 1976. Computer Power and Human Reason. W.H Freeman and Company, San Francisco.

Weizenbaum, J. 1972. On the Impact of the Computer on Society: How does one insult a machine? Science, Vol. 176: 609-14

Bots, Agents, and Other Human Subjects: Ethical Research Online

AUTHOR
Elizabeth A. Buchanan, Ph.D. and Scott Dexter, Ph.D.

ABSTRACT

This paper investigates an emergent divide in research ethics discourses surrounding the concept of “human” subjects in emergent forms of computer science and Internet-based research. Using a cross-disciplinary approach, the authors seek to present novel ways of thinking through and solving the applied ethics challenges facing researchers in computer and information sciences.

The history of research ethics is rooted in biomedical and behavioral models, and was subsequently expanded to include social and humanities-based models of research. Research ethics are, in general, codified in national legislation and, in particular, in disciplinary norms. Sometimes such extant regulations and these disciplinary norms are out of sync, as in, for example, the case of the Oral History Association, which successfully argued to be excluded from the purview of formal regulatory ethics boards in the United States and elsewhere. However, ethics boards across the world are conforming to stricter utilitarian models (Buchanan, 2010), often risking individual rights and justice in their practices. This movement may be the result of a number of factors, most recognizably a more legalistic and litigious environment for researchers and institutions. But we argue that this movement towards a stricter utilitarianism is also the result of emergent forms of research which minimize the “human” in research; this movement is characteristic of a “research ethics 2.0” (Buchanan, 2009).

Part of the challenge faced in refining ethics standards to properly account for research conducted on or within a network is that the “raw material” of research tends to be viewed by the researcher as “data objects” rather than “human subjects”. In some projects, say, an effort to develop new network protocols for optimal real-time delivery of video, the data being studied is probably not sensibly construed as being produced by a human (though even here, IP addresses of participating computers may be recorded, and may be linked to humans – is this an issue of ethical concern?). Other projects may focus on segments of a network which are commonly viewed as “social”; such research may focus, for example, on how such “spaces” are structured (e.g. the topology of social networks), or on the nature of the transactions and interactions which arise. In all these cases, data which may be connected to a human subject may be easily obtainable and/or necessary for the conduct of the research. Or, in bot research, the evidence of a “human” subject is minimal at best, as more CS research distances the “subject” from the researcher. Instead, a bot or agent is seeking or scraping “data”, and the risk seems minimal. Thus, an ethics board will look at the benefits of the research more liberally, if at all, and often conclude that the research will be advantageous to more people than it could possibly hurt. This stance undermines the concept of the human in digital and virtual realms, minimizing the extent to which such automated research can affect an individual’s autonomy, privacy, consent, and basic rights.

The emergence of research ethics 2.0 challenges the long-standing process of research, questioning what Forte (2004) has described as scientific takers and native givers. Within the discourse of research ethics 2.0, the accepted principles of human subjects research are interrogated. Such pressing questions as listed below must be discussed within disciplinary specificity but also with the goal of cross-disciplinary best practices:

  • What are public spaces online and what rights do researchers and researched have in such spaces?
  • How is confidentiality, if anonymity is no longer an option, assured in such venues as MUDs, MMORPGs, and other digital worlds?
  • Are “agents” humans?
  • Can a bot research another bot ethically?
  • How, and should, informed consent be obtained (Lawson, 2004; Peden & Flashinski, 2004)?
  • Is deception online a norm or a harm?
  • What are harms in an online environment?

And, ultimately, what are the ethical obligations of researchers conducting CS or Internet-enabled research, and how do they fit into or diverge from extant human subjects models? Are alternative ethics review models possible, especially in light of emergent models of research, and how should they be constituted?

By examining specific cases of CS and Internet-based research, this paper aims to have a broad impact on applied ethics, which crosses disciplinary boundaries; on the real-world practices of researchers from a variety of disciplines; and on the practices and policies of ethics boards seeking to ensure human subjects protections in novel environments and research contexts.