Brain-Computer Interfaces: a technical approach to supporting privacy

Kirsten Wahlstrom, Ben Fairweather and Helen Ashman



Brain-Computer Interfaces (BCIs) facilitate communication between a brain and a computer and can be categorised according to function: interpretation of neural activity, stimulation of neural activity, and interpretation-stimulation. Warwick’s self-experiments with implants in the interpretation-stimulation category (Warwick and Gasson, 2004) demonstrate the technical feasibility of extending the human nervous system beyond its biological limits to other systems, and to other people, via the internet. Furthermore, there have been recent advances in interpreting passive neural activity (Coffey et al., 2010) and also in concurrent interpretation of visual and motor intentional neural activity (Allison et al., 2010). A future BCI (fBCI) integrating these technical features would concurrently interpret both intentional and passive neural activity in order to communicate information to systems and other people via the Internet.

In addition to these research advances, BCIs interpreting intentional neural activity via electroencephalography (EEG) are available to consumers (Emotiv Systems, Intendix). Should fBCI be commercially viable, an ethical and legal obligation to support privacy will exist.

Privacy emerges from a society’s communication practices (Westin, 2003). Although acculturation plays a role in shaping privacy expectations, the extent to which one person requires privacy may differ from that required by another (Gavison, 1980). In addition, a person’s perception of privacy is dependent on context (Solove, 2006). For example, I am likely to openly disclose details related to my health to my father, judiciously disclose these details to my friends, and withhold them completely from strangers. Thus, privacy requirements are diverse and susceptible to change.

Privacy is a component of freedom, autonomy and identity. When using technologies, people assert independence and autonomy by declining to participate, or by using anonymity or misinformation to create and maintain privacy (Lenhart and Madden, 2007; Fuster, 2009). When people opt out, adopt anonymity or engage in misinformation, the effectiveness of any technology reliant upon accurate and representative data is compromised.

The conceptualisation of privacy as culturally shaped and unique for each person and their immediate context is well understood, long-standing, and widely applied by law- and policy-makers. It forms the basis for legislative and other regulatory approaches such as the Australian Privacy Act, the EU’s Privacy Directives and the OECD’s Guidelines. These legal obligations, and further ethical obligations (Burkert, 1997), mandate support for privacy with respect to technologies. In addition, if technologies support privacy, people are more likely to provide accurate information, adding value to the technology itself. However, to the authors’ knowledge, there have been no investigations of technical approaches to supporting privacy in BCIs. This paper presents a conceptual model for consideration and critique.

BCI technology

BCIs identify and measure the electrical activity associated with activating specific neural pathways (Berger et al., 2007). These measurements are then applied to the control of external systems (Hochberg et al., 2006). With respect to BCI technologies, the identification and measurement of neural activity has been achieved with surgically invasive and non-invasive approaches. While surgical BCIs identify and measure neural activity with a higher level of accuracy, non-surgical approaches carry fewer health risks. Thus, there has been interest in improving the accuracy of non-surgical BCIs (Allison et al., 2010).

BCIs have neural networks which must be trained to identify a person’s neural activities and then to map specific neural activities to specific intentions. For example, consider a scenario in which Ann has purchased a new BCI to use with her mobile phone. She must spend time training the BCI to recognise the unique pattern of neural activity that matches with imagining each person in her mobile phone’s address book and to recognise neural activity corresponding to the ‘call’ and ‘hang up’ commands.
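The training step in Ann’s scenario can be pictured as a toy classifier. The sketch below is purely illustrative: the feature vectors stand in for EEG-derived features (e.g. band-power values) and are invented for this example; real BCIs use far richer signal processing and learning methods.

```python
import math

# Hypothetical training data: each sample is a feature vector extracted
# from EEG readings; labels are Ann's trained intentions.
training = {
    "call":    [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "hang_up": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}

def centroid(vectors):
    """Mean feature vector for one intention class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# One centroid per intention, learned from Ann's training session.
centroids = {label: centroid(vs) for label, vs in training.items()}

def classify(sample):
    """Map a new neural-activity sample to the nearest trained intention."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))
```

On this toy data, a sample close to the ‘call’ training vectors is mapped to the ‘call’ command; the point is only that training amounts to learning a mapping from patterns of neural activity to intentions.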

Conceptual model

If BCIs can identify and measure neural activity, then they can also identify and measure a person’s privacy perception and requirement. The person’s privacy requirement can then be applied to any information they may be sharing. For example, consider a scenario in which Bob is using a BCI to interact with his mobile phone. He is calling Charlie but does not want the call logged in the mobile phone’s memory. First, he thinks of Charlie and the mobile phone retrieves Charlie’s number. Then Bob thinks of not logging the call and the mobile phone saves this privacy requirement in its working memory. Finally, Bob thinks ‘call’ and the mobile phone places the call without logging it.
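One way to make the conceptual model concrete is the minimal sketch below, in which the phone holds the most recent privacy requirement in working memory and consults it before logging. All class and method names are hypothetical, invented for illustration; nothing here reflects an actual BCI or handset API.

```python
class PhoneBCI:
    """Hypothetical sketch of Bob's scenario: a privacy requirement held
    in working memory is applied to the next action, then cleared."""

    def __init__(self, address_book):
        self.address_book = address_book   # name -> number
        self.call_log = []                 # persistent call log
        self.pending_privacy = None        # working memory

    def think_of(self, name):
        # Interpreted neural activity retrieves the contact.
        self.current_name = name
        self.current_number = self.address_book[name]

    def think_privacy(self, requirement):
        # e.g. "no_log", interpreted from Bob's neural activity.
        self.pending_privacy = requirement

    def think_call(self):
        placed = f"calling {self.current_number}"
        if self.pending_privacy != "no_log":
            self.call_log.append(self.current_name)
        self.pending_privacy = None        # the requirement applies once
        return placed
```

With a “no_log” requirement pending, the call is placed but never appended to the log; without one, logging proceeds as normal. The single-use working memory mirrors the scenario: the privacy requirement attaches to one action, not to the device as a whole.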

This scenario is a binary situation: log the call or don’t log the call. However, privacy requirements are much more diverse than this. If this conceptual model can be refined to support a diversity of privacy requirements, a technical prototype will be designed, implemented and tested.

The full paper will further conceptualise privacy with a view to informing a future prototype. Then the paper will describe the technologies underlying BCIs. These conceptual and technical descriptions will enable the proposition of a technical conceptual model for the prototype, which offers flexibility with respect to privacy and interoperability with respect to existing BCIs. Conclusions will stimulate consideration, discussion and critique.


ALLISON, B. Z., BRUNNER, C., KAISER, V., MULLER-PUTZ, G. R., NEUPER, C. & PFURTSCHELLER, G. 2010. Toward a hybrid brain-computer interface based on imagined movement and visual attention. Journal of Neural Engineering, 7, 026007.

BERGER, T., CHAPIN, J., GERHARDT, G., MCFARLAND, D., PRINCIPE, J., SOUSSOU, W., TAYLOR, D. & TRESCO, P. 2007. International Assessment of Research and Development in Brain-Computer Interfaces.

BURKERT, H. 1997. Privacy-Enhancing Technologies: typology, critique, vision. Technology and privacy: the new landscape.

CAMPBELL, A., CHOUDHURY, T., HU, S., LU, H., MUKERJEE, M., RABBI, M. & RAIZADA, R. NeuroPhone: Brain-Mobile Phone Interface using a Wireless EEG Headset. MobiHeld 2010, 2010. ACM.

COFFEY, E., BROUWER, A.-M., WILSCHUT, E. & VAN ERP, J. 2010. Brain-machine interfaces in space: Using spontaneous rather than intentionally generated brain signals. Acta Astronautica, 67, 1-11.

DRUMMOND, K. 2009. Pentagon Preps Soldier Telepathy Push. Wired.

EMOTIV SYSTEMS. Emotiv – Brain Computer Interface Technology [Online]. Available:

FUSTER, G. 2009. Inaccuracy as a privacy-enhancing tool. Ethics and Information Technology.

GAVISON, R. 1980. Privacy and the Limits of Law. The Yale Law Journal, 89, 421-471.

HOCHBERG, L., SERRUYA, M., FRIEHS, G., MUKAND, J., SALEH, M., CAPLAN, A., BRANNER, A., CHEN, D., PENN, R. & DONOGHUE, J. 2006. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442, 164-171.

INTENDIX Personal EEG-based Spelling System.

LENHART, A. & MADDEN, M. 2007. Teens, Privacy and Online Social Networks: How teens manage their online identities and personal information in the age of MySpace.

SHACHTMAN, N. 2008. Army Funds ‘Synthetic Telepathy’ Research. Wired.

SOLOVE, D. 2006. A Taxonomy of Privacy. University of Pennsylvania Law Review, 154, 477-560.

WARWICK, K. & GASSON, M. Extending the human nervous system through internet implants – experimentation and impact. IEEE International Conference on Systems, Man and Cybernetics, 2004 The Hague, Netherlands. 2046-2052.

WESTIN, A. 2003. Social and Political Dimensions of Privacy. Journal of Social Issues, 59, 431-453.

Lurking: finding one’s self while remaining hidden

Richard Volkman


“Lurking” refers to the online behavior of gathering information from interactive resources like forums and social networking sites without participating in the interactivity that generates the online content and without disclosing one’s self as a consumer of that content. I propose to explore the ethical significance of lurking as it specifically relates to social networking sites like Facebook. This focus will reveal the deep relativity of the ethics of lurking to its particular context and motivations. In the context of social networking sites like Facebook, lurking and its complement in the public disclosure of personal information are expressions of the particular characters of particular social animals situated in a particular social circumstance. Phenomena like lurking indicate the ethical significance of deliberations that cannot be readily undertaken in the language of impersonalist ethics, since lurking takes place at the vague and overlapping boundaries of self and other, friend and stranger, this community and that community. To deliberate effectively about such matters we do better to operate in the personalist language of character, self-discovery, and self-realization than the impersonalist discourse of overall consequences, moral rules, and political rights that characterizes so much of academic philosophy in ethics.

By “personalist” discourse, I mean ethical discourse that admits the relevance of particular persons and circumstances in properly judging what there is most reason to do or want or be. Aristotle’s virtue ethics or the heroic individualism of Emerson and Nietzsche are personalist in this sense. “Impersonalist” accounts of ethics strive for a level of impartiality and universality, whether in the form of some agent-neutral specification of the “overall good” in consequentialism, or in the specification of universal rules to govern conduct for “persons as such” in the deontology of Kant’s moral theory or Locke’s account of natural rights. The point of the essay is not to establish the absolute superiority of one or the other of these modes of discourse but to indicate the extent to which one’s account of the ethical universe will be impoverished if one altogether eschews personalist ethical discourse on the grounds that it is insufficiently impartial or universal in its application.

While there is a considerable literature on the ethics and motivations of lurking, there is relatively little investigation of it in the context of social networking and even less that reflects on the ultimate particularity of good judgement in that domain. There are many articles on the ethics of lurking in the context of formal research such as ethnography, and there are several articles examining the values of lurking versus more active participation in online learning communities. These are appropriate issues for academics to study, since they directly touch the pedagogical and policy concerns that shape our vocation, but we generally do not evaluate our everyday conduct in the terms of policy or professionalism. Meanwhile, treatments of lurking that focus on the free riding it may entail are appropriate to the investigation of the flourishing and success of online collaborative projects like Wikipedia or Slashdot or Reddit, but they do not seem particularly relevant to the motives or the consequences of lurking on social networking sites. In fact, one of the purposes that people bring to such sites is to disclose information about themselves, and this purpose would often be frustrated or at least undermined if there were no lurkers. This indicates a complementary relation between “lurkers” and “disclosers” that may be present in other venues but which I will argue is an essential feature of social networking. Finally, analysis that focuses on hacking, violating terms of service, or other straightforward breaches of contract or policy is not directly relevant, since these issues are adequately addressed in impersonalist modes of discourse. It will be shown that lurking on social networking sites has obvious and profound privacy implications, but they are not the sort that can be neatly captured by an impersonalist discussion of privacy or the policies and social norms that attend it.

To orient the discussion, it is helpful to describe concrete cases in which lurking on a social networking site has plain ethical significance that cannot be easily captured without explicit reference to the particular person embedded in her particular circumstance and relations. This is a daunting task that cannot be neatly summed in an abstract, since it cannot proceed in the usual manner of “situated action ethics,” such that the story pumps an obvious intuition that is leveraged as evidence for or against some account of the rules or policies appropriate to the case. The point is that the particular facts matter, and this cannot be illustrated by leaving out the particular facts; nor is it possible to tell a story short of a novel that would include all those relevant facts. Instead, the cases will illustrate how many salient details are invariably left out of any such description, while also showing the case is an instance of real ethical significance and not a matter of mere taste or whim. The paper will proceed from a detailed investigation of such particular cases to illustrate that ethics is not simply a matter of evaluating actions or rules. One’s situated character matters.

In the end, it will be revealed that lurking on social networking sites cannot be categorically judged to be good or bad and that crude political tools for balancing privacy and the flow of information shed little light on the phenomena. To engage these matters, we need to engage the concrete realities of our particular lives in their full particularity. Ultimately, if we are to navigate the ethics of real life in the information age, we shall each need to engage our selves.

The Global Communication skill among educated youth of the Tharu tribe: A study in special reference to uses of internet-facilities

Subhash Chandra Verma


The Tharu community is a well-known tribe of India and Nepal. The Tharus are indigenous to the Himalayan Tarai area, and most of the community lives on both sides of the Indo-Nepal border; the Tharu were already living in the Tarai before Indo-Europeans arrived. Because of the friendly relations between India and Nepal, the Indo-Nepal border is open to people of both countries, so Indian and Nepali Tharus maintain active socio-cultural relationships. This paper is based on primary and secondary data and describes the status of global communication and connectivity among the educated youth of the Tharu tribe. The selected sample comprised 32 male and 18 female students, all belonging to various villages. A self-developed questionnaire was used to interview all selected Tharu students and to collect information about their awareness of global communication. Facts about internet use, social networking, chatting tools and the development of online communities were also collected from the internet: Facebook, Orkut, Yahoo Messenger, MSN Messenger, Skype and Google search were used to find Tharus who are connected and globally active through the internet, and some further information about Nepali and Indian Tharus was located in the same way. There is a common perception that the Tharus are very reserved and shy by nature and therefore backward. This study found that they are indeed very poor in global connectivity, owing to their traditional habits. This applies only to Indian Tharus, however, because Nepali Tharus are more aware than Indian Tharus in their use of the internet and in direct links to other people.
As a result, Nepali Tharus are working at the global level, while Indian Tharus are still struggling for their basic needs in this era of globalization. Even educated Indian Tharus are backward and poor in their use of global communication facilities such as the internet. Most (approximately 75%) Indian Tharu students are still unable to use a computer or the internet, although they know very well that the use of modern communication technologies is very helpful for the development of any society. They thus remain distant from facilities that are available in the college and in the market at very low cost. There is a need for greater awareness of global communication and connectivity among Indian Tharus for their development. The Indian Tharu youth who have access to higher education are not especially aware of globalization and global communication; although they recognise the significance of global communication for the development of any community in this era, they are nonetheless not active in it. There is no dearth of facilities available free of cost (at college) or at very low prices in the market, yet educated Indian Tharu youths seem to have little interest in them. Nepali Tharu youths are more active than Indian Tharu youths in global communication through the internet and through direct contacts with people of other countries. Many Nepali Tharu students study in top-grade Indian institutes, but Indian Tharu youths have little awareness of studying in these institutes, even though they have the special facility of reservation for admission to such institutions. What is the real status of global communication, and what are the main problems Indian Tharu youth face in this regard? Why are they not interested in global communication? These are the central questions at present.
On the basis of this analysis of data collected from the Indian Tharu students, together with other information from the related literature and internet searches, it can be said that the Indian Tharu community remains poor and deprived in the matter of global communication in this era of globalization. A lack of awareness of development and globalization is the reason for their backwardness in global communication. Owing to poor English, some Indian Tharus feel shy and hesitant about keeping global contacts, whether via the internet or directly. Educated Indian Tharus are also poor and slow in global communication because of their typical traditional habits of hesitation and shyness. That is why the Indian Tharus have only one online community, the Rana Tharu Parishad, while the Nepali Tharus have many online communities for social networking. Most educated Indian Tharus (about three quarters) are still unable to use a computer or the internet. This is the era of globalization, so global communication is essential for the development of every community; the Indian Tharus therefore need to join the stream of global communication. Tharu youth are a very important wing of their community and play a very creative role in it, but they are not connected with the mainstream of development. Some youths are trying to obtain higher education and advanced technology, but they are very few, and they are neither advanced nor closely linked with their traditional culture. They should have access to modern education, communication, technology and new lifestyles, but care for their traditional culture is necessary if they are to keep their own identity.

Web Deception Detanglement

Anna Vartapetiance and Lee Gillam


Suppose we wished to create an intelligent machine, and the web was the choice of information. More specifically, suppose we relied on Wikipedia as a resource from which this intelligent machine would derive its knowledge base. Any acts of Wikipedia vandalism would then impact upon the knowledge base of this intelligent system, and the system might develop confidence in entirely incorrect information. Ethically, should we develop a machine which can craft its own knowledge base without reference to the veracity of the material it considers? If we did, what kinds of “beliefs” might such a machine start to encompass? How do we address the veracity of such materials so that such a learning machine might distinguish between truth and lie, and what kinds of conclusions might be derived about our world as a consequence? If trying to construct an ethical machine, how appropriate can ethical outcomes be considered in the presence of deceptive data? And, finally, how much of the Web is deceptive?

In this paper, we will investigate the nature and, importantly, the detectability of deception at large, and in relation to the web. Deception appears to be increasingly prevalent in society, whether deliberate, accidental, or simply ill-informed. Examples of deception are readily available: from individuals deceiving potential partners on dating websites, to surveys which make headlines about “Coffee causing Hallucinations” with no medical evidence and very little scientific rigour, to companies which collapsed due to deceptive financial practices (e.g. Enron, WorldCom), and segments of the financial industry allegedly misrepresenting risk in order to derive substantial profits. We envisage a Web Filter which could be used equally well as an assistive service for human readers, and as a mechanism within a system that learns from the web.

So-called Deception Theory, and the possibility of modelling human deception processes, is interesting to experts in different subject fields for differing reasons and with different foci. Most research has been directed towards human physical reactions in relation to co-located (face-to-face, synchronous) deception, largely considering non-verbal cues involving body language, eye movements, vocal pitch and so on, and how to detect deceptions on the basis of such cues. Such research is interesting to sociologists in terms of how deception is created, to criminologists in trying to differentiate the deceptive from the truthful, and to computer vision researchers in identifying such cues automatically across participants within captured video. Alternative communication media, in which participants are distributed, communications are asynchronous, and cues can only be captured from the verbal artefact of the communication, require entirely different lines of expertise and investigation.

To try to recognize deception in verbal communication, lexical and grammatical analysis is typical. Such approaches may be suitable for identifying deception on the web. It is assumed that deception leads to identifiable, yet unconscious, lexical selection and the forming of certain grammatical structures (Toma and Hancock, 2010), and these may act as generally useful cues for deceptive writing. From numerous researchers (Burgoon et al 2003, Pennebaker et al 2003, Newman et al 2003, Zhou et al 2003), we find that the presence of such cues can be divided into four principal groups:

1. Use of more negative emotion words

2. Use of distancing strategies – fewer self-references

3. Use of larger proportions of rare and/or long words

4. Use of more motion words

To demonstrate that deception detection might be possible, Pennebaker and colleagues developed a text analysis program called Linguistic Inquiry and Word Count (LIWC), which analyses texts against an internal dictionary (Pennebaker, Francis, & Booth, 2001; Pennebaker, Booth, & Francis, 2007). Each word in the text can belong to one or more of LIWC’s 70 dimensions, which include general text measures (e.g. word count), psychological indicators (e.g. emotions), and semantically related words (e.g. temporally and spatially related words). We submitted the BBC’s “’Visions link’ to coffee intake” article, alluded to earlier, to the free online version of LIWC. The results showed an absence of self-references, few positive emotions, and a large proportion of “big words”. Of course, such an analysis is inconclusive, as such features may also be true of scientific articles or textbooks, and LIWC leaves the interpretation up to us.
Our aim is to create a system that can identify deceptive texts, but also explains which are the most deceptive sentences. Such a system could act as an effective Web Filter for both human and machine use. We assume that such a system should be geared towards the peaks of deception which may occur in texts which are otherwise not deceptive, so such deceptions may be lost in the aggregate. We must also account for systematic variations as exist in different text genres in order to control for them, and to ascertain the threshold values for the various factors which give us appropriate confidence in our identification.
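A minimal sketch of this idea appears below. The word lists are invented placeholders (not LIWC’s actual dictionary), the cue weights are naive, and the threshold is arbitrary; a real system would calibrate all three per genre, as discussed above. The sketch only illustrates scoring sentences individually so that a deceptive peak is not lost in a document-level aggregate.

```python
import re

# Illustrative word lists -- placeholders, not the LIWC dictionary.
NEGATIVE_EMOTION = {"hate", "worthless", "awful", "hurt", "ugly"}
SELF_REFERENCES = {"i", "me", "my", "mine", "myself"}
LONG_WORD_LEN = 6  # words of 7+ letters count as "big words"

def cue_score(sentence):
    """Score one sentence on the cue groups above: more negative-emotion
    words, fewer self-references, and more long words raise the score."""
    words = re.findall(r"[a-z']+", sentence.lower())
    if not words:
        return 0.0
    neg = sum(w in NEGATIVE_EMOTION for w in words) / len(words)
    self_ref = sum(w in SELF_REFERENCES for w in words) / len(words)
    long_w = sum(len(w) > LONG_WORD_LEN for w in words) / len(words)
    return neg + long_w - self_ref

def deception_peaks(text, threshold=0.5):
    """Return the sentences whose cue score exceeds the threshold, so
    isolated peaks of deception stand out from surrounding text."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if cue_score(s) > threshold]
```

Run over a paragraph, this flags only the sentences whose cue density crosses the threshold, which is the behaviour the envisaged Web Filter would need, albeit with properly validated dictionaries and genre-controlled thresholds.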

The full paper will initially review the literature relating to deception in general, and distinguish between deceptions and lies. In the process, we will offer up some interesting – and occasionally amusing – examples of deception. We will then focus on text-based deception, and will discuss initial experiments geared towards the development of the system mentioned above. One of these experiments may even demonstrate how a paper supposedly geared towards deception detection, whose conclusions fail to fit its aim, was never likely to achieve that aim; we will also mention one or two other interesting examples of academic deception and/or lies.


Toma, C.L. and Hancock, J.T. (2010). Reading between the Lines: Linguistic Cues to Deception in Online Dating Profiles. Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW 2010).

Burgoon, J.K., Blair, J.P., Qin, T., and Nunamaker, J.F. (2003). Detecting deception through linguistic analysis. Intelligence and Security Informatics, 2665.

Pennebaker, J.W., Booth, R.J., & Francis, M.E. (2007). “Linguistic Inquiry and Word Count: LIWC 2007”. Austin, TX: LIWC.

Pennebaker, J.W., Francis, M.E., and Booth, R.J. (2001). “Linguistic Inquiry and Word Count: LIWC 2001”. Mahwah, NJ: Erlbaum Publishers.

How to Address Ethics of Emerging ICTs: A critique of Human Research Ethics Reviews and the Search for Alternative Ethical Approaches and Governance Models

Bernd Carsten Stahl


The purpose of this paper is to explore how ethical issues arising from emerging technologies can currently be addressed using the mechanism of ethical review, which dominates the approach to ethics on the European level. The paper discusses which blind spots arise due to this approach and ends with a discussion of alternative and complementary approaches.

The paper arises from the EU FP7 project Ethical Issues of Emerging ICT Applications (ETICA), which ran from 04/2009 to 05/2011. Its main focus was on exploring which emerging ICTs can reasonably be expected to become relevant in the next 10 to 15 years, exploring their ethical consequences, and proposing ways of addressing these. The current abstract gives a brief review of the approach and findings of the project and then concentrates on the way ethical issues are currently addressed in technical research in the EU, namely by ethics review. The abstract argues that this approach will be incapable of dealing with a significant number of the ethical issues identified by ETICA, and it will discuss other ways of doing so. The rest of the abstract develops this argument in some more depth.

In order to assess whether processes of ethics governance will be fit for purpose, the first task is to come to a sound understanding of which technologies are likely to emerge. The methodology employed to identify emerging ICTs was a structured discourse analysis of documents containing visions of future technologies. Two types of documents were analysed: 1) high level governmental and international policy and funding documents and 2) documents by research institutions.

The grid of analysis used to explore these documents is shown in the following figure:
Data analysis found more than 100 technologies, 70 application examples and 40 artefacts. These were synthesised into the following list of emerging ICTs:

  • Affective Computing
  • Ambient Intelligence
  • Artificial Intelligence
  • Bioelectronics
  • Cloud Computing
  • Future Internet
  • Human-machine symbiosis
  • Neuroelectronics
  • Quantum Computing
  • Robotics
  • Virtual / Augmented Reality

By “technology” we mean a high-level socio-technical system that has the potential to significantly affect the way humans interact with the world.

Having identified likely emerging ICTs, the next task was to explore which ethical issues these are likely to raise. In order to identify likely ethical issues of emerging ICTs, a literature analysis of the ICT ethics literature from 2003 was undertaken. This started out with a novel bibliometric approach that mapped the proximity of different concepts in the ICT ethics literature. The following figure is a graphical representation of this bibliometric analysis:
Using this bibliometric analysis as a starting point, a comprehensive analysis of the ICT ethics literature was undertaken for each technology.

The following mind map represents the headings of the ethical issues identified for the different technologies:
Figure 3: Ethical issues of emerging ICTs

The ethical analysis showed that there are numerous ethical issues that are discussed with regards to the technologies. The number and detail of these ethical issues varies greatly. This variation is caused by the differing levels of detail and length of time of discussion of the technologies. Several recurring issues arise, notably those related to:

• privacy,

• data protection,

• intellectual property,

• security.

In addition to these, there were numerous ethical issues that are less obvious and currently not regulated. These include:

• autonomy, freedom, agency,

• possibility of persuasion or coercion,

• responsibility, liability,

• the possibility of machine ethics,

• access, digital divides,

• power issues,

• consequences of technology for our view of humans,

• conceptual issues (e.g. notions of emotions, intelligence),

• link between and integration of ethics into law,

• culturally different perceptions of ethics.

This non-comprehensive list shows that there are numerous ethical issues we can expect to arise.

In order to motivate policy development, the relevance and severity of these issues were evaluated. Evaluation of the emerging ICTs and their ethical issues was done from four different perspectives:

• Law:

The analysis was based on the principles of human dignity, equality and the rule of law. A review of 182 EU legal documents revealed that the legal implications of emerging technologies were not adequately reflected.

• (Institutional) ethics:

The earlier ethical analysis was contrasted by looking at opinions and publications of European and national ethics panels or review bodies. The review furthermore covered the implied normative basis of technology ethics in the EU.

• Gender:

A review of the gender and technology literature showed that in the case of five technologies such gender implications had already been raised in the literature.

• Technology assessment:

This analysis asked how far the ICTs are developed and what their prospects of realisation are. The expected benefits and possible side effects were discussed, as well as the likelihood of controversy arising from the different technologies.

This literature-based analysis was supplemented and validated by an expert evaluation workshop. The evaluation found that several of the technologies are so closely related that they should be treated in conjunction. Building on the criteria of likelihood of coming into existence and raising ethical debate, the following ranking was suggested:

1. Ambient Intelligence

2. Augmented and virtual reality

3. Future Internet

4. Robotics and Artificial Intelligence and Affective computing

5. Neuroelectronics and Bioelectronics and Human-Machine Symbiosis

6. Cloud Computing

7. Quantum Computing

This ranking will allow for the prioritisation of activities and policies.

Ethics is described as an important part of technical research in the EU. The European Union is based on shared values as laid out in the Charter of Fundamental Rights of the European Union and implemented in the Europe 2020 and other strategies and policies. Information and communication technologies (ICTs) are needed to achieve numerous policy objectives.

It is therefore important to ensure that the development and use of ICTs lead to consequences compatible with European values. ICT research projects funded by the EU 7th Framework Programme have to reflect on ethical issues and how to resolve them. This is currently verified by an ethics checklist, which is filled in by project proposers and evaluated by technical experts during the evaluation of the project. If these experts flag the project as ethically relevant, it is reviewed by an ethics review panel.

In order to understand whether this approach is suitable for dealing with the ethical issues, the issues were classified as follows:
Figure 4: Top level categories of ethical issues in emerging ICTs.

A more detailed analysis of the last two categories, social consequences and impact on individuals, can be seen in the following figure:
Figure 5: Categories of ethical issues related to impact on individuals and social consequences

The colour coding in this figure indicates whether existing ethics processes are likely to identify these issues and deal with them in a satisfactory manner. Green means that these are established issues that the ethics checklist and subsequent review are likely to identify. Issues depicted in yellow are less clear-cut, and red issues are unlikely to be addressed by the current approach.

Having thus established that the EU’s current way of dealing with ethical issues is unlikely to resolve all problems satisfactorily and does not measure up to the EU’s rhetoric, the next question is how this can be addressed. The full paper will review the computer ethics literature and explore whether alternative approaches offer more promising avenues for addressing these issues.

Overall, the paper will contribute to a theoretically sound and practically relevant way of understanding, evaluating and dealing with ethics in emerging technologies.

Eeny, Meeny, Miny, Masquerade! Advergames and Dutch Children: A Controversial Marketing Practice

Isolde Sprenkels and Dr. Irma van der Ploeg


In a society increasingly inundated with digital technology, children in the Netherlands learn from a very young age how to use new information and communication technologies (ICTs). These technologies offer them ways to play, learn, explore and develop their sense of identity, as well as to interact and communicate with adults and peers. Children spend ever more time in front of computer and mobile screens, with gaming as one of their favourite activities. One type of game many children enjoy playing is the online casual or mini game. These short, ‘free’ and easy-to-learn games have friendly designs with bright colours and fun tasks to perform, and are developed to entertain, educate or deliver a particular commercial message. This paper focuses on the latter: the ‘advertisement as game’, developed around a particular brand or product, which can be described as an ‘advergame’.

Advergames are used by companies to build brand awareness, prolong contact time, stimulate product purchase and consumption, drive traffic to a brand’s website, generate consumer data, and build and expand digital profiles of consumers. Especially when played by children, advergames can be considered problematic and controversial, as they are seen to exploit children by taking advantage of their stage of psycho-social development and by integrating unseen technological features. They bring together several issues related to identity, consumption, marketing, profiling and datamining. Using insights from surveillance studies, science and technology studies, (sociological) studies of identity construction in relation to ICTs, and studies on children and consumption, this paper will analyse several advergames targeted at Dutch children. It examines how this new form of marketing communication fits into corporate objectives and why its use with children can be considered controversial.

First, advergames will be examined against a discourse that suggests it is immoral to economically exploit children: children are considered vulnerable, and it is inappropriate to take advantage of this vulnerability by using sophisticated marketing strategies on them. As children’s cognitive skills are not yet fully developed and they have little life experience, their ability to interpret and assess commercial messages is limited. This makes persuasive strategies unethical, as children are still learning to distinguish commercial messages and are unable to make choices that would protect them from certain forms of marketing manipulation (Moore 2004; see also Buijzen and Valkenburg 2003). Research has shown that children find it difficult to distinguish between advertising and editorial content in online environments (Nielsen 2002; Mijn Kind Online 2008). There is also an increasing lack of parental supervision in children’s use of the internet (Qrius 2007). This implies that many children are on their own when it comes to identifying commercial content online and developing digital information skills. Codes of conduct such as the Dutch Advertising Code prescribe that the distinction between advertising and editorial content should always be made recognisable. However, when it comes to advergames, this distinction is not made explicit in any way, making it a very difficult task for children to discriminate between advertisement and entertainment in these ‘seamless environments’ (Moore 2004).

Arguably, this is part of a marketing strategy. Eliminating the recognisability of the commercial message and of marketer practitioners’ intentions and tactics fits the ‘kidsmarketing’ strategy of tailoring messages, products, packages, websites and advertisements in a way that appeals to children’s ‘wants and needs’ and is identifiable to them, with ‘play and fun’ at its core (Cook 2010). Advergames appear to be the ultimate form of this ‘play and fun’ approach: a ‘masquerade’ in which marketer practitioners hide behind a screen full of play and fun, reaching their own commercial goals in the meantime. More specifically, while advergames may be seen as an opportunity to play something fun for free, children remain unaware of the commercial intent and manipulation behind the (adver)game, which can be seen to mediate and even transform their play, their sense of self and their understanding of the world around them. Not only are children offered what they ‘want and need’ from the viewpoint of the marketer practitioner; what they ‘want and need’ appears to be produced by this very same strategy.

Second, the features designed into advergames to reach corporate goals such as building brand awareness, stimulating consumption and generating consumer data will be examined. A study on children and advergames shows that many of these games include features encouraging repeat play and product purchase, offering such things as multiple game levels, public displays of high scores and game tips within product packages (Moore 2006). Another study indicated a relationship between the capacity of an advergame to induce a state of flow, a mental state of subjective absorption in an activity, and a change in the buying behaviour of (in this specific case adult) players (Gurau 2008). Advergame research also shows how some of these games include product-related polls or quizzes, offering valuable information for market research on children’s habits and preferences (Moore 2006; Grimes 2008). They may also encourage players to register and share their gaming experience with friends or family, collecting personally identifiable information (Gurau 2008). Combined with an analysis of in-game behaviour and activities, marketers are able to construct detailed consumer profiles based on the aggregation of these behavioural and demographic data (Grimes 2008; Chung & Grimes 2005). Through this, advergames can be described as ‘electronic surveillance devices’, as they enable a new form of tracking of children’s activities. In addition, studies on online communities for children and advertising discuss marketers using immersive advertising campaigns such as advergames, encouraging children to play with particular products, enabling them to identify the brand at a later point in time (Grimes & Shade 2005) and to create a ‘personal relationship’ with the product (Steeves 2006). Such campaigns teach children to trust brands and consider them their friends, not only recommending products, but becoming ‘role models for the child to emulate, in effect embedding the product right into a child’s identity’ (Steeves 2006).


Chung, G. & Grimes, S. (2005) ‘Data Mining the Kids: Surveillance and Market Research Strategies in Children’s Online Games’, Canadian Journal of Communication, vol. 30, no.4, pp. 527-548.