Interactive to proactive: Computer Ethics in the past and the future

Leoni Venter, MS Olivier and JJ Britz


The integration of mobile technology, wireless networks, ubiquitous computing and artificial intelligence with thousands of embedded devices such as sensors and actuators may result in networks that proactively monitor and respond to human behaviour without human interaction or supervision. Decisions that can influence or alter the environment will be made at faster-than-human speeds. This technology could have very positive uses that enhance human life, but it can also be misunderstood or misused, creating ethical issues that need to be addressed.

Computer systems as we know them are designed to interact with their human users. Personal computers with graphical user interfaces spend most of their CPU time idling, waiting for the user to initiate an action or respond to a computer event. These systems are configured, maintained and used by human users.

With the proliferation of small computing devices such as cellular phones, personal digital assistants (PDAs) and smart cards, people carry on their person many devices capable of communicating with other devices over wireless networks. Further development of small processing chips allows computing devices to be embedded into almost anything. Smart clothes, wearable computers and the like will make ubiquitous computing a reality, with a person interacting with hundreds or even thousands of devices embedded in his or her environment.

Interaction on a one-to-one basis with devices on this scale is humanly impossible: a single user cannot possibly configure and maintain every embedded device manually. The ideal, then, is that the devices configure themselves and decide for themselves what actions should be taken in a given situation.

This leads to the concept of proactive computing (Tennenhouse, 2000). Proactive computing envisions networks of computing devices, sensors and actuators that dynamically configure and maintain themselves, monitor the environment, respond or adapt to it, and may even change the environment itself. These networks of devices will operate without human supervision. As such they will no longer be interactive and can therefore operate at faster-than-human speeds.

Current interactive computers are normally used for personal purposes such as communication, making money and doing work. Proactive systems, on the other hand, will be concerned with the environment of their users. Such systems would be able to influence the environment in ways that could help or hurt users.

Proactive computing is often touted as a technological advance that will benefit the wellbeing of people (Noury, 2003). If this is the case, it shares a purpose with medical science, which is likewise concerned with the wellbeing of patients. If one accepts purpose, or telos, as the essence of ethics, the well-known clinical ethical principles, viz. autonomy, beneficence, non-maleficence and justice, might also apply to proactive computing. However, using teleological reasoning alone to establish this relationship is insufficient. If Aristotelian ethics is applied to equipment, it implies that virtues also have to be ascribed to such equipment. Our paper argues that this potentially contentious notion indeed has merit, but it draws much wider parallels between proactive computing and clinical care. These parallels include the balance between the ability to do good and the potential to do harm. The paper will argue that in both cases the individual is vulnerable, in addition to expecting benefit. The remainder of the paper then proceeds on the basis that transferring the clinical ethics principles to proactive computing is valid, and evaluates proactive systems in terms of autonomy, beneficence, non-maleficence and justice.

In a nutshell, the principles cover the following. Autonomy refers to the right of the user to decide what should happen as a result of a given situation. Beneficence is the expectation that the computer will be used to do good; non-maleficence is the expectation that the computer will not be used with bad intent; justice is the expectation that the use of the computer will be fair.

Our paper argues that these principles are all necessary, in the sense that absence of any one of the principles may lead to abuses. This is demonstrated with suitable examples.

This leads to an obvious dilemma: if (individual) autonomy is a necessary principle and proactive computing makes decisions at faster-than-human speeds, proactive computing seems to be inherently unethical. We address this by revisiting the requirement of autonomy. In doing so, we accept Lyotard’s (1991:2) concern: “what if what is ‘proper’ to humankind were to be inhabited by the inhuman?” We conclude that proactive computing can only be considered ethically acceptable when it does not transgress this boundary. However, we also demonstrate that this boundary is fuzzy and hard to characterise in practice.

This problem forms part of a larger problem discussed in the final part of our paper. We argue that clinical ethics principles have practical value (even though they are not perfect) because they have been institutionalised by the community at large and internalised by practitioners. Examples of such ‘institutions’ include professional bodies, clinical review boards and government agencies. Our argument that these principles have been internalised is more empirical, resting on anecdotal evidence. In the field of computing, however, it is simple to demonstrate a lack of institutionalisation and internalisation of such ethical principles. In fact, sufficient examples can be cited that indicate resistance from the community at large to institutionalising such principles in the field of computing.

We therefore conclude that, while well-known principles exist for the application of proactive computing, a significant change in the community is required before such principles can be applied in any practical sense.

When we look at proactive computing, we discover that although its beneficent, non-maleficent and just use can be envisioned, autonomy for the user is not possible, since the systems will operate without human supervision.

In the medical world there are many professional societies, governing bodies and control councils that ensure medical practitioners and researchers act in an ethical manner. New drugs must be approved and are only released on the market after thorough testing to ensure that they do no harm.

Despite many years of discussion and debate about computer ethics (Bynum, 2001), there is still no standard set of rules or guidelines, and certainly no easy answers to the ethical issues posed by ordinary computer use. Proactive systems will only add to these issues, and if there is no ethical guidance in the deployment and operation of such systems, chaos may ensue.

Given the history of “computer ethics” with its lack of clear results, it looks as if the future may be quite complicated.


Lyotard, J.F., The Inhuman: Reflections on Time, Stanford University Press, 1991.

Noury, N. et al., New trends in health smart homes, ITBM-RBM, vol. 24, no. 3, pp. 122-135, June 2003.

Tennenhouse, D., Proactive Computing, Communications of the ACM, vol. 43, no. 5, pp. 43-50, 2000.