Ethics in Computer Engineering


Wade L. Robison
Ezra A. Hale Professor in Applied Ethics
Rochester Institute of Technology
Rochester, NY 14623


The intellectual core of engineering is a design solution to a particular kind of problem, and ethical issues are internal to that intellectual core. Teaching ethics to computer engineering students thus requires nothing more than an understanding of the intellectual core of engineering itself, and thus of how design solutions are themselves ethically loaded.

Ethics is already in engineering and only needs to be revealed. Once revealed as internal to any design solution, ethical issues can become a subject of explicit discussion within the intellectual core of engineering itself.

We might imagine, using the Cartesian conceit, an evil genius of an engineer who makes our lives miserable by designing artifacts which mislead even the most intelligent, the most highly trained, and the most highly motivated among us into mistakes and which, in the most diabolically perverse cases, seduce us into producing the opposite of what the design led us to expect.

We are all familiar with such artifacts in our daily lives — stove tops whose design misleads us into turning on a burner other than the one we intend, doors that entice us into pushing when we must pull them open, and so on. We may have perverse engineers in our midst, but, far more likely, we have engineers who do not think through the implications of their design solutions for those who are to use the artifacts created in accordance with them.

Whenever there is a mechanical or technical failure, there is always a question of where responsibility ought to rest. A pilot who takes off on the wrong runway and so causes a catastrophic crash will be exonerated if something was amiss that would lead even an intelligent, well-trained, and highly motivated pilot to turn onto the wrong runway. But if nothing was amiss, we must wonder about the pilot. Did the pilot make a stupid mistake? Was the pilot inexperienced, as in the Kennedy case? Was the pilot not motivated to survive, as suspected in the Egyptian air disaster?

These two variables, the nature of the situation and the nature of the operator, play against each other, with perfection in the one calling the other into question. For a foolproof design to fail, it must meet a superior fool, and for the most motivated of the best of the best to fail, something problematic about the design must have misled them. The 'must' here indicates a conceptual necessity.

It can only be a heuristic ideal for engineers to solve a design problem with something so wonderfully perfect that it would take a perfect fool to misuse it. There are too many ways in which we humans can be foolish, ignorant, and unmotivated, and the bar would be too high if engineers had to design so as to make the consequences of all our human failings impossible. But it is a heuristic ideal, and an ethical one. For failures built into the design solution itself by, say, a perverse engineer or a careless one will have consequences, some no doubt unexpected, some almost certainly harmful.

It is a harm, although generally not a major one, to push against a door to open it when it must be pulled. It can cause grievous harm to turn on the wrong burner of a stove. And errors in computer programs can cause untold disasters: from directing an airplane, dependent upon the program for navigation, into a mountainside, as happened in Colombia, to causing the loss of data for a long-term experiment, to producing blackouts in the electrical grid, with all the attendant costs to industry and individuals, to mistakenly reading a signal as a launch calling for nuclear retaliation.
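The point about programs misleading even careful users can be made concrete in code. The following is an illustrative sketch only; the function and type names are hypothetical and not drawn from any of the incidents mentioned. It contrasts an interface that leaves units implicit, and so invites exactly the kind of error an intelligent, well-trained user can make, with one whose design makes that error impossible to express silently:

```python
from dataclasses import dataclass

# A design that invites error: nothing tells the caller whether the
# altitude is in feet or meters, so even a careful user can guess wrong.
def set_target_altitude(value: float) -> float:
    return value  # feet? meters? the design is silent

# A design that forecloses the error: the unit is part of the interface,
# so a bare number or the wrong unit is a visible mistake, not a silent one.
@dataclass(frozen=True)
class Feet:
    value: float

@dataclass(frozen=True)
class Meters:
    value: float

    def to_feet(self) -> "Feet":
        return Feet(self.value * 3.28084)

def set_target_altitude_explicit(altitude: Feet) -> float:
    return altitude.value

# The caller must now say what they mean:
target = set_target_altitude_explicit(Meters(3000).to_feet())
```

The second design does not make the engineer's task harder; it moves the burden of remembering a convention out of every future user's head and into the design itself, which is the point of the heuristic ideal discussed above.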

Indeed, as the world becomes more and more dependent upon technology and, in particular, upon computer programs and thus upon computer programmers, and as interconnections between the various parts of this complex computer infrastructure increase, what could have been minor failings can take on catastrophic proportions. Recognizing that ethical considerations are part of the intellectual core of engineering ought to change the way we teach ethics within computer engineering and decrease the likelihood of graduating perverse or indifferent engineers.

The phrase ‘teaching ethics to computer engineering students’ implies for some a particular view of the relations between technological and scientific disciplines on the one hand and ethics on the other. It is a view readily found in most introductory engineering books, where a chapter or a section on ethics is mandatory but sits at the end of the book, or at least in a separate section, easily bypassed, relegated by its placement to a side issue distinct from the ‘real business’ of engineering. It is a view that leaves open the all-too-common response, ‘I’m not responsible for what people do with what I design.’ But if an engineer’s design invites harm when it need not, then, indeed, the engineer is responsible. A code is a contingent string, and no code has to be written so as to cause harm. It may turn out to be written that way, for there are too many untoward variables for anyone to guard against them all. But it is incumbent upon engineers to do what they can to achieve the heuristic ideal of perfection, and it is incumbent upon us as educators to teach our students that they have an obligation, internal to the intellectual core of their discipline, to strive for that ideal.
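One small, hypothetical illustration of the difference between a design that invites harm and one that need not; the names below are invented for the example. The first interface silently replaces a failed sensor reading with zero, so downstream code cannot distinguish a real reading from a failure; the second makes the failure impossible to ignore:

```python
# Invites harm: a missing reading becomes a plausible-looking value,
# and the failure disappears into the data.
def read_pressure_silent(raw):
    return float(raw) if raw is not None else 0.0

# Need not: the interface forces the caller to confront the failure
# and decide, explicitly, what a missing reading should mean.
def read_pressure_explicit(raw):
    if raw is None:
        raise ValueError("pressure sensor returned no reading")
    return float(raw)
```

Neither version is more difficult to write; the difference lies entirely in whether the designer thought through what the artifact leads its users to do, which is the obligation argued for here.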