Responsibility in Software Engineering

AUTHOR

Thomas M. Powers (USA)

ABSTRACT

The literature in business ethics on the topic of individual and corporate responsibility is heavily indebted to H.L.A. Hart’s analysis of the concept of responsibility. That analysis leads to some interesting philosophical problems, e.g., that the corporation may not be responsible per se for the ill it does because it lacks rationality, intentionality, and hence moral capacity. It also leads to some dissatisfying results, e.g., that Ford’s responsibility for the many exploding Pinto automobiles is somehow diminished by additional causal factors in the crashes, beyond the obvious factor of Ford’s design. Ultimately, the only practical suggestion is to reduce the moral question to the legal one, and to formulate systems of legal liability to punish, after the fact, those who have been irresponsible. For the businessperson who wishes to act responsibly in releasing a product for sale, Hart’s analysis of responsibility may lead to paralysis; it may transform the moral agent into a mere legal subject.

Private sector software engineers and managers find themselves in a similar fix. Especially for those who produce safety-critical software in competitive markets, there is the Scylla of releasing poor-quality software too early and the Charybdis of releasing high-quality software too late. [McConnell, 1997] While there are technical tools (testing, formal methods), organizational procedures (peer review), and legal maneuvers (disclaimers and licensing agreements) to help in this decision, ethical models for proactive responsible behavior are often lacking. I will argue that the two primary views of responsibility, stemming from “Hippocratic” and Utilitarian theories, respectively, offer poor conceptions of responsibility. Ultimately I advocate a Kantian approach, supplemented by a game-theoretic conception of ethical codes put forth by the economist Kenneth Arrow. I intend my approach to appeal to people who are inclined toward the deontological view, but also to withstand objections from egoists and other consequentialists.

The Hippocratic moral view is essentially conservative and has one strict requirement: do no harm. Formal work in program verification, or proving programs “correct,” is the one great hope for this view of responsibility. Unfortunately, verification is at present infeasible for programs of even moderate complexity, and even if a program were provably correct in relation to its specification, we have no formal way of knowing that the specification is correct in relation to the world in which the program is to operate. [Cantwell Smith, 1985] Hence this absolutist view of responsibility would require that many good and reliable programs not be released. In effect, the verification method of the Hippocratic view turns the ethical question of responsibility into a computational one, and it is a question that software engineers must, for now, answer in the negative.
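To make the point concrete, here is a minimal sketch (a hypothetical example in Python, not from the paper) of the gap Cantwell Smith identifies. The function can be checked against its stated specification, but no such check touches the further question of whether the specification fits the world:

    def safe_dose_ml(weight_kg: float) -> float:
        """Spec: dispense 0.5 ml of drug per kg of body weight, capped at 50 ml."""
        # Precondition required by the specification.
        assert weight_kg > 0, "precondition: weight must be positive"
        dose = min(0.5 * weight_kg, 50.0)
        # Postcondition: the result always lies within the spec's bounds.
        assert 0 < dose <= 50.0, "postcondition violated"
        return dose

Verification can establish that the postcondition always follows from the precondition. It cannot establish that 0.5 ml/kg is the right clinical rule, or that the attached scale reports kilograms rather than pounds; those residual questions are worldly, not computational.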

The second method, cost-benefit analysis (CBA), is tied to a Utilitarian conception of responsibility. In particular, CBA becomes the method of choice for questions about acting responsibly from the Market Utilitarian perspective, since it resolves the measurement problem that has plagued Utilitarianism since Bentham’s announcement of the view. For safety-critical software engineering, I argue, the Utilitarian view leads to disastrous results, much as it did in Ford’s decision to release the Pinto, and much as it could be used to justify AECL’s software engineering decisions in the Therac-25 case. CBA, it seems, can easily become an unreflective, institutionalized decision procedure. Its aptness often turns on luck, market factors (such as time-to-market pressure), and an easy conflation of instrumental and intrinsic values. Probabilities can be introduced into the software engineer’s CBA [Fenton et al., 2001], but the allure of mathematical rigor is not to the point. For all types of software, there seems to be no connection between the adoption of the Utilitarian view of responsibility and the long-term improvement of software reliability, since market and legal factors play a larger role in the outcome of the CBA than does software quality. Essentially, the CBA method of determining responsibility teaches the software engineer the horrible lesson that the sales, marketing, and legal departments of the corporation are more crucial to success than software engineering itself.
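The structure of the calculation can be made explicit with a schematic comparison (the figures below are invented for illustration). Notice that the terms that decide the outcome are market and legal quantities; software quality enters only through a single estimated probability:

    # Expected cost of shipping now, with a known residual defect risk.
    p_failure = 0.01                    # estimated probability a field failure causes harm
    incidents = 10                      # projected number of field failures
    liability_per_incident = 2_000_000  # expected legal cost per harmful incident (USD)

    # Expected cost of delaying release to fix the defects.
    lost_revenue_per_month = 5_000_000  # time-to-market penalty (USD)
    months_to_fix = 2

    cost_ship_now = p_failure * incidents * liability_per_incident  # $200,000
    cost_delay = lost_revenue_per_month * months_to_fix             # $10,000,000

    # On these numbers the CBA says "ship," regardless of how the
    # engineers themselves judge the software's readiness.

This is, in outline, the same arithmetic that made the Pinto decision come out as it did.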

My approach to the responsibility issue is Kantian, but not thereby antiquated; nor is it absent from the computing world. It is reflected in recent developments in academic computing circles such as the Sustainable Computing Consortium [www.sustainablecomputing.org] and the Center for Empirically-Based Software Engineering (CeBASE) [www.cebase.org]. The focus of the Kantian account, following the procedure of the categorical imperative, is to think about universally acceptable rules for releasing imperfect software. These rules, at the same time, must never treat harm to individuals in a cavalier manner; they must, in Kantian language, treat end-users never merely as means but always as ends in themselves. [Kant, (1785) 1902] That is what it means for a Kantian to act responsibly. I go further and argue that the Kantian’s procedure for formulating and testing maxims of action mirrors some aspects of the early peer-review process for software design and development.

It follows, on this non-consequentialist view, that the act of releasing risky software which turns out to be de facto harmless is not thereby free from blame. When the probability of software failure drops to zero because the product is replaced by a new version or by a competitor, the responsibility question remains: the legal issue may be moot, but the ethical issue is not. By way of conclusion, I compare the Kantian account to a Rawlsian account [Rawls, 1971] and to Arrow’s account of social responsibility arising from mutually beneficial ethical codes in a game-theoretic context. [Arrow, 1951, 1973] I argue that the Kantian account is on good philosophical footing, and that it holds promise for the advancement of both computer science and the global computer industry.
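Arrow’s point can be put in a small game-theoretic sketch (the payoffs below are invented). Two firms each choose whether to honor a code of responsible release; without the code, cutting corners is each firm’s dominant strategy, while a code that attaches a sufficient reputational or contractual penalty to defection makes honoring it the equilibrium:

    # payoffs[(a_move, b_move)] = (firm A's payoff, firm B's payoff)
    payoffs = {
        ("honor", "honor"):   (3, 3),  # shared trust in reliable software
        ("honor", "defect"):  (0, 4),  # the defector wins the time-to-market race
        ("defect", "honor"):  (4, 0),
        ("defect", "defect"): (1, 1),  # race to the bottom
    }

    PENALTY = 2  # cost the ethical code attaches to defection

    def with_code(moves, cell):
        """Apply the code's penalty to any firm that defects."""
        return tuple(p - PENALTY if m == "defect" else p
                     for m, p in zip(moves, cell))

    for moves, cell in payoffs.items():
        print(moves, "->", with_code(moves, cell))
    # With the penalty in place, "honor" strictly dominates "defect"
    # for both firms: the code is mutually beneficial and self-stable.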

REFERENCES

Arrow, Kenneth J. “Alternative Approaches to the Theory of Choice in Risk-Taking Situations,” Econometrica, 1951.
__________. “Social Responsibility and Economic Efficiency,” Public Policy, 1973.

Cantwell Smith, Brian. “Limits of Correctness in Computers,” CSLI, 1985.

Fenton, Norman, Krause, Paul, and Neil, Martin. “A Probabilistic Model for Software Defect Prediction,” IEEE Transactions on Software Engineering, Sept. 2001.

Hart, H.L.A. Punishment and Responsibility. Oxford, 1968.

Kant, Immanuel. Grundlegung zur Metaphysik der Sitten (1785), in Kants gesammelte Schriften, vol. 4, Berlin, 1902.

McConnell, Steve. “Gauging Software Readiness with Defect Tracking,” IEEE Software, May/June 1997.

Rawls, John. A Theory of Justice. Cambridge, MA, 1971.