The “Human-Machine” Schism in STS with Implications for Software Accountability, Control and Design

AUTHOR
Stephen J. Lilley

ABSTRACT

This paper describes the debate in science and technology studies over whether humans and machines are essentially different. Conceptualizing humans and machines symmetrically or asymmetrically is presented as a crucial theoretical divide at the heart of disagreements over 1) who or what is accountable when software fails, 2) whether human oversight of software applications is necessary, and 3) whether to encourage or discourage attempts to incorporate values in the design of software.

Is it better to consider humans and machines as distinct entities having different properties, or should such divisions be done away with altogether? Harry Collins et al. present the strongest case for human-machine asymmetry, suggesting that machines can only behave, whereas humans are capable of intentional acts, some of which are standardized but others of which are socially nuanced and require immersion in a culture. As long as machines remain independent entities, in the sense of not being socialized into human culture, the divide between humans and machines will persist.

Writers often identified under the rubric of actor-network theory (for example, Bruno Latour, Michel Callon, and John Law) have been the most critical of Collins's distinctions between humans and machines, and they have developed an alternative approach that denies inherent differences. They suggest that the qualities that humans and machines take on vary with the context or the specific configuration of relations.

Actor-network theorists are more likely to see software failure as resulting from the failure or breakdown of a much larger network of heterogeneous agents (e.g., technicians, salespeople, non-digital machine parts, routines, regulators, etc.). What this suggests is distributive accountability. Collins et al., on the other hand, insist that humans are moral agents and machines are not, and that this is why humans are often (and should be) held more accountable.

In regard to human oversight of software, Collins advances a meta-view of humans and intelligent machines in which the latter are deemed important constituents of the social body but, lacking social fluency, are not conversant participants, let alone active promoters of this body. He criticizes confusion on this point and the tendency to anthropomorphize machines and to overlook the asocial character of even the most advanced artificial intelligence programs. In the end, Collins holds that humans must supervise smart machines if the latter are to play any constructive role in the social body.

Actor-network theorists argue that if machines have been placed in a subservient capacity, it is due to framing with a human bias; new potentials may emerge without such framing. These writers also criticize efforts to incorporate values in the very design of software and other technologies. Even if values could be incorporated in the design phase, the tendencies of drift and overflowing, and the unforeseen complexities of interrelations, would surely defeat the intentions of the designers. In contrast, Collins believes that human oversight of technology is inevitable, and he advocates a more democratic approach beginning at the earliest phases of design. Assuming an indeterminacy to technology, he argues, invites passivity and fatalism or, at the very least, allows dominant groups de facto control over the shaping of technology. An approach that instead recognizes the human imprint on technologies serves as a better framework for building technologies resonant with humanistic principles.

Although the work of Collins et al. is innovative, there is a certain familiarity in its humanism (i.e., human actions, human choice, and consequences for humans are primary concerns). Actor-network theory, whether through its symmetrical treatment of humans and machines or its emphasis on process and complex hybrids, represents a radical departure from humanism. In evaluating these diverse approaches, computer ethicists will also have to evaluate the importance of humanism to their own work.