Ethics on a Chip? Some General Remarks on DRM, Internet Filtering and Trusted Computing

AUTHOR
Andrea Glorioso

ABSTRACT

In a seminal article published in 2003, Edward Felten polemically compared a mythical system for Digital Rights Management (DRM from now on), which would be able to correctly capture, interpret and enforce all the subtleties of copyright law, to a “judge on a chip”. In so doing, Felten wanted to stress how, given current technology, granting DRM systems – or, one might reason by analogy, automated systems in general – the power to make on-the-fly decisions over complex legal issues (such as whether a particular act, in and by itself constituting a copyright infringement, would fall under the defence afforded by the “fair use” doctrine in the US) amounted to vesting such systems with the authority of the judicial branch in a democratic society. Even worse, these decisions are almost unavoidably taken ex ante, without the “judge” being able to “listen to the arguments” of each side – such as, in the case considered by Felten, the copyright holders and the supposedly infringing user – before reaching a decision.

Analysis of the relationship between automated enforcement systems, such as DRM systems, and “fair use” or similar doctrines and legal provisions has been substantial – but not conclusive, at least for the time being and the foreseeable future. Similar analyses, highlighting the legal problems that arise when applying automated systems to social interactions, have taken place in other fields as well – e.g. when discussing the legal implications of applying automated filters to Internet usage and Web navigation, restricting access to online resources or penalising its efficiency.

However, beyond the legal implications of using automated systems in the contexts described above – and in others that, for the sake of brevity, will not be discussed now – there is arguably a more subtle point to be considered, which touches upon one of the pillars of the legal system and, consequently, on how the law copes with automated systems: the ethical implications of using automated systems to regulate social interactions.

This contribution will focus on one specific “ethical implication”: whether, by putting in place automated systems that decide whether a particular human action should be “approved” or not, we are witnessing the birth of “ethics on a chip” – of a mythical “moral judge” that decides ex ante whether that particular human action falls within the acceptable realms of “ethical behaviour” in a given social environment. In short, the question is: “what are the ethics of creating automated ethics?”

More specifically, this contribution will analyse the way in which the usage of three particular technologies, which are nowadays mostly implemented through automated systems, might embed a particular form of ethics and, therefore, impose that ethics on the human beings who use them – often unknowingly or involuntarily: DRM systems, Internet filtering technologies and Trusted Computing.

While several forms of the “ethical implication” introduced here have already been apparent for some time – for example, the way in which DRM systems tend to impose a clear limitation on the acceptable level of “criticism” when its expression involves a copyright infringement, as might be the case for quotations or some forms of parody; or the way in which Internet filtering categorises (and blocks) as “pornographic”, “immoral”, “terrorist” or “pedophile” information and resources that ought to be analysed on a case-by-case basis – this contribution argues that we need a more coherent analytical approach, at least for the three technologies introduced above. This need is particularly relevant in light of the fact that these technologies are not independent of one another: one might very well imagine DRM systems based on Trusted Computing platforms and able to enforce their “ethics” not only on a local computer used by a single human being, but also on the networks – via the application of automated filters – through which several human beings communicate.

While this contribution is of a theoretical nature, and the discussion it wishes to stimulate should arguably be independent of any particular way in which the technologies under examination are or will be implemented – as long as they share the fundamental characteristic that is here considered problematic, i.e. being automated – practical examples will be used throughout the discussion as an aid. In the end, however, to answer the question raised in this contribution, it is necessary to go beyond a detailed analysis of any particular technology and pose the more fundamental question of whether “ethics without choice” – or, more precisely, an ethics where human choice is severely limited by the architecture of an automated system – still deserves to be called “ethics”.

[1] See E. Felten, A skeptical view of DRM and fair use, in 46(4) Communications of the ACM, ACM, 2003: pp. 56-59.

[2] Various definitions of Digital Rights Management systems have been used over time, often hindering a clear and coherent discussion on the pros and cons of the “DRM approach” as a whole. In this contribution I will refer mainly to the so-called “NIST definition”, according to which DRM systems are “systems of information technology (IT) components and services along with corresponding law, policies and business models which strive to distribute and control intellectual property (IP) and its rights” (G.E. Lyon, The Internet Marketplace and Digital Rights Management, NIST Software Diagnostic and Conformance Testing Division – 897, paper presented at the Conference on Infrastructure for e-Business, e-Education and e-Science on the Internet (6-12 August 2001, L’Aquila, Italy), 2001).

[3] “A DRM system that gets all fair use judgements right would in effect be a ‘judge on a chip’ predicting with high accuracy how a real judge would decide a lawsuit challenging a particular use. Clearly, this is infeasible with today’s technology. […] If our technologies can’t make the fair use judgement correctly in every case, perhaps they can get it right most of the time. Perhaps they can enforce some approximation of the law. The challenge in doing this lies again in the difficulty of internalizing the four-factor fair use test so a program can evaluate it. The true result of the test relies on economic analysis and on factors outside the computer that are not easily measured (such as the social context in which a use occurs). In some respects, the fair use test seems designed to frustrate attempts to computerize it” (E. Felten, A skeptical view of DRM and fair use, p. 58, supra at n. 1 – emphasis added).
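To make Felten's point concrete, the following minimal sketch (in Python, purely for illustration) shows the kind of hard-coded, ex ante approximation of the four-factor test that a DRM system would have to rely on. The class names, the choice of machine-measurable factors and the numeric thresholds are all invented for this example; they are not drawn from Felten's article or from any real DRM product.

```python
# Illustrative sketch only: a deliberately naive, hypothetical approximation of the
# four-factor fair use test, of the kind a DRM system would have to hard-code ex ante.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class UseRequest:
    is_commercial: bool        # factor 1: purpose and character of the use
    work_is_creative: bool     # factor 2: nature of the copyrighted work
    fraction_copied: float     # factor 3: amount and substantiality (0.0 - 1.0)
    # factor 4 (effect on the market) is simply not measurable from inside the
    # device, so this sketch cannot represent it at all.

def allow_use(req: UseRequest) -> bool:
    """Ex ante decision: fixed thresholds stand in for a judge's weighing of context."""
    if req.is_commercial and req.fraction_copied > 0.1:
        return False
    if req.work_is_creative and req.fraction_copied > 0.3:
        return False
    return req.fraction_copied <= 0.3

# A parody quoting 40% of a creative work is refused here, although a court weighing
# purpose, context and market effect might well find it to be fair use.
print(allow_use(UseRequest(is_commercial=False, work_is_creative=True, fraction_copied=0.4)))
```

The sketch decides before any use takes place and never hears either party, which is precisely the structural feature Felten's “judge on a chip” metaphor targets.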

[4] Although fair use, as a legal doctrine allowing “limited use of copyrighted material without requiring permission from the rights holders, such as use for scholarship or review […] [and providing] for the legal, non-licensed citation or incorporation of copyrighted material in another author’s work under a four-factor balancing test” (see http://en.wikipedia.org/w/index.php?title=Fair_use&oldid=192802994), is specific to the US legal system, the general principle underlined by Felten and taken as a basis for the analysis of this contribution is not US-centric, either when focusing specifically on copyright law (as most countries have similar provisions, as is the case of the “exceptions and limitations” to the exclusive rights of the author in the Italian legal system) or, more to the point, when applying the principle to other legal fields or extending it beyond legal analysis, as is the case for this contribution.

[5] See inter alia D.L. Burk-J. Cohen, Fair Use Infrastructure for Rights Management Systems, in 15(1) Harvard Journal of Law & Technology, 2001; S. Bechtold, The Present and Future of Digital Rights Management – Musings on Emerging Legal Problems, in Eberhard Becker et al. (eds.), Digital Rights Management – Technological, Economic, Legal and Political Aspects, Springer, Lecture Notes in Computer Science 2770, November 2003; J.S. Erickson, Fair Use, DRM, and Trusted Computing, in 46(4) Communications of the ACM, ACM, 2003; P. Samuelson, DRM {and, or, vs.} the Law, in 46(4) Communications of the ACM, ACM, 2003; T.K. Armstrong, Digital Rights Management and the Process of Fair Use, in 20(3) Harvard Journal of Law & Technology, 2006; I. Brown (ed.), Implementing the European Union Copyright Directive, FIPR, 2003; S. Dusollier, Fair Use by Design in the European Copyright Directive of 2001, in 46(4) Communications of the ACM, ACM, 2003; U. Gasser-M. Girsberger, Transposing the Copyright Directive: Legal Protection of Technological Measures in EU-Member States, Berkman Center for Internet and Society, Berkman Publications Series No. 2004-10, 2004; G. Westkamp (ed.), The Implementation of Directive 2001/29/EC in the Member States, Study MARKT/2005/07/D, Part II, February 2007 (in particular sec. I.B, sec. I.D.III, sec. I.D.V, sec. I.D.VI and part III); G. Mazziotti, EU Digital Copyright Law and the End-User, Springer, Berlin, forthcoming, 2008.

[6] See inter alia the reports of the project “OpenNet Initiative” (http://opennet.net/). Drawing from the taxonomy developed by the OpenNet project, in this contribution I will mostly refer to the cases of “technical blocking”, i.e. using automated systems to decide whether a particular Internet resource should be made accessible or not. The other cases discussed by the OpenNet project, namely “search results removal”, “take-down” and “induced self-censorship”, appear less relevant for the conceptual framework I propose to develop, although all of them – and particularly the last, “induced self-censorship” (i.e. “encouraging self-censorship both in browsing habits and in choosing content to post online […] through the threat of legal action, the promotion of social norms, or informal methods of intimidation [as well as] [t]he perception that the government is engaged in the surveillance and monitoring of Internet activity, whether accurate or not, [which] provides another strong incentive to avoid posting material or visiting sites that might draw the attention of authorities”) – are indeed relevant in a more general discussion on the ethics of ICT-based practices. See http://opennet.net/about-filtering for further discussion on the taxonomy proposed by the OpenNet project.
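As an illustration of how crude “technical blocking” can be in practice, the following sketch (in Python, purely for illustration) shows a minimal category-based filter of the kind the OpenNet taxonomy describes. The host names, category labels and blocklist contents are invented for this example and do not reproduce any real filtering product or national list.

```python
# Illustrative sketch only: a minimal, hypothetical "technical blocking" filter.
# The blocklist entries and category labels below are invented.

from typing import Optional, Tuple

BLOCKLIST = {
    "example-news-site.org": "terrorist",       # a reporting site swept into a category
    "example-health-forum.net": "pornographic", # a medical forum mislabelled wholesale
}

def decide(url_host: str) -> Tuple[bool, Optional[str]]:
    """Ex ante decision: the resource is blocked if its host carries a category label,
    with no case-by-case assessment of the actual content being requested."""
    category = BLOCKLIST.get(url_host)
    return (category is None, category)

allowed, category = decide("example-news-site.org")
print(allowed, category)   # False "terrorist" - the user learns neither why, nor by whom
```

The point is structural rather than technical: the categorisation happens once, in advance, for an entire host, which is exactly the case-by-case assessment the main text argues is being displaced.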

[7] The debate on the so-called principle of “Network Neutrality” is another case in which the legal legitimacy of Internet Access Providers penalising (slowing down or blocking tout court) access to certain Internet resources has been questioned. However, the debate so far does not seem to have focused specifically on the implications of (some) Internet Access Providers using automated systems to achieve their goals, but rather on more general aspects related to freedom of expression (see supra n. 6), competition law (T.R. Beard-G.S. Ford-T.M. Koutsky-L.J. Spiwak, Network Neutrality and Industry Structure, in 29 Hastings Communications and Entertainment Law Journal, 2007, p. 149; M. Cave-P. Crocioni, Does Europe Need Network Neutrality Rules?, in 1 International Journal of Communication, 2007, pp. 669-679) and the economic analysis of innovation dynamics (M.A. Lemley-L. Lessig, The End of End-to-End: Preserving the Architecture of the Internet in the Broadband Era, UC Berkeley Law & Econ Research Paper No. 2000-19; Stanford Law & Economics Olin Working Paper No. 207; UC Berkeley Public Law Research Paper No. 37, 2000; T. Wu, Network Neutrality: Competition, Innovation, and Nondiscriminatory Access, Working Paper, 2006; S. Crawford, Internet Think, in Journal of Telecommunications and High Technology Law, 2007).

[8] This contribution will not discuss the general relationship between ethics and the law, i.e. whether the law should promote any particular kind of ethics. It will, however, touch upon the issue of whether the law should promote a vision of ethics that deprives human beings of the possibility to choose a particular course of action – thereby, one might say, depriving the concept of “ethics” of its very foundation.

[9] Obviously, the fact that the systems discussed in this article are being used to regulate activities which are or might be conducted via the Internet raises another issue, i.e. which “social environments” should provide the rules to be taken as a reference when assessing human behaviour in a global environment that interconnects very different cultures and social groups. While the problem exists and does not easily lend itself to clear-cut answers à la “information wants to be free”, this contribution will not delve too deeply into it, mainly because its aim is to discuss more fundamental – and, one might say, abstract – questions related to the ethics of using automated systems for assessing the ethics of a human action. Answering these questions might very well lead to the conclusion that even though some form of filtering might be desirable, due to the differences among social groups and the desire to shield the members of one social group from the “damaging” cultural expressions of another, implementing such filtering through automated systems would be unacceptable even for the social group that wants to be shielded.

[10] In this contribution the definition of “technology” introduced in R.G. Lipsey, K. I. Carlaw, C. T. Bekar, Economic Transformations – General Purpose Technologies and Long Term Economic Growth, Oxford University Press, 2005, p.58, will be used, i.e. “the set of ideas specifying all activities that create economic value […] comprising (1) knowledge about product technologies, the specifications of everything that is produced; (2) knowledge about process technologies, the specifications of all processes by which goods and services are produced; (3) knowledge about organisational technologies, the specification of how productive activity is organised in productive and administrative units for producing present and future goods and services”.

[11] See supra n. 3.

[12] See supra n. 6.

[13] Trusted Computing “is a technology […] through which the computer will consistently behave in specific ways, and those behaviours will be enforced by hardware and software. Enforcing this Trusted behaviour is achieved by loading the hardware with a unique ID and unique master key and denying even the owner of a computer knowledge and control of their own master key. Trusted Computing is extremely controversial as the hardware is not merely secured for the owner; enforcing Trusted behaviour means it is secured against the owner as well” (see http://en.wikipedia.org/w/index.php?title=Trusted_Computing&oldid=193004477). For more information, see the specifications available on the web site of the Trusted Computing Group at https://www.trustedcomputinggroup.org/specs/.
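The following toy sketch (in Python, for illustration only) models the “enforced behaviour” aspect described above: content is released only to a software stack whose measurement appears in a set of approved values that the owner of the machine cannot extend. It does not use the actual TCG specifications or any real TPM interface; the hashes, identifiers and function names are all hypothetical.

```python
# Conceptual sketch only: a toy model of attestation-gated behaviour, loosely inspired
# by the Trusted Computing idea of enforcing specific behaviours. It does NOT implement
# the TCG specifications; all values and names are invented for illustration.

import hashlib
from typing import Optional

# Measurements that the remote verifier (not the owner) is willing to accept.
APPROVED_MEASUREMENTS = {
    hashlib.sha256(b"vendor-approved-player-v1").hexdigest(),
}

def measure(software_image: bytes) -> str:
    """Hash of the software stack, standing in for a platform measurement."""
    return hashlib.sha256(software_image).hexdigest()

def release_content(software_image: bytes, content: bytes) -> Optional[bytes]:
    """Content is released only to an 'approved' stack; the owner of the machine
    cannot add their own software to the approved set."""
    if measure(software_image) in APPROVED_MEASUREMENTS:
        return content
    return None   # an owner-modified player is refused, even on the owner's own hardware

print(release_content(b"owner-modified-player", b"some licensed work"))  # None
```

Combined with the DRM sketch above, this illustrates the scenario evoked in the main text: an ex ante decision rule whose enforcement the user cannot inspect or override, because the platform itself refuses any non-approved configuration.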

[14] See supra n. 5.

[15] On September 10, 2007, Franco Frattini, Vice-President of the European Commission and Commissioner for Justice, Freedom and Security, declared that he intended “to carry out a clear exploring exercise with the private sector … on how it is possible to use technology to prevent people from using or searching dangerous words like bomb, kill, genocide or terrorism” (Reuters, Web search for bomb recipes should be blocked: EU, 10 September 2007). Mr. Frattini does not seem to have advanced any concrete proposal after his declaration, and to be fair (and optimistic) it is questionable whether the goal he implied for his “exploratory exercise” was the creation of automated systems to filter/censor access to “terrorist” information or rather a fast lane for law enforcement agencies to block certain web-sites ex post.

[16] In Italy, the Decree of 8 January 2007 of the Minister of Communications (“Technical requirements of filtering tools that providers of connectivity to the Internet must use, with the goal of inhibiting, pursuant to existing law, access to web-sites as indicated by the National Center for Fighting Paedopornography”) obliges all Internet Access Providers to routinely check a list provided by the Ministry of Interior and automatically block access to any web-site contained in that list within 6 hours of receiving the list. The decree contains no provision on who should be able to access that list – for example, to question the inclusion of a particular web-site – nor on how the owners of a web-site that ended up on the list could request its removal.
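As an illustration of how mechanical such an obligation is in practice, the sketch below (in Python, purely illustrative) models a provider-side loop that periodically fetches the ministerial list and blocks every listed site within the six-hour window. The host names, function names and data format are hypothetical; the decree does not prescribe any particular implementation.

```python
# Illustrative sketch only: a hypothetical provider-side compliance loop for a
# ministerial blocklist, of the kind the Italian decree described above implies.
# All host names and helper functions are invented for this example.

import time

REFRESH_SECONDS = 6 * 60 * 60   # the six-hour compliance window set by the decree

def fetch_ministerial_list():
    """Hypothetical placeholder for retrieving the list from the National Center."""
    return {"blocked-example-1.example", "blocked-example-2.example"}

def apply_blocks(hosts):
    """Hypothetical placeholder for pushing block rules into the provider's DNS or
    routing layer. Note that no review or appeal step appears anywhere in the loop."""
    for host in sorted(hosts):
        print(f"blocking {host}")

def run_forever():
    """A real deployment would run this as a daemon, refreshing every six hours."""
    while True:
        apply_blocks(fetch_ministerial_list())
        time.sleep(REFRESH_SECONDS)

# A single pass, for demonstration purposes.
apply_blocks(fetch_ministerial_list())
```

The absence of any review or appeal hook in the loop mirrors the absence of such provisions in the decree itself, which is the point this footnote is making.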