This paper addresses the crucial subject of Internet governance, focusing on technical regulation. It analyses a large sample of current off-the-shelf filtering services (software packages, server-based solutions, …) used in homes, schools and, increasingly, companies, from the unusual angle of the criteria (or categories) of filtering. The results of this study raise major ethical issues.
As a rule, the survey is based on the documentation published on the Web by the providers of the filtering services. Services aimed exclusively at businesses, or with insufficient documentation, are not considered.
The survey covers a sample of 45 filtering services. It focuses on access control to Internet sites (i.e. anything with a URL [Uniform Resource Locator]). From a technical point of view, this control can be applied either at the entry point or at the level of the content itself. At the entry-point level, filtering can be based on ratings (i.e. labelling) only (cf. mainly PICS [Platform for Internet Content Selection] ratings), on classifications into lists of URLs (generally ‘black’ lists or, sometimes, lists of suggested sites), or on both ratings and lists of URLs. Filtering at the content level implies that both rating and filtering are managed in real time by software.
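The entry-point approach described above can be sketched as a simple look-up of a URL's host in a ‘black’ list. This is a minimal illustration, not any vendor's actual implementation; the host names and category labels are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical 'black' list mapping host names to filtering categories;
# real products keep such lists far larger and, as noted below, secret.
BLACKLIST = {
    "example-gambling.com": "gambling",
    "example-violence.net": "violence",
}

def allowed(url: str) -> bool:
    """Entry-point filtering: block a URL whose host appears on the list."""
    host = urlparse(url).hostname or ""
    return host not in BLACKLIST

# allowed("http://example-gambling.com/page") -> False (blocked)
# allowed("http://example.org/") -> True (passed through)
```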
Fixing criteria for rating/classifying is not value-neutral, and rating/classifying itself can imply moral judgments. From an ethical point of view, it is thus very important that the final user (parent, teacher, …) can either do it themselves (though this could be a very difficult job) or find both criteria and a rating in accordance with their own value judgments. Thanks to PICS, users can choose their filtering service and their label sources independently, which can obviously improve the situation. PICS-based services are therefore distinguished here from the others.
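The separation PICS makes possible can be sketched as follows: a third-party label source assigns numeric levels per category, while the user independently sets their own thresholds. The category names and levels here are hypothetical, not drawn from any real PICS rating system.

```python
# A PICS-style label is a mapping from category names to numeric levels.
# Labels come from a rating service; thresholds come from the user —
# the two are chosen independently, which is the point of PICS.
third_party_label = {"sex": 0, "violence": 2}   # hypothetical label source
user_profile = {"sex": 1, "violence": 1}        # hypothetical user thresholds

def passes(label: dict, profile: dict) -> bool:
    """Allow a page only if every category the user cares about
    stays at or below that user's own threshold."""
    return all(label.get(cat, 0) <= limit for cat, limit in profile.items())

# passes(third_party_label, user_profile) -> False (violence 2 exceeds 1)
```

Because the filtering software only compares levels against the profile, the user can swap in a different label source — say, one published by a non-profit sharing their values — without changing the software.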
In the PICS rating services of the sample, the most frequent categories are: sex (6/9), violence (6/9), age (5/9), intolerance/hate speech (5/9), gambling (3/9), profanity (3/9) and language (3/9)… In the analysed black lists, the criteria are as follows: sex (15/20), age (10/20), intolerance/hate speech/racism (9/20), illegal/criminal/weapons/anarchy (9/20), violence (8/20), gambling (8/20), drugs (8/20), games/time waster/distractions/leisure (8/20),…
As regards the definition of criteria, the sample includes very different cases. The categories are generally completely predefined but, in a few filtering services, they must be totally or partially fixed by the final user. Who authors the predefined criteria is particularly revealing. In the studied sample, the criteria used to classify URLs are mostly (22/25) fixed by the commercial firm that provides the filtering. The ethical issues with this kind of service are obvious: users are bound to the subjective value judgments of that corporation. Apart from two Canadian corporations, all these firms (i.e. 23) are located in the U.S., frequently in California (7/23). With one exception, the criteria are defined only in English. Obviously, European users cannot find their cultural diversity reflected in this kind of filtering service. The situation is somewhat less negative with the nine analysed PICS services: five of them have their categories fixed by non-profit organisations. Moreover, four of them use categories defined outside the U.S. (2 in Canada, 1 in the U.K. and 1 in Italy [and thus written in Italian]).
But who performs the rating/classifying itself? For the predefined lists of URLs in the sample (where this information is available), the firms are always responsible for the classifications: these are performed most frequently by employees (usually with the help of software) and sometimes by proprietary software alone. Moreover, except for two services, the lists of URLs are kept secret by the corporations. These facts are ethically worrying. Again, the observations about the PICS rating services are less bleak. The sample includes three third-party PICS rating systems, two of which are independent of any firm. Moreover, this standard allows an interesting solution: self-rating (encountered in six analysed cases). On the other hand, from an ethical point of view, filtering at the level of the content is not the best solution: it always means that the rating is entirely in the hands of the algorithm's author, i.e. the company.
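The ethical concern with content-level filtering can be made concrete with a sketch of real-time rating by keyword scoring. Every parameter here — the word list, the weights, the cutoff — is hypothetical and, in a real product, would be fixed invisibly by the vendor's algorithm author rather than by the user, which is precisely the problem raised above.

```python
# Hypothetical word weights and cutoff: in a real product these are
# chosen by the company that writes the algorithm, not by the end user.
WEIGHTS = {"casino": 2, "bet": 1}
CUTOFF = 2

def blocked(page_text: str) -> bool:
    """Real-time content rating: score a page by weighted keyword
    occurrences and block it when the score reaches the cutoff."""
    words = page_text.lower().split()
    score = sum(WEIGHTS.get(w, 0) for w in words)
    return score >= CUTOFF

# blocked("visit our casino today") -> True  (score 2 >= 2)
# blocked("local weather report")   -> False (score 0)
```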
The current state of the filtering-service market should thus give all the people concerned food for thought regarding respect for individual values and for cultural diversity…