Open-source-produced encyclopedias: Towards a broader view of expertise

AUTHOR
Paul B. de Laat

ABSTRACT

Open source as a method of production has spread from software to other types of content, such as reference works. In previous work (de Laat 2011) six online encyclopedias of this kind were analyzed. The strongest clash of opinions concerned the role of experts and expertise in creating an encyclopedia. At one end of the scale we find Encyclopedia of Earth and Scholarpedia, which emphasize the crucial role of expertise; as a result, the creation and moderation of evolving articles is largely reserved for experts. At the other end we find Wikipedia, which de-emphasizes the role of experts; the editing process is open to anybody, without distinction.

Wikipedia is currently more successful than all other encyclopedias of this kind: the total number of entries runs into the millions (for the English version alone). More importantly, the quality of Wikipedian entries is astonishingly high. That, at least, would seem to be indicated by several investigations that compared Wikipedia with classic encyclopedias (Britannica, Encarta and Brockhaus) on the same selection of scientific subjects. Experts rated articles from these sources as being of about equal, though varying, quality. Wikipedia thus passed a kind of Turing test: experts were unable to reliably distinguish between Wikipedia and the other encyclopedias.

How is this astonishing finding to be explained? One possible explanation is to argue that although amateurs predominate, some proper experts have come along and made all the difference. This sounds rather unlikely, since reportedly very few experts participate at all. Another explanation argues that some of the laymen involved have become proper experts themselves while working on the entries. This, too, seems far-fetched, as most expertises require considerable investments of time and energy for their acquisition. Therefore I want to explore a third kind of explanation: the conjecture that amateurs, through continuous discussion in wiki spaces, may acquire enough capabilities to produce reference articles of high quality. That is, they do not become true experts able to actually do the science involved; they only become ‘conversational’ partners of the experts involved. That is enough, however, for Wikipedian entries to pass the Turing test for quality described above.

Cognoscenti will recognize here, of course, the concept of ‘interactional expertise’ as coined by Collins and Evans (2007). They argue that between ubiquitous expertises (such as the popular understanding of science) and the specialist expertise capable of actually doing the science involved (‘contributory expertise’), another specialist expertise can be identified: interactional expertise. Its possessors can engage in intelligent conversation about the domain involved without, for that matter, being able to contribute to it. Linguistic competences are developed, not practical ones. This kind of expertise is the province of research journalists, scientists involved in peer review, and the like.

To what extent is this a plausible conjecture about the success of Wikipedia? An argument in favour is that wiki software links a discussion page to each textual entry, and sometimes lengthy discussions take place there. This is a clear indication of linguistic exchanges that may contribute to developing interactional capabilities. On the other hand, nasty edit wars may erupt between competing factions, and as a result learning processes will suffer (cf. Sanger 2009). Also, real experts may simply stay away from an entry, ruling out most of the needed linguistic interactions. Finally, some expertises may just be too hard to become acquainted with.

This conjecture can usefully be connected with discussions about the quality of Wikipedian articles (for more details see de Laat 2011). On the one hand, we observe fierce debate inside Wikipedia about how to uphold quality in the face of so-called vandalism. One démarche under consideration is a system of review: each and every edit is to be scrutinized for vandalism before insertion into the public version of the entry involved. Which criteria are to be used in designating reviewers? In the German Wikipedia (where such a review system has been in operation for two years now), registered users are considered fit for the surveying job once they have been active for 60 days and have performed at least 300 edits. So a high edit count is the main criterion. I will argue that it is intended to indicate not so much a kind of expertise (editing expertise, say) as loyalty and dedication to the Wikipedian enterprise: moral, not epistemological, qualities are gauged.
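By way of illustration, the following is a minimal sketch in Python of the eligibility rule just described. The function and variable names, and the reading of ‘active’ as account age, are assumptions made for the example; this is not the actual MediaWiki flagged-revisions implementation.

```python
from datetime import date, timedelta
from typing import Optional

# Thresholds as described above for the German Wikipedia review system;
# everything else here is an illustrative assumption.
MIN_DAYS_ACTIVE = 60
MIN_EDIT_COUNT = 300

def eligible_as_reviewer(registered_on: date, edit_count: int,
                         today: Optional[date] = None) -> bool:
    """Return True if a registered user meets both threshold criteria."""
    today = today or date.today()
    active_long_enough = (today - registered_on) >= timedelta(days=MIN_DAYS_ACTIVE)
    return active_long_enough and edit_count >= MIN_EDIT_COUNT

# Example: registered 90 days ago with 450 edits -> eligible
print(eligible_as_reviewer(date.today() - timedelta(days=90), 450))  # True
```

The point of the sketch is simply that the rule is a pair of quantitative thresholds: nothing about the content of the edits is inspected.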

On the other hand, a burgeoning research stream in computer science is also targeting the quality problem. The leading approach is to construct computational metrics that purportedly measure the credibility of entries. A promising method is based on their revision histories (as available on Wikipedian servers) and focuses on the survival of individual edits over time. Each round of editing is seen as casting a vote on the edits then in view: the longer edits remain intact, the more both the credibility of the text and the reputation of its author as a capable contributor rise (and vice versa). So the measure of author productivity suggested here is edit longevity. Possibly it will be used in the future for appointing reviewers who judge quality proper. I will argue that such author reputation, targeting the epistemic qualities of authors, indicates precisely the Collins-and-Evans mid-category of interactional experts. We cannot disentangle, of course, whether we are dealing with interactional or possibly contributory experts; they will exhibit, so to speak, the same linguistic behaviour (cf. the transitivity of expertises).
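To make the mechanism concrete, here is a minimal Python sketch of the edit-longevity idea, not any particular published metric. It assumes a strongly simplified representation: each revision is reduced to a set of token identifiers, a token is credited to the author of the revision in which it first appears, and an author's ‘reputation’ is simply the average number of later revisions their tokens survive.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

def edit_longevity(history: List[Tuple[str, Set[str]]]) -> Dict[str, float]:
    """
    history: chronologically ordered revisions, each a pair
    (author, tokens present in that revision).
    Returns each author's average longevity: the mean number of later
    revisions in which the tokens they introduced are still present.
    """
    # Credit each token to the author of the revision that first contains it.
    introduced: Dict[str, Tuple[str, int]] = {}   # token -> (author, revision index)
    for i, (author, tokens) in enumerate(history):
        for t in tokens:
            introduced.setdefault(t, (author, i))

    # Count, for every introduced token, how many later revisions keep it.
    lifetimes: Dict[str, List[int]] = defaultdict(list)
    for t, (author, i) in introduced.items():
        survived = sum(1 for _, later in history[i + 1:] if t in later)
        lifetimes[author].append(survived)

    return {author: sum(ls) / len(ls) for author, ls in lifetimes.items()}

# Toy example: Alice's token ‘t2’ is removed by Carol, while ‘t1’ survives
# throughout; Bob's addition ‘t3’ survives Carol's revision.
history = [
    ("alice", {"t1", "t2"}),
    ("bob",   {"t1", "t2", "t3"}),
    ("carol", {"t1", "t3"}),
]
print(edit_longevity(history))   # {'alice': 1.5, 'bob': 1.0}
```

Even in this toy form, the logic of the metric is visible: each subsequent revision implicitly ‘votes’ on earlier contributions, and reputation accrues to authors whose contributions keep being retained.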

If this interactional view of Wikipedian policy is correct, it reflects on the editorial policies used by some of the other online encyclopedias. It is precisely the egalitarian approach that seems suitable for nurturing such competences. This is neglected by Scholarpedia and Encyclopedia of Earth: they only admit recognized (contributory) experts. As a result, interactional experts, whether nascent or accomplished, and their possible contributions are simply excluded. With Citizendium prospects are better: anyone is admitted. But whether a suitable learning process comes to fruition or is nipped in the bud depends entirely on the leadership style of the ‘moderating’ (contributory) expert.

REFERENCES

H. Collins and R. Evans. 2007. Rethinking expertise. Chicago and London: The University of Chicago Press.

P.B. de Laat. 2011. Open source production of encyclopedias: Editorial policies at the intersection of organizational and epistemological trust. Social Epistemology (under consideration).

L.M. Sanger. 2009. The fate of expertise after Wikipedia. Episteme, 6(1): 52-73.