Despite the shortcomings of traditional scientific peer review, the most counterproductive of which involve reviewer bias, the process does clearly provide some benefits, which include:
- Mechanism for rationing the limited space within paper journals
- System for improving manuscripts prior to publication
- Framework for identifying the scientific and written quality of papers
Fortunately, new internet technologies make it possible to re-imagine, expand on, and enhance peer review. First, limited space has become a non-problem on the internet. The cost of storing even large documents, with detailed images and even video, continues to fall, almost to the point of being free. As a result, there is no reason to reject a reasonably credible manuscript simply to conserve journal space. Moreover, once a paper is published, internet search tools make locating papers on a topic of interest relatively straightforward and expedient.
Second, given the capacity of the internet to publish and index a near limitless amount of content, reviewers of new articles have the opportunity to redirect their energies away from rationing journal space and toward simply improving the manuscripts they review. With rejection no longer in the equation, the problem of reviewer bias begins to dissipate. The job of the reviewer evolves to its intended role: to recommend changes that should (theoretically at least) improve the overall quality of the manuscript. In this new model, the reviewer works alongside the author, much like a book editor, to produce the best possible scientific paper. This symbiotic effort strives to make the author's novel concepts, information, and methodologies more intelligible to readers, rather than to deconstruct and reject whatever does not meet oftentimes arbitrary (yet stringent) criteria for publication.
However, can such a model for peer review maintain scholarly quality?
Although not peer reviewed in the traditional sense of most medical journals, Wikipedia is an example of vast knowledge being documented and shared by harnessing the collective power of individuals with common interests, with only a loose editorial authority needed to discipline the process.[9] Another example of a new and more open procedure for peer-reviewed science, one widely embraced in theoretical physics and mathematical circles, is the arXiv repository. Papers are uploaded onto the www.ArXiv.org website without any prior formal peer review, but after manuscripts have been electronically published, interested researchers are invited to critique, comment on, and debate them. Within arXiv, peer review is an open, post-publication “wrestling” process that more closely mirrors the philosophical workings of science.
Third and finally, any new method of peer review must provide a framework for assessing the quality of published papers. It is in this realm that the internet now offers an amazing array of tools, the most powerful of which stem from the collective intelligence of large numbers of individuals, an approach termed “crowd sourcing”. In his book “The Wisdom of Crowds”, James Surowiecki describes how large groups of individuals, when working together on a problem, provide sufficient statistical power to arrive at answers that defy individual efforts.[10] For example, individual estimates of the number of jelly beans (or coins) within a large jug can vary widely, but when a large number of such guesses are averaged, the final answer tends to be very close to the true value. On the internet it is now commonplace for social networks like Yelp.com and YouTube.com to survey consumers about a range of topics (e.g. the quality of restaurants, books, videos, etc.). By computing an average value for such responses, myriad websites now guide users through a wide assortment of purchase decisions. Although there are clearly limitations to such recommendations (and even well-documented abuses), especially when the number of responses limits statistical power, consumers within modern society have come to rely routinely on the authority of such “crowd sourcing”.[11] In many ways, the idea of a collective intelligence harkens back to Abraham Lincoln's discerning observation that “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”
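The jelly-bean averaging effect described above is easy to demonstrate. The short simulation below is purely illustrative (the true count, the noise range, and the crowd size are all assumptions, not figures from the text): each simulated guess is individually far from the truth, yet the crowd's average lands much closer.

```python
import random

random.seed(42)

TRUE_COUNT = 1000  # hypothetical number of jelly beans in the jug
CROWD_SIZE = 10000

def individual_guess():
    # Each person's estimate is noisy: unbiased on average,
    # but any single guess can be off by as much as 50%.
    return TRUE_COUNT * random.uniform(0.5, 1.5)

guesses = [individual_guess() for _ in range(CROWD_SIZE)]
crowd_estimate = sum(guesses) / len(guesses)

# Compare the crowd's error with a typical individual's error.
typical_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

print(f"crowd estimate:           {crowd_estimate:.0f}")
print(f"typical individual error: {typical_individual_error:.0f}")
print(f"crowd error:              {crowd_error:.0f}")
```

Because the individual errors are independent, they largely cancel when averaged: the standard error of the mean shrinks in proportion to the square root of the crowd size, which is the statistical power Surowiecki's examples rely on.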
Online resources such as Wikipedia also represent the collected intelligence of the many. Although no over-arching authority is charged with ensuring the veracity of Wikipedia entries, a reasonably careful analysis published in Nature in 2005 concluded that the information in Wikipedia was, on average, as reliable as, if not more reliable than, the Encyclopaedia Britannica.[2] These examples show that “crowd sourcing” consensus can be self-correcting and can help ensure a high measure of information quality. If so, it would seem not too big an extrapolation to suggest that such principles could be useful in assessing the quality of scholarly papers, rather than merely relying on the judgments of two “expert” and not-so-dispassionate reviewers.