We have little or no evidence that peer review 'works,' but we have lots of evidence of its downside.
Firstly, it is very expensive in terms of money and academic time. At the British Medical Journal we calculated that the direct cost of reviewing an article was, on average, something like £100, and the cost per published article was much higher. These costs did not include the time of the reviewing academics, who were not paid by the journal. The Research Information Network has calculated that the global cost of peer review is £1.9 billion [10]. The cost in time is also enormous, and many scientists argue that time spent peer reviewing would be better spent doing science.
The cost in time and money is much increased by studies working their way down the food chain of journals. A study may be submitted to Nature and rejected, then sent to the New England Journal of Medicine and rejected, and so on through the Lancet, British Medical Journal, and several specialist journals before ending up in a local journal. Often the same reviewers will be consulted repeatedly. And we know that if authors persist long enough they can get anything published.
This expensive and time-consuming process might be acceptable if it sorted the information effectively, with the most important studies appearing in the most important journals. Not only does this not happen (see below), but this ineffective sorting of information introduces an important bias - because the 'sexier' articles end up in the 'top' journals. The many people who read these journals in the belief that they are reading what is most important are actually being presented with a distorted view of science.
Secondly, peer review is slow. The process regularly takes months and sometimes years. Publication may then take many more months. A friend of mine, a fellow of the Royal Society, has written a paper that I think very important for global health. As I write, it is still unpublished after two years of being reviewed by several 'top' journals, and none of the reviewers has identified a major flaw in the study.
Thirdly, peer review is largely a lottery. Multiple studies have shown that if several reviewers are asked to assess a paper, their agreement on whether it should be published is little higher than would be expected by chance [11]. A study in Brain evaluated reviews sent to two neuroscience journals and to two neuroscience meetings [12]. The journals each used two reviewers, while one of the meetings used 16 reviewers and the other 14. With one of the journals the agreement among the reviewers was no better than chance, while with the other it was slightly higher. For the meetings, 80 to 90% of the variance in the decision to accept was accounted for by differences in the reviewers' opinions and only 10 to 20% by the content of the abstract submitted.
A fourth problem with peer review is that it does not detect errors. At the British Medical Journal we took a 600 word study that we were about to publish and inserted eight errors [13]. We then sent the paper to about 300 reviewers. The median number of errors spotted was two, and 20% of the reviewers did not spot any. We did further studies in which we deliberately inserted errors, some very major, and the results were similar.
The fifth problem with pre-publication peer review is bias. There have been many studies of bias - with conflicting results - but the most famous was published in Behavioural and Brain Sciences [14]. The authors took 12 studies from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions, but changed the authors' names and institutions, inventing institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realise that they had already published the paper, and eight of the remaining nine were rejected - not because of lack of originality but because of poor quality. The authors concluded that this was evidence of bias against authors from less prestigious institutions. Most authors from less prestigious institutions, particularly those in the developing world, believe that peer review is biased against them.
Perhaps one of the most important problems with peer review is bias against the truly original. Peer review might be described as a process where the 'establishment' decides what is important. Unsurprisingly, the establishment is poor at recognizing new ideas that overturn the old. It is the same in the arts, where Beethoven's late string quartets were declared to be nothing but noise and Van Gogh managed to sell only one painting in his lifetime. David Horrobin, a strong critic of peer review, collected examples of peer review turning down hugely important work, including Hans Krebs's description of the citric acid cycle, which won him the Nobel prize; Solomon Berson's discovery of radioimmunoassay, which led to a Nobel prize; and Bruce Glick's identification of B lymphocytes [15].
Finally, peer review can be all too easily abused. Reviewers can steal ideas and present them as their own, or produce an unjustly harsh review to block or at least slow down the publication of a competitor's ideas. Both have happened. Drummond Rennie tells the story of a paper that, as deputy editor of the New England Journal of Medicine, he sent for review to Vijay Soman [16]. Having produced a critical review of the paper, Soman copied some of its paragraphs into a manuscript of his own and submitted it to another journal, the American Journal of Medicine. That journal, by coincidence, sent the manuscript for review to the boss of the author of the plagiarised paper. The author realised that she had been plagiarised and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and he left the country.