Researchers have estimated that more than 2500 systematic reviews are indexed in MEDLINE annually,2
but not all completed reviews are published and some may be prone to selective reporting of outcomes. An examination of 47 Cochrane reviews found that almost all (n = 43) contained a major change (such as the addition or deletion of outcomes) between the protocol and the full publication.3
Whether or to what extent the changes reflected bias is not clear. A survey of 348 corresponding authors of systematic reviews revealed that authors had, on average, the same number of published and unpublished reviews (median = 2), implying a nonpublication rate of 12.4%.4
Excessive duplication of systematic reviews is common and will likely continue in the absence of a registry. In 2006, for example, a case study of the quality of reporting of systematic reviews identified 10 separate reviews, published between 2003 and 2005, on the role of acetylcysteine in the prevention of contrast-associated nephropathy,5
and at least 5 more reviews on this topic have been published since. The reviews varied in their recommendations and quality of reporting. Nevertheless, we question whether so many systematic reviews on a single topic are necessary. Such duplication of effort represents a tremendous cost in human and financial resources that could instead be directed to other research or to the delivery of health care.
Reviews that are unpublished, like any unpublished research, contain potentially valuable information. Extrapolating from a nonpublication rate of 12.4% for systematic reviews,4
one estimate suggests that the contributions to clinical research of about a third of a million participants annually might go unused in a systematic review.4
Many agencies and organizations are considering how to prioritize the completion of systematic reviews and are trying to balance the needs of patients, clinicians and other decision-makers with the resources available.6
Given that many organizations need reviews on the same topics (such as assessments of medications and technologies), avoiding unnecessary duplication would free these agencies' resources to tackle other research topics. A registry of reviews would allow decision-makers and researchers to determine more easily which reviews are being developed by other groups. We anticipate that this information could facilitate collaboration by bringing together research groups with common interests, perhaps making the development of reviews more efficient by increasing the number of team members involved.
Enhanced collaboration could also allow reviews to be updated more readily because the task of updating could be shared across groups of researchers. Producers and funders of systematic reviews recognize the challenge of updating reviews as well as the importance of harmonizing various aspects of the updating process.7
Rapid reviews have become more popular recently because of the need for immediate decision-making in political and other arenas. A registry of existing reviews would likely be a helpful starting point for such urgent decision-making. Finally, the registration of protocols for systematic reviews would help reviewers and editors detect bias in the reporting of outcomes.