More than 5500 systematic reviews, at various stages of completion, are currently available in the Cochrane Library (www.cochrane.org), and 9000 systematic reviews have been critically appraised for validity in the Database of Abstracts of Reviews of Effects, or DARE (www.crd.york.ac.uk). Funding agencies such as the United Kingdom Medical Research Council and the Canadian Institutes of Health Research require systematic reviews as part of the rationale to fund randomized trials. Graduate students are often encouraged to complete a systematic review as part of their thesis, and clinical trainees often perform such reviews during research-related elective courses.1
No consistent strategy exists for registering protocols or results. Slightly less than half of reports from systematic reviews indicate the existence of a protocol for their development.2 We believe it is time to consider creating a registry of protocols for systematic reviews and of completed reviews.
Researchers have estimated that more than 2500 systematic reviews are indexed in MEDLINE annually,2 but not all completed reviews are published, and some may be prone to selective reporting of outcomes. An examination of 47 Cochrane reviews found that almost all (n = 43) contained a major change (such as the addition or deletion of outcomes) between the protocol and the full publication.3 Whether or to what extent the changes reflected bias is not clear. A survey of 348 corresponding authors of systematic reviews revealed that authors had a median of 2 published and 2 unpublished reviews, implying a nonpublication rate of 12.4%.4
Excessive duplication of systematic reviews is common and will likely continue in the absence of a registry. In 2006, for example, a case study of the quality of reporting of systematic reviews identified 10 separate reviews, published between 2003 and 2005, on the role of acetylcysteine in the prevention of contrast-associated nephropathy,5 and at least 5 more reviews on this topic have been published subsequently. The reviews varied in their recommendations and quality of reporting. Nevertheless, we question whether this many systematic reviews on a single topic are necessary. Such duplication of effort represents tremendous cost in human and financial resources that could be used for other research or in the delivery of health care.
Reviews that are unpublished, like any unpublished research, contain potentially valuable information. Extrapolating from a nonpublication rate for systematic reviews of 12.4%,4 one estimate suggests that the contributions of about a third of a million clinical research participants might go unused in systematic reviews each year.4
Many agencies and organizations are considering how to prioritize the completion of systematic reviews and are trying to balance the needs of patients, clinicians and other decision-makers with the resources available.6 Given that many organizations need reviews on the same topics (such as assessment of medications and technologies), avoiding unnecessary duplication would free resources from these agencies to tackle other topics of research. A registry of reviews would allow decision-makers and researchers to determine more easily which reviews are in development by other groups. We anticipate that this information could facilitate collaboration by bringing together research groups with common interests and perhaps making the process of developing reviews more efficient by increasing the number of team members involved.
Enhanced collaboration could also allow reviews to be updated more readily because the task of updating could be shared across groups of researchers. Producers and funders of systematic reviews recognize the challenge of updating reviews as well as the importance of harmonizing various aspects of the updating process.7 Rapid reviews have become more popular recently because of a need for immediate decision-making in political and other arenas. A registry of existing reviews is likely to be a helpful starting point for such urgent decision-making. Finally, the registration of protocols for systematic reviews would help reviewers and editors to detect bias in the reporting of outcomes.
A registry of systematic reviews differs in purpose and importance from a registry of clinical trials. The average randomized trial is small, with fewer than 100 participants,8 whereas systematic reviews are comparatively large. Concealing results or selectively reporting outcomes of a clinical trial can directly influence the care of patients. Withholding knowledge of the existence and results of a systematic review, by contrast, would be more likely to influence guidelines for clinical practice or decisions in health-related policy.
Some registries of systematic reviews exist, but to date they are limited in scope by the mandates of their organizations. The Cochrane Collaboration requires that authors register their complete protocol in the Cochrane Database of Systematic Reviews, which allows others to review it and comment. Similarly, the Campbell Collaboration (www.campbellcollaboration.org) has a registry of protocols for reviews. The National Public Health Service for Wales (www.wales.nhs.uk) launched a registry of systematic reviews in an effort to reduce duplication. Registration required information on the research question, the search methods and criteria used for selection, the timeline of the study and the authors. Its pilot project was completed in May 2008, and data are not yet available on the impact of the project.
The goals of a registry of systematic reviews would be to avoid unnecessary duplication, allow better clarity around the conduct and analysis of reviews, avoid publication bias and bias from the selective reporting of outcomes, promote collaboration and assist with prioritization. We believe the registry should initially focus on reviews examining interventions in health care, because such reviews are the most frequently reported and their methods are the most advanced. Eventually, the registry should be expanded to include all systematic reviews. We believe this registry should be part of existing strategies for clinical trial registries to exploit existing technology and resources. A coexisting registry of trials and reviews might also promote collaboration by trialists and reviewers interested in the same topics.
A minimum data set should be collected on each review (Box 1). A registration number would be provided to the investigator upon completion of registration. Each year an automated alert would be sent to the investigator asking for an update, which would allow the site to be maintained. Citations for completed reviews would also be added as they become available from the authors. Providing complete reviews would facilitate open access for end-users. The investigators who initiate the study should be responsible for completing the registration. The contents of the database should be freely accessible and easily searched.
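The registration workflow described above (a minimum data set per review, a registration number issued on completion, and an annual automated prompt for updates) could be sketched as a simple record structure. The field names, the "SR"-prefixed numbering scheme, and the one-year update interval below are illustrative assumptions, not a published specification of any registry.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ReviewRegistration:
    """Hypothetical minimum data set for one systematic review.

    Field names are illustrative; an actual registry would define
    its own schema (cf. Box 1 in the article).
    """
    title: str
    research_question: str
    investigators: list
    search_methods: str
    selection_criteria: str
    registered_on: date
    registration_number: str = ""    # assigned on completion of registration
    citation: Optional[str] = None   # added once the completed review is published

def register(review: ReviewRegistration, serial: int) -> str:
    """Assign a registration number (assumed format: SR<year>-<serial>)."""
    review.registration_number = f"SR{review.registered_on.year}-{serial:05d}"
    return review.registration_number

def update_due(review: ReviewRegistration, today: date) -> bool:
    """True once a year has passed, triggering the automated update alert."""
    return today - review.registered_on >= timedelta(days=365)
```

For example, a review registered on 1 January 2009 with serial 1 would receive the number `SR2009-00001`, and the annual alert would fire for any date one year or more later.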
As with clinical trials, journal editors could encourage the registration of systematic reviews and require a registration number before considering a manuscript. Editors would find this a useful measure for guarding against selective reporting of outcomes in their ongoing efforts to promote transparency. Funding agencies could also encourage registration as part of their funding processes, which would further their agendas of open access and transparency.
Registration of systematic reviews will require additional work by reviewers to submit their protocols, will consume additional resources for expanding the current process of registering clinical trials and will not solve all of the problems that exist in the publication of research related to health care. If promoted, endorsed and adhered to, however, it will likely reduce publication bias and increase transparency.
The authors thank Sir Iain Chalmers, Dr. Ian Graham and Dr. Deborah Zarin for commenting on an earlier version of this paper.
Competing interests: Sharon Straus is an associate editor for ACP Journal Club and Evidence-Based Medicine, is on the advisory board of the BMJ Evidence Centre and is a member of the international advisory board of BMJ Group. None declared for David Moher.
Sharon Straus is the section editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.
Contributors: Both of the authors were involved in the development of the concepts in the manuscript and the drafting of the manuscript, and both approved the final version submitted for publication.
Previously published at www.cmaj.ca