Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their specialty,1 2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some medical journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers’ ability to assess the strengths and weaknesses of those reviews.
Several early studies evaluated the quality of review reports. In 1987 Mulrow examined 50 review articles published in four leading medical journals in 1985 and 1986 and found that none met all eight explicit scientific criteria, such as a quality assessment of included studies.5 In 1987 Sacks and colleagues evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in six domains.6 Reporting was generally poor; between one and 14 characteristics were adequately reported (mean 7.7, standard deviation 2.7). A 1996 update of this study found little improvement.7
In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a reporting guideline called the QUOROM statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomised controlled trials.8 In this article, we summarise a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (see box).
Conceptual issues in the evolution from QUOROM to PRISMA
Completing a systematic review is an iterative process
The conduct of a systematic review depends heavily on the scope and quality of included studies: thus systematic reviewers may need to modify their original review protocol during its conduct. Any systematic review reporting guideline should recommend that such changes can be reported and explained without suggesting that they are inappropriate. The PRISMA statement (items 5, 11, 16, and 23) acknowledges this iterative process. Aside from Cochrane reviews, all of which should have a protocol, only about 10% of systematic reviewers report working from a protocol.9 Without a publicly accessible protocol, it is difficult to distinguish appropriate from inappropriate modifications.
Conduct and reporting of research are distinct concepts
This distinction is, however, less straightforward for systematic reviews than for individual studies, because the reporting and conduct of systematic reviews are, by nature, closely intertwined. For example, the failure of a systematic review to report the assessment of the risk of bias in included studies may be seen as a marker of poor conduct, given the importance of this activity in the systematic review process.10
Study-level versus outcome-level assessment of risk of bias
For studies included in a systematic review, a thorough assessment of the risk of bias requires both a study-level assessment (such as adequacy of allocation concealment) and, for some features, a newer approach called outcome-level assessment. An outcome-level assessment involves evaluating the reliability and validity of the data for each important outcome by determining the methods used to assess them in each individual study.11 The quality of evidence may differ across outcomes, even within a study, such as between a primary efficacy outcome, which is likely to be carefully and systematically measured, and the assessment of serious harms,12 which may rely on spontaneous reports by investigators. This information should be reported to allow an explicit assessment of the extent to which an estimate of effect is correct.11
Importance of reporting biases
Different types of reporting biases may hamper the conduct and interpretation of systematic reviews. Selective reporting of complete studies (such as publication bias),13 as well as the more recently demonstrated “outcome reporting bias” within individual studies,14 15 should be considered by authors when conducting a systematic review and reporting its results. Although the implications of these biases for the conduct and reporting of systematic reviews themselves are unclear, some research has shown that selective outcome reporting may also occur in the context of systematic reviews.16