Our analysis suggests that, even with the best intentions, evidence-based medicine has to rely on less recent evidence. Even when results were corrected for the year of the last literature search, few systematic reviews containing meta-analyses in the CDSR included trials published in the preceding 5 years. Almost a third of these included no trials published in the last decade, and in another 10% the statistical significance of the result for the study's primary outcome would have been different had the data been limited to the last decade. Most of the systematic reviews did not address the implications of including less recent data.
Meta-analyses published in high-profile peer-reviewed journals tend to address newer interventions than the average CDSR review. Accordingly, almost all of them included some trials published in the last 5 years, and all of them included some trials published in the last decade. Nevertheless, even in these meta-analyses, the large majority also included 1 or more older trials, and very few discussed the implications of including older evidence.
Typically, the lack of recent evidence did not result from the CDSR systematic reviews being out of date; in fact, the majority of systematic reviews that we analyzed had been updated in the last 2 years. Nonetheless, few systematic reviews discussed the implications of publication date for the relevance of the evidence.
Evidence should not be undervalued simply because of its age. The amount of data, regardless of year of publication, is limited for most health care topics,11-13 and we do not have the luxury of discarding trials simply because of their calendar year. For topics in which well-designed old clinical trials are still relevant and conclusive, it is imprudent, and even unethical, to conduct new trials.
Occasionally, earlier published results may differ from those reported in later publications.14-16 This may reflect bias,17-19 time-dependent efficacy (e.g., when the treatment benefit decreases with longer follow-up),20 or chance. For example, in the case of vitamin E supplementation for prevention of morbidity and mortality in preterm infants, the review authors suggest caution in interpreting and applying current evidence. The available data span over a decade (1991-2002), a period in which many advances were made in the field of preterm care.23
However, one cannot generalize: less recent trials are not necessarily of lower quality24,25 or less externally valid4,5 than newer ones.2
Each topic requires careful case-by-case scrutiny of whether the available evidence is relevant to current practice. The availability of evidence is sometimes further restricted by the lack of standardized outcomes across trials. Selective reporting of “positive” outcome results is an added threat.27-29
Some limitations should be discussed. First, although we used a standardized approach to select the year of publication, a trial may be in progress for many years before any results are published. Most trials do not specify when they started and completed enrolment and follow-up. Efficacy trials may take 3 to 10 or more years from the start of enrolment to publication.17 Therefore, the proportion of recently conducted trials is likely even smaller than what we report on the basis of publication year.
Second, we used the CDSR for our primary analyses because it is widely considered the most all-encompassing and up-to-date source for current evidence on health care interventions. However, even the CDSR represents work in progress, and it does not capture all interventions.30
Furthermore, some review authors may choose to exclude, a priori, less recent studies, especially in fast-moving areas of research, by restricting search years or requiring the reporting of methodological quality characteristics.
Our evaluation of systematic reviews published in medical journals was unavoidably more restricted, since some information (such as the primary outcome) is neither standardized nor readily available in the same detail as in the CDSR systematic reviews.
We should also caution that decision-making based on nominal statistical significance is precarious.31,32
A change in statistical significance does not mean that the estimated effect size is altered beyond chance. Nor can it be attributed with certainty to the less recent studies, since many underlying factors (including chance alone) contribute to the uncertainty of the effect estimate. Given that most meta-analyses had very limited data overall, there was large uncertainty in the effect sizes estimated from recent and from less recent published data. Direct comparisons of recent against less recent data would be underpowered to show even major differences in effect sizes in these meta-analyses.
However, some empirical evidence suggests that, in some fields, smaller treatment effects may be encountered in more recent trials than in earlier research.1,14-16,33
In the present evaluation, among the meta-analyses in which the formal statistical significance of the summary effect changed with the exclusion of less recent data, the odds ratio also changed by a median of 23%. This is a considerable change, given that most medical interventions have modest effects.
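To illustrate why such direct comparisons are underpowered, consider the standard test for a difference between two independent subgroup estimates on the log odds ratio scale; the subgroup labels and symbols below are generic and illustrative, not quantities computed in our analysis:

\[
z = \frac{\ln(\mathrm{OR}_{\text{recent}}) - \ln(\mathrm{OR}_{\text{older}})}{\sqrt{SE_{\text{recent}}^{2} + SE_{\text{older}}^{2}}}
\]

When each subgroup contains only a handful of trials, both standard errors are large, so even a relative difference between the subgroup odds ratios on the order of the 23% noted above would rarely yield a statistically significant z value.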
Acknowledging these caveats, our survey suggests that even though the CDSR reviews are frequently updated, evidence from very recently published studies for most health care interventions is scant. Although less recent studies should not be discarded, clinicians should interpret medical evidence with attention to the applicability and relevance of these studies to current clinical practice. If evidence on a specific topic is considered to be outdated or missing, and the review question remains salient, the scientific community should be sensitized toward conducting relevant targeted studies.