The 44 meta-analyses in 28 systematic reviews included in this study covered a wide range of medical topics. The categories of patients included those with an increased risk of vascular occlusion, HIV infection, viral hepatitis C, gastro-oesophageal reflux disease, postoperative pain, heart failure, or dyspepsia, as well as cigarette smokers. The results of adjusted indirect comparisons were usually similar to those of direct comparisons. There were a few significant discrepancies between the direct and the indirect estimates, although the direction of discrepancy was unpredictable. These findings are similar to (but more convincing than) those of our previous study of antibiotic prophylaxis in colorectal surgery.6
Discrepancies between the direct and the adjusted indirect estimate may be due to random errors. Partly because of the wide confidence interval provided by the adjusted indirect comparison, significant discrepancies between the direct and the adjusted indirect estimate were infrequent (3/44). Results that were significant when we used the direct comparison often became non-significant in the adjusted indirect comparison (table).
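The adjusted indirect estimate and the test for discrepancy between the direct and the indirect estimate can be sketched in a few lines. This is a minimal illustration, assuming the usual approach of subtracting the two trial estimates relative to a common comparator on the log scale and testing the difference between the direct and the indirect estimate with a z statistic; the function names are ours:

```python
import math

def adjusted_indirect(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect estimate of A v B from trials of A v C and B v C.
    Estimates are on the log scale for ratio measures (e.g. log relative risk)."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add under independence
    return d_ab, se_ab

def discrepancy(d_direct, se_direct, d_indirect, se_indirect):
    """z test for the discrepancy between the direct and the indirect estimate."""
    z = (d_direct - d_indirect) / math.sqrt(se_direct ** 2 + se_indirect ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two sided P value from the normal distribution
    return z, p
```

Because the indirect standard error sums two variances, its confidence interval is wide, which is why few discrepancies reach significance.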
The internal validity of trials involved in the adjusted indirect comparison should be examined because biases in trials will inevitably affect the validity of the adjusted indirect comparison. In addition, for the adjusted indirect comparison to be valid, the key assumption is that the relative efficacy of an intervention is consistent in patients across different trials. That is, the estimated relative efficacy should be generalisable. Generalisability (external validity) of trial results is often questionable because of restricted inclusion criteria, exclusion of patients, and differences in the settings where trials were carried out.9
Of the 44 comparisons, three showed a significant discrepancy (P<0.05) between the direct and the adjusted indirect estimate. In two cases the discrepancies seemed to have no clinical importance because both the direct and the adjusted indirect estimates were in the same direction.w6 w17
However, the discrepancy between the direct and the adjusted indirect estimate was clinically important in another case, which compared paracetamol plus codeine versus paracetamol alone in patients with pain after surgery.w28
A close examination of this example showed that the discrepancy could be explained by different doses of paracetamol and codeine used in trials for the indirect comparison (box).
Importance of similarity between trials in adjusted indirect comparison
A meta-analysis by Zhang and Li Wan Po compared paracetamol plus codeine v paracetamol alone in postsurgical pain.w28 Based on the results of 13 trials, the direct estimate indicated a significant difference in treatment effect (mean difference 6.97, 95% confidence interval 3.56 to 10.37). The adjusted indirect comparison that used a total of 43 placebo controlled trials suggested there was no difference between the interventions (−1.16, −6.95 to 4.64). The discrepancy between the direct and the adjusted indirect estimate was significant (P=0.02). However, most of the trials (n=10) in the direct comparison used 600-650 mg paracetamol and 60 mg codeine daily, while many placebo controlled trials (n=29) used 300 mg paracetamol and 30 mg codeine daily. When the analysis included only trials that used 600-650 mg paracetamol and 60 mg codeine, the adjusted indirect estimate (5.72, −5.37 to 16.81) was no longer significantly different from the direct estimate (7.28, 3.69 to 10.87). Thus, the significant discrepancy between the direct and the indirect estimate based on all trials could be explained by the fact that many placebo controlled trials used lower doses of paracetamol (300 mg) and codeine (30 mg). This example shows that the similarity of trials involved in adjusted indirect comparisons should be carefully assessed.
When is the adjusted indirect comparison useful?
When there is no direct evidence, the adjusted indirect method may be useful to estimate the relative efficacy of competing interventions. Empirical evidence presented here indicates that in most cases results of adjusted indirect comparisons are not significantly different from those of direct comparisons.
Direct evidence is often available but is insufficient. In such cases, the adjusted indirect comparison may provide supplementary information.10
Sixteen of the 44 direct comparisons in this paper were based on one randomised trial while the adjusted indirect comparisons were based on a median of 19 trials (range 2-86). Such a large amount of data available for adjusted indirect comparisons could usefully strengthen conclusions based on direct comparisons, especially when there are concerns about the methodological quality of a single randomised trial.
Results of the direct and the adjusted indirect comparison could be combined quantitatively to increase statistical power or precision when there is no important discrepancy between the two estimates. A non-significant effect estimated by the direct comparison may become significant when the direct and the adjusted indirect estimate are combined, as happened in two of the 44 comparisons (fig). In each case, the change resulted from the increased amount of information.
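A minimal sketch of such a combination, assuming the usual inverse variance (fixed effect) pooling of the two estimates on the log scale:

```python
import math

def combine(d_direct, se_direct, d_indirect, se_indirect):
    """Inverse variance (fixed effect) combination of a direct and an
    adjusted indirect estimate, both on the log scale for ratio measures."""
    w_d = 1 / se_direct ** 2
    w_i = 1 / se_indirect ** 2
    d = (w_d * d_direct + w_i * d_indirect) / (w_d + w_i)
    se = math.sqrt(1 / (w_d + w_i))  # always smaller than either input SE
    return d, se
```

The combined 95% confidence interval is d ± 1.96 × se; because the indirect interval is typically wide, it narrows the combined interval only modestly and shifts the point estimate little.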
Combination of direct and adjusted indirect estimates in two meta-analyses
It is also possible that the significant relative effect estimated by the direct comparison becomes non-significant after it is combined with the adjusted indirect estimate—for example, in a systematic review of H2 receptor antagonists versus sucralfate for non-ulcer dyspepsia.w20 The direct comparison based on one randomised trial found that H2 receptor antagonist was less effective than sucralfate (relative risk 2.74, 95% confidence interval 1.25 to 6.02), while the adjusted indirect comparison (based on 10 trials) indicated that H2 receptor antagonist was as effective as sucralfate (0.99, 0.47 to 2.08). The discrepancy between the estimates was marginally significant (P=0.07). The combination of the direct and adjusted indirect estimate provided a non-significant relative risk of 1.56 (0.93 to 2.75).
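Back-calculating standard errors from the published confidence intervals reproduces this combination closely; the recomputed point estimate differs slightly from the reported 1.56 because the published intervals are rounded:

```python
import math

def se_from_ci(lo, hi):
    """Standard error of a log relative risk recovered from its 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Direct: RR 2.74 (1.25 to 6.02); adjusted indirect: RR 0.99 (0.47 to 2.08)
d_dir, se_dir = math.log(2.74), se_from_ci(1.25, 6.02)
d_ind, se_ind = math.log(0.99), se_from_ci(0.47, 2.08)

# Inverse variance (fixed effect) combination on the log scale
w_dir, w_ind = 1 / se_dir ** 2, 1 / se_ind ** 2
d = (w_dir * d_dir + w_ind * d_ind) / (w_dir + w_ind)
se = math.sqrt(1 / (w_dir + w_ind))

rr = math.exp(d)
lo, hi = math.exp(d - 1.96 * se), math.exp(d + 1.96 * se)
# rr ≈ 1.60, CI ≈ (0.93 to 2.75): close to the published 1.56 (0.93 to 2.75)
```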
It will be a matter of judgment whether and how to take account of indirect evidence. It is not desirable to base such decisions on whether or not the difference between the two estimates is significant, although this is the easiest approach. A more constructive approach would be to base the decision on the similarity of the participants in the different trials and the comparability of the interventions.
Some authors have used a naive (unadjusted) indirect comparison, in which the results of individual arms from different trials are compared as if they came from a single trial (Glenny AM, et al, International Society of Technology Assessment in Health Care). Simulation studies and empirical evidence (not shown in this paper) indicate that the naive indirect comparison is liable to bias and produces overprecise estimates (Altman DG, et al, Third Symposium on Systematic Reviews: Beyond the Basics). The naive indirect comparison should be avoided whenever possible.
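A toy example (with made-up risks) shows why the naive comparison misleads when baseline risk differs between the two sets of trials, whereas the adjusted comparison preserves the within trial randomisation:

```python
import math

# Hypothetical numbers: A and B are equally effective (both halve the risk
# relative to common comparator C), but the B v C trials enrolled higher risk patients.
risk_c_in_a_trials, risk_a = 0.10, 0.05   # A v C trials: 10% baseline risk
risk_c_in_b_trials, risk_b = 0.40, 0.20   # B v C trials: 40% baseline risk

# Naive comparison: pool the A and B arms as if they came from one trial.
naive_rr = risk_a / risk_b  # 0.25: A wrongly appears far better than B

# Adjusted comparison: contrast the within trial relative risks instead.
adjusted_rr = math.exp(math.log(risk_a / risk_c_in_a_trials)
                       - math.log(risk_b / risk_c_in_b_trials))  # 1.0: correct
```

The naive estimate reflects the difference in baseline risk between the trial sets, not any difference between A and B; the adjusted comparison cancels that difference by working within trials.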
Direct estimates from randomised trials may not always be reliable
As has been observed, “randomisation is not sufficient for comparability.”11,12
The baseline comparability of patient groups may be compromised due to lack of concealment of allocation.13
Patients may be excluded for various reasons after randomisation, and such exclusions may not be balanced between groups. Lack of blinding of outcome measurement may lead to overestimation of treatment effects.13
Furthermore, empirical evidence has shown that published randomised trials may be a biased sample of all trials conducted, because of publication and related biases.14
Thus, direct evidence from randomised trials is generally regarded as the best, but such evidence may sometimes be flawed. Observed discrepancies between the direct and the adjusted indirect comparison may be partly due to deficiencies in the trials making the direct comparison or those contributing to the adjusted indirect comparison, or both.
When there is no direct randomised evidence, the adjusted indirect method may provide useful information about relative efficacy of competing interventions. When direct randomised evidence is available but not sufficient, the direct and the adjusted indirect estimate could be combined to obtain a more precise estimate. The internal validity and similarity of all the trials involved should always be carefully examined to investigate potential causes of discrepancy between the direct and the adjusted indirect estimate. A discrepancy may be due to differences in patients, interventions, and other trial characteristics including the possibility of methodological flaws in some trials.