Our study provides a comprehensive assessment and comparison of the quality of reporting of adjusted analyses before and after the revision of the CONSORT Statement in 2001. In our review, we found that the characteristics of published reports of parallel group randomised trials indexed in PubMed in 2000 and 2006 were similar, though there was a significant improvement in primary outcome specification in 2006. Only a quarter of randomised trials reported any covariate-adjusted analysis. The prevalence of adjusted analyses in our broad cohorts is much lower than the 72% reported in a previous review restricted to four high-impact general medical journals in 1997 [1] and the 64% in a recent review conducted by Austin et al [14]. Another review examined 34 high-impact-factor scientific medical journals in 1998 and reported that 31% of articles had specified adjustment for confounding factors [15]. A further study found a similar percentage of adjusted analyses in clinical trials of traumatic brain injury [16]. To our knowledge, these three studies are the only previous such studies addressing this issue. By including journals from all specialties, we believe that the frequency of adjusted analyses in our cohorts is representative of the overall randomised trial literature.
We found that the analyses specified in the Methods section did not necessarily reflect how the results reported in the Results section were obtained. Often the method was either not clearly specified or the results were obtained from analyses other than those specified. Readers generally trust that results were derived from the analyses specified in the Methods section. Our findings show that greater clarity in reporting results is needed, especially in studies involving adjusted analyses.
Although many authors have discussed how adjusting for baseline covariates in the analysis of RCTs can improve the power of analyses of treatment effect and account for any imbalances in baseline covariates [4
], the debate on whether this practice should be carried out remains unresolved. Many recommend that the analysis should be undertaken only if the methods of analysis and choice of covariates are pre-specified in the protocol or statistical analysis plan [1
]. Unfortunately, the rationale for adjustment and choice of covariates were missing in most of the articles we reviewed, although there has been an improvement in the overall reporting of adjusted analysis in trial reports published in 2006 compared to 2000. This lack of pre-specification echoes the findings in the recent review carried out by Chan et al [20
]. They found that most trials that mentioned an adjusted analysis in either the protocol or the article had discrepancies between the two (18/28). Among 18 trials with published adjusted analyses, 12 included covariates that were not pre-specified in the protocol; for ten of these trials, the protocol did not mention any adjusted analysis at all.
In most articles that gave a reason for adjustment or for the choice of covariates, the rationale was not in accordance with the guidelines' recommendations [6
]. Few studies performed and reported the adjusted analysis adequately. For example, where procedures such as stratified randomisation or minimisation were used, an analysis without adjustment for the stratifying variables can over-estimate the standard error of the treatment effect and distort the P-value [21
]. Our findings indicate that trials that performed these procedures often did not adjust for stratification/minimisation factors. Furthermore, covariates assessed after randomisation require special attention because their relationship with the study outcome could be confounded by treatment; a different analytical approach is needed [6
]. However, we found that some trials included such covariates in their analyses, as has been documented by others [24].
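The point about stratified designs can be illustrated with a simulation. The sketch below uses entirely hypothetical numbers (not taken from any trial we reviewed) and assumes a continuous outcome with a strongly prognostic binary stratification factor: because stratified randomisation balances treatment within strata, the unadjusted estimate is unbiased, but its standard error is inflated relative to the analysis that adjusts for the stratifying variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial with stratified randomisation on a strongly
# prognostic binary factor S (illustrative numbers only).
n_per_stratum = 1000
S = np.repeat([0, 1], n_per_stratum)
# Within each stratum, allocate treatment 1:1 (stratified randomisation),
# so T is exactly independent of S.
T = np.concatenate([rng.permutation(np.repeat([0, 1], n_per_stratum // 2))
                    for _ in range(2)])
# Outcome: modest treatment effect (1.0), large stratum effect (5.0).
Y = 1.0 * T + 5.0 * S + rng.normal(0.0, 1.0, size=2 * n_per_stratum)

def ols_se(X, y):
    """Return OLS coefficient estimates and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones_like(Y)
# Unadjusted analysis: Y ~ T (ignores the stratification factor).
beta_unadj, se_unadj = ols_se(np.column_stack([ones, T]), Y)
# Adjusted analysis: Y ~ T + S (includes the stratification factor).
beta_adj, se_adj = ols_se(np.column_stack([ones, T, S]), Y)

print(f"SE of treatment effect, unadjusted: {se_unadj[1]:.3f}")
print(f"SE of treatment effect, adjusted:   {se_adj[1]:.3f}")
```

Both analyses recover a treatment effect near 1.0, but the unadjusted standard error is several times larger because the between-strata variability is left in the residual.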
Generally, the reporting of adjusted analyses was comparable between the two cohorts we reviewed, which represent trials published before and after the revision of the CONSORT Statement in 2001. Reporting of the main results, such as treatment group summary statistics, treatment effects and confidence intervals, as suggested by CONSORT, was often lacking or unclear in both the Results section and the abstract. Such deficiencies could be because much more attention has been given to other issues, such as the adequacy and transparency of sample size calculations, blinding and randomisation methods, which have already been addressed more often in other systematic reviews [26
]. Treatment effect estimates from unadjusted and adjusted analyses are not directly comparable, because the former gives population-averaged estimates of the treatment effect while the latter gives subject-specific estimates, so it is important that these results are reported clearly enough for the treatment effect to be interpreted correctly. This argument is most pertinent in analyses of RCTs with non-continuous outcomes, because the treatment effect estimate changes when covariates are included in the analysis [3].
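This non-collapsibility of the odds ratio can be shown with simple arithmetic rather than data. In the hypothetical logistic model below, a strongly prognostic binary covariate X is independent of treatment T (so it is not a confounder), yet the marginal (unadjusted) odds ratio is closer to 1 than the conditional (adjusted) odds ratio; all coefficient values are invented for illustration.

```python
import math

def expit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logistic model: logit P(Y=1 | T, X) = a + b*T + c*X,
# with X binary, prevalence 0.5, independent of T.
a, b, c = -1.0, 1.0, 2.0   # intercept, log-OR of T, log-OR of X

conditional_or = math.exp(b)  # adjusted OR, identical within each level of X

def marginal_risk(t):
    # Average the event probability over the distribution of X.
    return 0.5 * expit(a + b * t) + 0.5 * expit(a + b * t + c)

p1, p0 = marginal_risk(1), marginal_risk(0)
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))

print(f"conditional (adjusted) OR: {conditional_or:.3f}")
print(f"marginal (unadjusted) OR:  {marginal_or:.3f}")
```

The two odds ratios differ (roughly 2.72 versus 2.23 here) even though there is no confounding, which is why an adjusted odds ratio cannot be read as if it were the unadjusted one.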
There is little previous evidence about the use and reporting of adjusted analyses in RCTs [19]. However, two recent studies reported the impact of selective reporting of adjusted estimates in meta-analyses of observational studies [28
]. Both studies found that the pooled unadjusted effects differed according to whether studies contributed both adjusted and unadjusted estimates to the meta-analyses or only unadjusted effects. To what extent this lack of clarity in reporting adjusted analyses in RCTs represents reporting bias that may affect subsequent meta-analyses is unclear. We appreciate that unclear reporting of results does not necessarily reflect poor research conduct, but there is clear evidence that the quality of reporting is associated with bias in the estimation of treatment effects [12].
We identified slightly better reporting of key methodological items in CONSORT-endorsing as opposed to non-endorsing journals. However, because there was a time lag between article publication (December 2006) and the assessment of the journals' 'Instructions to Authors' (June 2008), these results should be viewed with some caution. A limitation of this study is that, apart from the trial characteristics for the 2006 cohort, data were extracted by a single reviewer. However, the reviewer revisited the data extraction a few months after the first extraction as a quality assurance procedure. We also used slightly different sampling techniques between the two years. The 2000 cohort included all reports of randomised trials published in December 2000 and indexed in PubMed by July 2002, to account for the lag in PubMed indexing. For pragmatic reasons, the 2006 cohort included those trials indexed in PubMed in December (as of March 2007). This allowed us to capture our sample of trials within one search, but we may have missed a small number of trials published in December 2006 but indexed in PubMed after March 2007.
In conclusion, there was no evidence of change in the reporting of adjusted analysis results five years after the revision of the CONSORT Statement. Furthermore, the overall quality of reporting of adjusted analyses and adherence to the CONSORT recommendations remain low. The rationale for covariate adjustment, the methods of analysis and the choice of covariates should be reported fully and transparently in trial reports, so that readers can assess whether the adjusted analysis was adequately carried out. Finally, reports should also include both unadjusted and adjusted results, state which analysis represents the primary analysis, and state whether the adjusted analysis was pre-specified in the protocol.