The practice of medicine is informed by new research findings, and physicians incorporate evidence into practice after assessing its quality as reported in peer-reviewed publications. That is, quality of reporting is vital because the users of research evidence (i.e., physicians, patients, policy-makers) make decisions on the basis of their confidence in the accuracy of a given research paper. Our findings show that the quality of reporting of the NCI cooperative groups is rather poor: publications often omit important methodological features that are critical to decision making. However, the findings also suggest that poor reporting of RCTs does not reflect poor methodological conduct; the underlying trials were in fact of high methodological quality. Our study was limited to the cohort of RCTs for which both the protocol and publication were available. As a result, we excluded 117 RCTs (276 comparisons) from our analysis (). However, the methodological quality of reporting in these 117 RCTs was similar to that of the RCTs included in our analysis (data not shown). Also, details related to the generation of the randomization sequence were poorly reported in both protocols and publications, which may be an artifact of the strict definitions we applied in assessing methodological quality. Additionally, NCI cooperative groups that use a centralized mechanism with common, standardized randomization methods may consider it unnecessary to report these details in their protocols.
Our study results also show that the apparent association between reported methodological quality and treatment effect is spurious, as evidenced by the actual high methodological quality of conduct of these RCTs. Using published data only, we found a statistically significant association between treatment effect size and the reporting of the methodological quality domains of allocation concealment and blinding: poor reporting of allocation concealment and blinding inflates the ES. However, when methodological quality was assessed using data from both protocols and publications, it had no impact on treatment effect size. Similarly, for the cohort of RCTs conducted by the NCI cooperative groups, published methodological quality was associated with trial sample size, whereas the association between median sample size and quality domains disappeared once actual methodological quality was taken into account. That is, based on published data only, we found statistically significant variation in the distribution of median RCT sample size across methodological quality domains such as choice of comparator (placebo) and description of the blinding procedure (). However, assessment of methodological quality using data from protocols and publications showed no variation in sample size across RCTs (). These results further emphasize the importance of using both protocols and publications to assess methodological quality.
Our findings are in line with previous research on the topic.[5] However, with the exception of one study that focused on radiation oncology trials,[5] none of the previous studies used study protocols to assess actual methodological quality, i.e., compared the quality of reporting with the methods specified in the original research protocols. The current study is also the largest to date on the subject.
It has been previously shown that low methodological quality of RCTs inflates the treatment effect. For example, Colditz et al. found that double-blind RCTs had smaller ES than nonblinded trials.[19] Similarly, Schulz et al. reported that inadequate allocation concealment accounted for a substantial increase in ES.[20] Allocation concealment has shown the most consistent association with treatment effect sizes.[21] However, all of these studies assessed the methodological quality of reporting only; the relative impact of the actual methodological quality of conduct versus the methodological quality of reporting on ES has not been studied. Our results show that assessing the impact of methodological quality on ES based only on the quality of reporting can produce spurious results. These findings are important for meta-epidemiologic research and for the further development of quality assessment tools. Researchers may need to revise the current strategy of assessing methodological quality based on reporting alone. Where the study protocol is available, assessment of the methodological quality of conduct may augment the overall assessment of methodological quality.
Our results also showed that the median sample size of an RCT was associated with the reported methodological quality domains of blinding and choice of comparator (placebo versus active treatment). However, there was no association when the actual methodological quality of trials, as reported in the protocols, was assessed. We are aware of one study, by Singh et al., which concluded that RCT sample size was related to the methodological quality domains of blinding and use of the ITT principle of data analysis.[9] That study also reported that sample size was an independent predictor of a positive result in an RCT. In contrast, our findings suggest that the reported RCT sample size is not associated with outcomes (P = 0.11). The discrepancy between the findings of Singh et al. and ours may be attributed to the fact that their study reviewed only arthroplasty RCTs and dichotomized RCT sample size as large (≥ 100 patients) versus small (< 100 patients).[9] Additionally, Singh et al. based their analysis on published methodological quality only and did not have access to study protocols to determine the actual methodological quality of the RCTs.[9]
In summary, we report the first comprehensive, formal investigation of the methodological quality of RCTs conducted by the NCI cooperative groups. While the results show that RCTs conducted by the NCI cooperative groups are of high quality, the published reports have deficiencies in their description of the actual methods used. It is important to note that these RCTs were of high quality even before the publication of the Consolidated Standards of Reporting Trials (CONSORT) statement in 1996.[22] Our findings indicate that although investigators in NCI cooperative groups were attentive to the critical aspects of the design and conduct of RCTs, they were less aware of the need to report these features. Given that physicians, policy-makers, guideline panels, systematic reviewers, and other users of evidence rely on published reports only, this is an important finding that can be immediately rectified by the NCI "officially" adopting the CONSORT reporting guidelines. Our study also highlights the pitfall of current appraisal practices based on the methodological quality of reporting only, without considering the actual methodological quality of conduct evident from research protocols.
These findings underline the overall need for better adherence to the revised CONSORT statement by both authors and journal editors, which will allow transparent evaluation of RCTs.[23] The results also underscore the importance of publishing RCT protocols in the public domain. Publication of study protocols will not only help decision makers interpret the results of RCTs transparently and efficiently, but may also enhance prospects for collaboration (thereby reducing duplication of research effort), improve RCT recruitment, and reduce reporting bias.[24] These findings are also important for the further development of methodological quality appraisal tools, such as the Cochrane "risk of bias" tool, and their application to the assessment of actual methodological quality versus quality of reporting.