Our study found no association between the funding source of “published” RCTs of drug therapy for RA and their outcome. A trend towards publication bias was found among industry-funded RCTs. Industry-funded RCTs reported significantly better performance of certain methodological quality measures.
Our finding of industry as the funding source for the majority of both published and registered RCTs is consistent with the trend towards an increasing proportion of biomedical research being industry funded (1). The significant differences in RCT characteristics associated with funding source have important implications for the nature of the RCTs conducted, and hence for the evidence generated for the clinical care of RA patients. While industry-funded RCTs predominantly focused on assessing the efficacy and safety of newer therapeutic drugs, the majority of non-profit-funded RCTs evaluated established drugs and different strategies for using drugs in RA treatment. Evidently, industry-funded RCTs had greater financial resources, as they were more likely to be multicenter and multinational and to have higher subject enrollment. Despite this financial advantage, industry-funded RCTs had shorter study durations than non-profit-funded RCTs. These differences clearly highlight the importance of both industry and non-profit funding sources for RCTs to generate efficacy and safety evidence for newer as well as established drugs and for strategies for their use in clinical care.
Though the preponderance of data in the medical literature shows that industry funding leads to a higher chance of pro-industry results and conclusions (4), we did not find any association between funding source and study outcome among “published” RCTs of RA drug therapies. Adjustment for differences in RCT characteristics and reported methodological quality measures did not affect this finding. Assuming a relative frequency of explicitly stated industry and non-profit funding similar to our study (about 3:1) and similar percentages of positive outcomes (75.5% and 68.8%, respectively), 1850 RCTs would be needed to show a significant association between funding source and study outcome with 80% power. Hence, among “published” RA drug therapy RCTs, only a relatively small difference in study outcomes exists between those with industry and those with non-profit funding sources. One potential reason for the lack of association between funding source and study outcome could be publication bias. Indeed, we did find that industry-funded “registered” RCTs at CTG showed a significant trend towards non-publication. Since these “registered” RCTs had an investigator-declared “completed” status, non-publication of their results suggests unfavorable outcomes. We could not ascertain whether “published” RCTs more commonly presented outcomes that were favorable but different from the originally planned primary outcomes, thus inflating the frequency of positive “published” RCTs, as only a few “published” RCTs had actually registered at CTG. Further studies are needed to address the extent and implications of publication bias in RA RCTs.
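The sample-size figure quoted above can be checked with the standard pooled-variance normal-approximation formula for comparing two independent proportions under unequal allocation. The sketch below is our own illustration (the function name and rounding conventions are assumptions, not from the study); it yields a total in the low 1800s, close to the stated 1850, with the small difference attributable to the exact formula and rounding conventions used.

```python
from math import ceil, sqrt
from statistics import NormalDist

def total_rcts_needed(p1, p2, ratio=3.0, alpha=0.05, power=0.80):
    """Pooled-variance normal-approximation sample size for comparing two
    proportions, with unequal allocation n1 = ratio * n2 (industry : non-profit).
    Returns the total number of trials across both groups."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (ratio * p1 + p2) / (ratio + 1)     # pooled proportion
    q_bar = 1 - p_bar
    num = (z_a * sqrt(p_bar * q_bar * (1 + 1 / ratio))
           + z_b * sqrt(p1 * (1 - p1) / ratio + p2 * (1 - p2))) ** 2
    n2 = num / (p1 - p2) ** 2                   # smaller (non-profit) group
    return ceil(ratio * n2) + ceil(n2)

# Positive-outcome rates (75.5% vs 68.8%) and the ~3:1 funding split from the
# study; the total lands close to the ~1850 figure quoted in the text.
print(total_rcts_needed(0.755, 0.688))
```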
Nearly 75% of “published” RCTs had a positive outcome. This could stem partly from publication bias and partly from the difficulty of study outcome assignment due to the complex structure of study intervention arms. The majority of RCTs had > 2 intervention arms. The ED often showed positive results only when compared to placebo but not to the ACD, or only the combination of ED and ACD had positive results compared to ED or ACD alone. Most published RCT reports lacked a clear description of the a priori intent of the RCT (superiority vs non-inferiority for the different intervention arms). Thus, in the absence of such guidance, a positive RCT outcome was assigned when any ED intervention arm (alone or in combination with ACD) showed a statistically significant result favoring the primary outcome. Finally, conducting RCTs with such a high frequency of positive outcomes raises ethical issues. An RCT should only be conducted if there is substantial uncertainty (equipoise) about the relative value of one treatment versus another (17). RCTs in which the experimental intervention and control are thought to be non-equivalent based on the existing fund of knowledge may cause unnecessary harm to study subjects and waste precious resources.
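The outcome-assignment rule described above can be expressed as a simple decision function. This is a hypothetical illustration only; the function and field names are ours, not part of the study protocol.

```python
# Hypothetical sketch of the positive-outcome assignment rule described above.
# An RCT is scored "positive" if any arm containing the experimental drug (ED),
# alone or in combination with an active comparator drug (ACD), showed a
# statistically significant result favoring the primary outcome.
def assign_rct_outcome(arms):
    """arms: iterable of dicts with keys 'contains_ed' and 'significant_primary'."""
    if any(a["contains_ed"] and a["significant_primary"] for a in arms):
        return "positive"
    return "negative"

# Example: ED beat placebo on the primary outcome but did not beat the ACD;
# under the rule above, the trial is still scored positive.
arms = [
    {"contains_ed": True, "significant_primary": True},   # ED vs placebo
    {"contains_ed": True, "significant_primary": False},  # ED vs ACD
]
print(assign_rct_outcome(arms))  # prints "positive"
```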
A study of 240 RCTs in rheumatic diseases found no difference in any methodological quality measure between manufacturer-supported and non–manufacturer-supported RCTs (23). A more recent study of 64 systemic lupus erythematosus RCTs found a trend towards better study quality in pharmaceutical company-supported RCTs (18). However, our study found that industry funding was associated with better reporting of some key RCT methodological quality measures. There are several potential reasons for this. First, the greater financial resources available to investigators of industry-funded RCTs may allow performance of more expensive measures such as double-blinding and more rigorous tracking and follow-up of study subjects. Second, non-profit-funded RCTs studied strategies for using drug therapy for RA more often than industry-funded RCTs [10 (25%) vs 4 (6%)], and double-blinding was considered impractical by the investigators for most such RCTs due to the complexity of study protocol requirements. Indeed, only one industry-funded and one non-profit-funded strategy RCT were conducted in a double-blind fashion. Third, the mandates of regulatory organizations such as the Food and Drug Administration for methodologically rigorous RCTs to generate efficacy and safety data for a new drug may conceivably also account for the better quality of industry-funded RCTs (33). Fourth, better reporting of methodological aspects in the “published” RCTs may also reflect attempts to dispel the notions of bias that tend to be associated with industry funding. Finally, since we assessed RCT methodological quality from the published manuscript, we cannot be certain whether our findings represent incomplete reporting or inadequate performance of these measures. However, measures such as ITT analysis can be performed without additional financial burden and can be ascertained from the published report itself. Yet a lower proportion of non-profit-funded RCTs reported performing ITT analysis, suggesting that funding source may be associated with real systematic differences in the performance of RCT methodological quality measures.
The overall reporting of most RCT methodological quality measures, particularly random sequence generation and allocation concealment, was suboptimal. Poor reporting/performance of RCT methodological quality measures has been reported across multiple specialties, including rheumatology (19). Encouragingly, our study found improvement in several quality measures compared to 121 rheumatology RCTs published in 1997–1998 (23): randomization (35% vs 17.4%), allocation concealment (30.1% vs 19%), participant flow (77.7% vs 58.7%), and ITT analysis (64.1% vs 29.8%). However, only 38.8% of the trials in the referenced study were RA RCTs, and non-drug therapy RCTs were included. Hence, the above comparison may not represent true secular changes in the quality of reporting of RA RCTs.
The CONSORT statement was developed to promote standardized RCT reporting that would help readers assess a trial's validity and interpret its results appropriately. It was originally proposed in 1996, with subsequent revisions in 2001 and 2010 (24). The current CONSORT statement recommends a checklist of 25 items and a flow diagram (36). Adoption of the CONSORT guidelines by biomedical journals has been shown to improve reporting quality, particularly of randomization and double-blinding (37). However, the improvements have been inconsistent, with continued suboptimal reporting of measures such as allocation concealment (37). Nonetheless, authors of RCT manuscripts should be encouraged to adhere strictly to the CONSORT guidelines to improve RCT reporting quality.
Our study has some limitations. Nearly one-fifth of the “published” RCTs had no funding source disclosure. We considered these to have a non-profit funding source for most analyses; plausibly, some of them were industry funded. As a sensitivity analysis, we reassessed our study results under the extreme scenario of industry funding for all such RCTs. This did not alter our finding of a lack of association between funding source and RCT outcome. However, the differences in study quality measures were attenuated and remained significant only for the performance of ITT analysis, in favor of industry funding. An improvement in funding source reporting is expected, since such disclosure is mandatory for CTG registration and the 2010 CONSORT statement has added an item explicitly for funding source reporting (32). We assessed RCT methodological quality based upon the published report. Plausibly, authors may not have reported important quality measures despite their adequate performance, causing underestimation of study quality. In fact, discrepancies have been noted between the methodological aspects of “published” RCT reports and their study protocols or the RCT investigators' accounts of the actual study methods (42). However, the overwhelming majority of healthcare literature users rely upon an RCT's published report to assess its quality and validity, and do not have access to the study protocol or the RCT investigators. Hence, inadequate RCT reporting hinders assessment of a trial's quality and validity even if it was appropriately conducted.
We did not evaluate the conclusions or recommendations offered by RCT authors in the discussion section or abstract of the manuscript. Conclusions of RCTs funded by for-profit organizations are more likely to recommend the ED as the treatment of choice, unrelated to the observed effect size (6). Finally, there is the issue of how best to assess the quality of RCTs. Inadequate performance of the quality measures used in our study may bias estimates of treatment effect (13). However, the association with treatment effect size is not consistent across different specialties, varies for individual quality measures, and depends upon whether the study outcome is subjective or not (45). Moreover, different groups vary considerably in their methods of assessing RCT quality (47). Some groups assess quality measures individually, an approach recommended by the Cochrane Collaboration, while others use a composite quality scale (47).
In conclusion, industry funding of “published” RCTs of RA drug therapy was not associated with a higher likelihood of positive outcomes favoring the sponsored ED. A trend towards a higher non-publication rate among “registered” industry-funded RCTs suggests that publication bias may partially explain this observed lack of association. Availability of adequate funding for the conduct of RCTs from both industry and non-profit sources is essential to generate the evidence needed for optimal advancement of RA treatment. Improved reporting of methodological quality measures is needed to enable better assessment of RCT validity.