Conflicting reports exist in the medical literature regarding the association between industry funding and published research findings. In this study, we examine the association between industry funding and the statistical significance of results in recently published medical and surgical trials.
We examined a consecutive series of 332 randomized trials published between January 1999 and June 2001 in 8 leading surgical journals and 5 medical journals. Each eligible study was independently reviewed for methodological quality using a 21-point index with 5 domains: randomization, outcomes, eligibility criteria, interventions and statistical issues. Our primary analysis included studies that explicitly identified the primary outcome and reported it as statistically significant. For studies that did not explicitly identify a primary outcome, we defined a “positive” study as one with at least 1 statistically significant outcome measure. We used multivariable regression analysis to determine whether there was an association between reported industry funding and trial results, while controlling for study quality and sample size.
Among the 332 randomized trials, there were 158 drug trials, 87 surgical trials and 87 trials of other therapies. In 122 (37%) of the trials, authors declared industry funding. An unadjusted analysis of this sample of trials revealed that industry funding was associated with a statistically significant result in favour of the new industry product (odds ratio [OR] 1.9, 95% confidence interval [CI] 1.3–3.5). The association remained significant after adjustment for study quality and sample size (adjusted OR 1.8, 95% CI 1.1–3.0). Industry funding was associated with a pro-industry result in both surgical trials (OR 8.0, 95% CI 1.1–53.2) and drug trials (OR 1.6, 95% CI 1.1–2.8); the difference between the two types of trial was not statistically significant (relative OR 5.0, 95% CI 0.7–37.5, p = 0.14).
Industry-funded trials, both medical and surgical, are more likely than non-industry-funded trials to be associated with statistically significant pro-industry findings.
Academics are debating whether industry funding influences research findings and conclusions.1,2,3,4,5,6,7 Three recent studies1,2,3 demonstrated a statistically significant association between industry funding and authors' conclusions in medical randomized controlled trials (RCTs). For example, Kjaergard and Als-Nielsen1 reviewed 159 RCTs in 12 medical specialties reported in the British Medical Journal. Adjusting for study quality, sample size, type of intervention and medical specialty, they found that industry funding was significantly associated with the authors' conclusions. However, another review4 of 100 RCTs published in general medical journals did not find an association between trial outcome and industry funding (p = 0.46). In a recent meta-analysis, Bekelman and colleagues8 pooled 1140 original studies and found a statistically significant association between industry sponsorship and pro-industry conclusions (pooled odds ratio [OR] 3.60, 95% confidence interval [CI] 2.63–4.91).
It is not known whether these findings apply to other areas, such as surgical RCTs. Although over 70% of drug trials are funded by industry, fewer surgical trials receive such financial support.4 To determine whether associations between industry funding and authors' conclusions can be generalized to other specialties, we examined a consecutive series of medical and surgical trials for such associations.
We identified RCTs published between January 1999 and June 2001 in 8 leading surgical journals (Journal of Bone and Joint Surgery [American and British volumes], Clinical Orthopaedics and Related Research, Acta Orthopaedica Scandinavica, Annals of Surgery, American Journal of Surgery, Plastic and Reconstructive Surgery and Journal of Neurosurgery) and 5 medical journals (Lancet, British Medical Journal, Journal of the American Medical Association, Annals of Internal Medicine and New England Journal of Medicine). The choice of surgical journals was based on perceived quality and impact factor revealed in a local survey of surgeons at the McMaster University Medical Centre. The choice of medical journals was also based on impact factor. We identified eligible RCTs by conducting manual and MEDLINE searches of these journals.
Each eligible study was independently reviewed for methodological quality; differences were resolved by discussion, and consensus was reached. We abstracted the following information from eligible trials: funding sources (industry-for-profit, not-for-profit, undeclared), statistical significance of outcome measures, study-quality score (using the Detsky quality index9), sample size, whether a priori sample-size calculations were conducted and type of intervention (drug trial, surgical trial and nonsurgical, nondrug therapy [e.g., physiotherapy trial, educational intervention]). Surgical interventions were in the fields of plastic, orthopedic, neuro- or general surgery and involved a comparison of any device, implant or technique that required a surgical procedure. A study was recorded as funded by industry if this was explicitly stated in the paper.
The Detsky quality index includes 15 items in 5 domains: randomization; outcome measures; eligibility criteria and reasons for patient exclusion (withdrawal or dropout); interventions; and statistical issues. Each domain has equal weight (4 points each). The final domain (statistical analysis) contains an extra question for trials in which findings are not significant. Thus, the maximum possible scores for statistically significant and statistically nonsignificant trials are 20 and 21, respectively. The 2 raters did not receive specific training in the use of this instrument; however, they carefully reviewed the guidelines for scoring with this index.
Our primary analysis was of studies that explicitly identified the primary outcome and reported it as statistically significant. In studies that did not explicitly identify a primary outcome, we defined a “positive” study as one with at least 1 statistically significant outcome measure. Our secondary analysis considered the number of statistically significant outcomes as a proportion of all outcomes measured in the study. In addition, we further examined whether the statistically significant outcome(s) were in favour of the industry sponsor.
We measured inter-reviewer agreement on the decision to include studies in the review and on data abstraction for 20 RCTs reviewed in duplicate. The inter-observer agreement was measured using the weighted kappa (κ) statistic with quadratic weights. Agreement between 2 reviewers in methodological quality scores (Detsky index) was evaluated using intraclass correlation coefficients (ICCs). We compared proportions using χ2 analysis and means using analysis of variance. The Bonferroni method was used to correct for multiple comparisons. We conducted both unadjusted and adjusted logistic regression analyses to determine variables associated with a statistically significant study result. Initially, univariable analyses were conducted to identify factors (i.e., industry funding, study quality, sample size, type of intervention) associated with a significant study result. In an adjusted analysis, we evaluated the effect of industry funding in a multivariable logistic regression model that included sample size, study design and type of intervention.
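The unadjusted analysis described above reduces to an odds ratio computed from a 2 × 2 table of funding status against trial result. As a minimal sketch, the following Python function computes an odds ratio and its 95% confidence interval on the log-odds scale; the counts in the example call are hypothetical placeholders for illustration, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a 95% CI for a 2x2 table:
    a = funded & positive result, b = funded & negative,
    c = unfunded & positive,      d = unfunded & negative."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study's data)
or_, lo, hi = odds_ratio_ci(48, 74, 53, 157)
print(f"OR {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # → OR 1.9, 95% CI 1.2-3.1
```

The interval is constructed on the log-odds scale, where the sampling distribution is approximately normal, and then exponentiated back to the odds-ratio scale.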
We identified 332 RCTs: 158 drug trials, 87 surgical trials and 87 trials of other therapies. Reviewers achieved an excellent level of reliability in the identification of potential studies, data abstraction and determination of study results (κ = 0.83, 0.84 and 0.88 respectively). Agreement on assessment of methodological quality was also substantial (ICC 0.79). Quality scores for surgical trials were significantly lower than those for drug trials or nonsurgical, nondrug trials (p < 0.01) (Table 1). Investigators of surgical trials were significantly less likely (p < 0.01) to report a priori sample-size calculations (24%) than investigators of either drug (61%) or nonsurgical, nondrug trials (57%).
In 122 (37%) of the 332 trials, authors declared industry funding (Table 1). Drug trials were significantly more likely than surgical or other trials to have declared industry funding (p < 0.01). Of the 122 trials declaring industry funding, 48 (39%) favoured the new treatment or industry product. An unadjusted analysis of these RCTs revealed that industry funding was significantly associated with a statistically significant result (OR 1.9, 95% CI 1.3–3.5) in favour of the new industry product. After adjustment for sample size, study quality and type of intervention, trials reporting industry funding remained significantly more likely to have a statistically significant pro-industry result (adjusted OR 1.8, 95% CI 1.1–3.0). Although the point estimate of the odds of a pro-industry conclusion in surgical trials was 5 times greater than the point estimate in drug trials (OR 8.0, 95% CI 1.1–53.2 v. OR 1.6, 95% CI 1.1–2.8 respectively), this difference was not statistically significant (relative OR 5.0, 95% CI 0.7–37.5, p = 0.14).
In a review of 332 RCTs in both surgery and medicine, the authors' declaration of industry funding was significantly associated with statistically significant pro-industry results. The association persisted after adjustment for variations in study quality and sample size across studies. Our results therefore do not support the belief that differences in study quality or sample size (study power) explain the differences in results between industry-funded and non-industry-funded trials.
However, our findings are limited by the quality of reporting of industry funding in the journals publishing these RCTs. We reviewed each journal's “Information to Authors” section to identify policies on conflict of interest and disclosure of funding. Of 8 surgical journals, 6 required disclosure of industry funding and 2 suggested that such disclosure was appropriate. These 2 journals (Journal of Bone and Joint Surgery [British volume] and Acta Orthopaedica Scandinavica) also allowed a no-response category regarding financial disclosure. Of the 5 medical journals, 3 required disclosure of industry funding and the other 2 (JAMA and New England Journal of Medicine) suggested disclosure.
Our study has some limitations, including our decision to assess trial quality with a composite scale rather than a component-oriented approach. Numerous checklists and scales have been reported for evaluating the quality of RCTs,10 and a major disadvantage of the Detsky scale, as with any such scale, is that assessments of quality depend on the information available in the published reports. As well, important aspects of study quality may not be captured by this index, such as the inappropriate use of placebos or inactive controls, or controls that are compromised by insufficient dosage or mode of administration. Composite quality scales such as the Detsky index may provide a useful overall assessment when comparing groups of trials, but they have been criticized; it has been proposed that evaluation would be more rigorous if the relevant methodological aspects were identified, ideally a priori, and assessed individually.11
Inferences about differences in the results of surgical and medical trials may be limited by sample size. Only 16 (18%) of all 87 identified surgical trials reported industry funding. Although the point estimate of the odds ratios suggests that surgical trials are more likely than medical trials to have pro-industry results, this difference was not statistically significant and larger studies are required to explore this finding.
Our study may be influenced by selection bias, as we elected to search only high-impact journals for trials. As well, our method of determining our primary outcome measure, although strengthened by substantial inter-observer reliability, may be questionable. Some authors2 have made use of a validated scale to grade studies' conclusions regarding the extent to which an experimental intervention was favoured,1 although others4,6,8 have used an approach similar to ours.
Our findings contradict those of some reports.4,5,6,7 Clifford and colleagues4 did not find an association between trial outcome and industry funding (p = 0.46); however, their study may have been limited by type II error and by limited disclosure of funding sources. Other previous studies support our results.1,2,3,8 In a recent meta-analysis, Bekelman and colleagues8 pooled 1140 original studies and found a statistically significant association between industry sponsorship and pro-industry conclusions (pooled OR 3.60, 95% CI 2.63–4.91). We pooled these results with those of Clifford and colleagues4 (100 trials) and our own using a random-effects model; pooling was deemed appropriate given the widely overlapping confidence intervals and the similarity of the point estimates. Our pooled sample of 1572 trials provides a current estimate of the impact of industry funding on authors' conclusions (pooled OR 2.3, 95% CI 1.3–4.1, heterogeneity p = 0.02) (Fig. 1).
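The pooling step described above can be illustrated with a DerSimonian–Laird random-effects model, a standard approach for combining odds ratios across studies. The sketch below back-calculates each study's within-study variance from the width of its reported confidence interval; the inputs in the example call are hypothetical placeholders, not the actual study-level data:

```python
import math

def pool_random_effects(ors, cis, z=1.96):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    ors: list of point estimates; cis: list of (lower, upper) 95% CIs.
    Returns (pooled OR, CI lower, CI upper)."""
    y = [math.log(o) for o in ors]                       # log odds ratios
    v = [((math.log(hi) - math.log(lo)) / (2 * z)) ** 2  # variance from CI width
         for lo, hi in cis]
    w = [1 / vi for vi in v]                             # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)              # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]               # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(mu), math.exp(mu - z * se), math.exp(mu + z * se)

# Example with two hypothetical odds ratios and their 95% CIs
print(pool_random_effects([3.6, 1.9], [(2.63, 4.91), (1.3, 3.5)]))
```

When the between-study heterogeneity estimate is zero, the model collapses to fixed-effect (inverse-variance) pooling; otherwise the extra variance widens the pooled confidence interval.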
Our findings suggest that industry funding has a significant influence on the results of both surgical trials and drug trials. Perhaps by careful selection, industry funds trials that are most likely to reveal a benefit of the experimental intervention. Results of industry-funded trials may be influenced by inappropriate choice of comparator intervention2 or by publication bias. Future exploration of the complex relation between industry-funded trials and authors' conclusions will shed further light on this issue.
See related article page 481
Mohit Bhandari is funded, in part, by a Clinical Scientist Fellowship, Department of Clinical Epidemiology and Biostatistics, McMaster University. Jason Busse is funded by a Canadian Institutes of Health Research Fellowship Award. P.J. Devereaux is funded by a Heart and Stroke Foundation of Canada/Canadian Institutes of Health Research Fellowship Award.
This article has been peer reviewed.
Contributors: Mohit Bhandari drafted the manuscript and was responsible for the study concept, for critical revisions and for data acquisition, analysis and interpretation. He provided statistical expertise, administrative, technical and material support and study supervision. Emil Schemitsch was responsible for study concept and design, data analysis and interpretation, and critical revisions. Jason Busse and Diane Jackowski were responsible for data acquisition and critical revisions. Derek Mears was responsible for data acquisition. P.J. Devereaux was responsible for data analysis and interpretation, for critical revisions and for providing statistical expertise. Victor Montori and Holger Schünemann were responsible for data analysis and interpretation and for critical revisions. Sheila Sprague was responsible for critical revisions and provided administrative, technical and material support. Diane Heels-Ansdell provided statistical expertise, analysis and interpretation of data and critical revision of the manuscript. All authors gave approval of the final version to be published.
Competing interests: None declared.
Correspondence to: Dr. Mohit Bhandari, Department of Clinical Epidemiology and Biostatistics, McMaster University Medical Centre, 1200 Main St. W, Rm. 2C3, Hamilton ON L8N 3Z5; fax 905 524-3841; bhandari@sympatico.ca