In this randomized study, we found that practicing internists understood and appreciated methodologic differences when they read abstracts describing hypothetical studies of new drugs. They discounted small, poorly designed trials and assigned greater validity to large trials that tested clinical end points. We also found that respondents downgraded the credibility of industry-funded trials, as compared with the same trials randomly characterized as having NIH funding or having no source of support listed. The magnitude of this reduction in perceived methodologic rigor was about the same as that for low-rigor trials as compared with medium-rigor trials. Physicians’ skepticism of industry-funded research affected their responses to high-rigor and low-rigor trials similarly.
Well-publicized controversies related to industry-funded research may help explain these findings. Reports have emerged of trials that withheld critical data35–37 or that presented positive results while withholding negative results.38,39 Other concerns stem from reports of industry-financed articles that were ghostwritten40 or published primarily as instruments of marketing.41 Physicians’ skepticism of industry-funded research may be a response to such trends.
These findings have important implications. Despite the occasional scientific and ethical lapses in trials funded by pharmaceutical companies, it is also true that the pharmaceutical industry has supported many major drug trials that have been of particular clinical importance.42 Excessive skepticism concerning trials supported by industry could hinder the appropriate translation of the results into practice. For example, after publishing the results of a large, well-designed trial describing a new use for a widely prescribed class of drugs,43 a leading biomedical journal noted that many of its readers believed that the results of the trial did not justify a change in clinical management, citing industry funding as a key reason for this conclusion.44
The results of our study were based on physicians’ responses to descriptions of hypothetical trials of three new drugs, each of which was described in a single trial with varying attributes,45 whereas actual prescribing behavior integrates drug information from many sources. Nevertheless, prescribing decisions made when a drug is first approved may rely principally on a single published study, as presented in these scenarios. Our response rate was similar to the mean rate in published surveys of physicians,46 and our respondents were similar to other internists in terms of the characteristics we measured.47 However, unmeasured variables may have differed between respondents and nonrespondents, contributing to bias in our sample. Finally, the findings from this survey of board-certified internists may not be generalizable to other specialties.48
Pharmaceutical companies seeking to enhance the appropriate use of important new products or to expand the appropriate uses of existing products must address the attitudes that our survey revealed,49,50 so that the credibility of the results of industry-supported trials is more likely to be based on methodologic rigor than on funding sources. Exactly how to change such attitudes was not the subject of this research. Currently, journal reviewers and editors, those who conduct systematic reviews, or even interested physicians can refer to the ClinicalTrials.gov database to see whether trial data as reported reflect the planned study design.51 This retrospective check could alleviate concerns about the possibility that trial outcomes were changed after the data were gathered and analyzed. However, the information provided to this database may have missing values or may be of poor quality.51,52 We do not have empirical data that address whether concordance between the study design and the reporting of results influences physicians’ perceptions of methodologic rigor.
We found that physicians assigned the highest level of credibility to NIH-funded trials. Thus, an increase in the number of clinical trials funded by the NIH or by the new Patient-Centered Outcomes Research Institute might reduce clinicians’ skepticism and lead to more data-driven changes in practice.53 Despite the initial financial outlay, such publicly funded trials are likely to save more than they cost.54 Partnerships between the NIH and industry55 may also serve this purpose if their jointly funded trials feature characteristics that are a routine part of NIH-funded trials, including data and safety monitoring boards and public reporting of protocols.
It is reassuring that the physicians in our study were attentive to the level of methodologic rigor. They also clearly took notice of funding sources for trials, according greater credibility and import to NIH-funded research than to industry-funded research. Although attention to potential sources of bias is necessary, such skepticism apparently can also reduce the credibility and acceptance of even high-quality research that is industry-supported. Financial disclosure is important, but more fundamental strategies, such as avoiding selective reporting of results in reports of industry-sponsored trials, ensuring protocol and data transparency, and providing an independent review of end points, will be needed to more effectively promote the translation of high-quality clinical trials — whatever their funding source — into practice.