Our study evaluated the publication of 909 clinical trials identified in FDA medical and statistical review documents in support of 90 new drug products approved between 1998 and 2000. After a minimum of 5.5 y of follow-up after FDA approval, we identified publications in the medical literature for 43% of the trials. For pivotal trials, which are more clinically informative than nonpivotal trials, we found publications for 76% of the trials. For one of the 90 approved new drugs, we could not find any published supporting trial. We also found strong evidence of publication bias: trials with statistically significant results were more likely to be published than trials with nonsignificant results, as were trials with larger sample sizes. There was a weak suggestion that the effect of sample size might be smaller among trials with statistically significant findings, but p-values for such interactions did not reach statistical significance. Our study therefore shows that previous findings of publication bias among trials supporting the regulatory applications of selected drug classes (e.g., antidepressants) [10] hold broadly across a diverse group of drug classes. Publication bias may create an inappropriately favorable record in the medical literature of a drug's true risk/benefit profile relative to other standard therapies, and may thus lead to preferential prescribing of newer, more expensive treatments. We could not test whether similar publication bias exists for trials supporting unsuccessful new drug applications because adequate information about these applications was unavailable from the FDA or other government or commercial sources.
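To make the publication-bias comparison concrete, the sketch below computes a publication odds ratio from a hypothetical 2×2 table of trials cross-classified by publication status and statistical significance. All counts are invented for illustration and are not the study's actual data; the calculation simply shows the standard odds-ratio and log-OR confidence-interval arithmetic underlying such a comparison.

```python
import math

# Hypothetical counts (NOT the study's actual data): trials by
# publication status and significance of the primary outcome.
published = {"significant": 300, "nonsignificant": 60}
unpublished = {"significant": 180, "nonsignificant": 140}

def publication_odds(group):
    """Odds that a trial in `group` was published."""
    return published[group] / unpublished[group]

# Odds ratio: how much more likely publication is for significant trials.
odds_ratio = publication_odds("significant") / publication_odds("nonsignificant")

# Approximate 95% CI via the usual log-odds-ratio standard error.
se_log_or = math.sqrt(sum(1 / n for n in (*published.values(), *unpublished.values())))
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

With these invented counts the odds ratio is well above 1, the qualitative pattern the study reports for significant versus nonsignificant trials.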
We also found the reporting of clinical trials in the FDA review documents and drug labels to be variable in detail and content, and not an adequate substitute for full publication in the medical literature. For example, reporting ranged from detailed descriptions of a trial's study design, intervention, patient population, statistical analyses, adverse events, primary outcomes, and other results, to brief statements that only summarized a trial's primary outcome. We also noted sections of redacted information in the FDA review documents. Neither the FDA review documents nor the drug labels followed a standard format for reporting a trial's methodology and results. Use of guidelines such as the revised CONSORT (Consolidated Standards of Reporting Trials) statement [18] may help to improve the quality and completeness of trial reporting in FDA review documents, as others have proposed [19].
Our study has several limitations. First, we may have misclassified some published trials as unpublished because of difficulties in matching publications to incomplete trial descriptions in the FDA documents. Also, we did not search other databases such as the European database EMBASE, nor did we contact investigators or sponsors to determine publication status or to verify that a trial was not published or in press. Thus, we are likely to have underestimated the overall publication rate of these trials. However, we believe that for clinicians and policy makers, the most relevant publication rate is not the overall rate but the publication rate in journals that a typical clinician, consumer, or policy maker could access through a reasonable literature search. We believe our searches of PubMed, the Cochrane Library, and CINAHL reflect such a reasonable search. It would not be reasonable to expect a clinician, consumer, or policy maker to contact investigators or sponsors to determine a trial's publication status.
A second limitation is that our follow-up time of 5.5 to 8.5 y after new drug approval may be inadequate. However, we found that publications occurred almost exclusively within the first 3 y after approval, making it unlikely that longer follow-up would yield many additional publications. Third, time-to-publication is ideally counted from the date of trial completion, but we were unable to obtain these dates reliably. Moreover, we believe the month of approval is the most relevant time point at which trial results should be available to the public. Fourth, our study focused on publications in the medical literature, but some companies have started making their trial results publicly available directly on their own Web sites. For example, the pharmaceutical industry's Clinical Study Results Database contains summaries of “hypothesis-testing” trials completed since October 2002 for many pharmaceutical products [20]. We searched this database for the 515 unpublished trials and found summaries for 22 (4%) of them. The effect of this and other related Web sites on public disclosure of trial data submitted to the FDA requires further research, as the information reported in these databases may not be peer reviewed and there is no guarantee that reporting is complete for all relevant data. Fifth, we could not determine the statistical significance of the findings of a substantial proportion of the studies. We did, however, obtain qualitatively similar results when we performed a sensitivity analysis counting unknown statistical significance as a valid third category. Finally, our findings cannot be generalized to any specific product, company, institution, organization, or investigator.
Despite these limitations, our study provides ample evidence that in the years immediately following FDA approval that are most relevant to public health, the publication of trials supporting approved new drugs is incomplete and selective. Potential reasons for this publication bias include the tendency of investigators and sponsors to delay or not submit trial reports [21], and the motivation of commercial sponsors to publish positive trials in prestigious journals to obtain article reprints for marketing [23]. Bias in editorial decisions toward publishing positive results is also possible, although there is evidence suggesting that this is not the case [24]. Regardless of the cause, publication bias harms the public good by impairing the ability of clinicians and patients to make informed clinical decisions, and the ability of scientists to design safer and more efficient trials based on past findings. Publication bias can thus be considered a form of scientific misconduct [5].
Potential Effects of Mandatory Results Reporting on Publication Bias
As discussed above, the FDA Amendments Act of 2007 mandates basic public results reporting for all trials supporting FDA-approved drugs and devices. Our study shows that this legislation was necessary because current reporting is marked by pervasive publication bias favoring positive over negative trials. Moreover, because published trial reports are often incomplete [26] and have been shown to selectively report favorable outcome results [27], the published evidence supporting FDA-approved drugs may be even more skewed than our results suggest. By ensuring the reporting of all predeclared primary and secondary outcomes regardless of their direction of benefit, the new law should go a long way toward correcting this skew.
We anticipate that the new law will also speed the dissemination of trial information. Currently, according to our data, 40% of the trials that were eventually published appeared more than 1 y postapproval (34% of pivotal trials). Under the new law, basic results for all trials must be posted by 1 y after trial completion or approval of the drug or device. For any trial a sponsor wishes to publish, the manuscript will therefore have to be submitted for peer review before the 1-y postapproval mark if the sponsor hopes to allay journal concerns about publishing trials whose primary and secondary outcome results have already been publicly posted. Thus, we would expect the time-to-publication curves to shift left.
Paradoxically, however, the new law may increase rather than decrease publication bias. Might sponsors feel less compelled to publish equivocal trials because the basic results will already be in the public domain? Might the time pressure to submit manuscripts by 1 y postapproval focus sponsor efforts even more on submitting positive trials and trials of greatest interest to journals? Might journals, if they accept manuscripts of trials with publicly posted results, change the criteria by which publication importance is judged, and how might this affect acceptance rates [28]? When more detailed protocol information must also be posted on ClinicalTrials.gov, starting no later than October 2010, the effect on publication practices will be even harder to anticipate. Our data document the current degree of publication bias and provide a baseline for assessing the evolving publication practices of trials supporting FDA-approved drugs as mandatory basic results reporting takes effect.