1.  The natural history of conducting and reporting clinical trials: interviews with trialists 
Trials  2015;16:16.
Background
To investigate the nature of the research process as a whole, factors that might influence the way in which research is carried out, and how researchers ultimately report their findings.
Methods
Semi-structured qualitative telephone interviews with authors of trials, identified from two sources: trials published since 2002 included in Cochrane systematic reviews selected for the ORBIT project; and trial reports randomly sampled from 14,758 indexed on PubMed over the 12-month period from August 2007 to July 2008.
Results
A total of 268 trials were identified for inclusion, 183 published since 2002 and included in the Cochrane systematic reviews selected for the ORBIT project and 85 randomly selected published trials indexed on PubMed. The response rate from researchers in the former group was 21% (38/183) and in the latter group was 25% (21/85). Overall, 59 trialists were interviewed from the two different sources. A number of major but related themes emerged regarding the conduct and reporting of trials: establishment of the research question; identification of outcome variables; use of and adherence to the study protocol; conduct of the research; and reporting and publishing of findings. Our results reveal that, although a substantial proportion of trialists identify outcome variables based on their clinical experience and the knowledge of experts in the field, there can be insufficient reference to previous research in the planning of a new trial. We have revealed problems with trial recruitment: not reaching the target sample size, over-estimation of recruitment potential, and recruiting clinicians not being in equipoise. We found wide variation in the completeness of protocols in detailing the study rationale, proposed methods, trial organisation, and ethical considerations.
Conclusion
Our results confirm that the conduct and reporting of some trials can be inadequate. Interviews with researchers identified aspects of clinical research that can be especially challenging: establishing appropriate and relevant outcome variables to measure, use of and adherence to the study protocol, recruiting of study participants and reporting and publishing the study findings. Our trialists considered the prestige and impact factors of academic journals to be the most important criteria for selecting those to which they would submit manuscripts.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-014-0536-6) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-014-0536-6
PMCID: PMC4322554  PMID: 25619208
Qualitative; Interviews; Trialists; Research reporting; Recruitment; Trial protocols; Equipoise
2.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted that included cohort studies assessing any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., the protocol compared with the trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included, all of which reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), the handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) for statistical analyses, 46% (36/79) to 82% (23/28) for adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) for subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, so the results of the studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
3.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
4.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
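The phrase "OR 1.33 per 2-fold increase in sample size" follows from entering sample size on a log2 scale in the logistic model, so that each one-unit increase in the predictor corresponds to a doubling of trial size. Below is a minimal Python sketch of that style of analysis; the data frame, column names, and simulated values are hypothetical placeholders for illustration, not the study's data or code.

```python
# Hypothetical illustration of a publication-status logistic regression:
# publication modeled on statistical significance, pivotal status, and
# log2(sample size), so exp(coef) is the OR per 2-fold increase in size.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_trials = 909  # cohort size taken from the abstract above

df = pd.DataFrame({
    "published":   rng.binomial(1, 0.43, n_trials),  # 1 = published within 5 y
    "significant": rng.binomial(1, 0.5, n_trials),   # significant primary result
    "pivotal":     rng.binomial(1, 0.37, n_trials),  # pivotal-trial status
    "n_patients":  np.round(rng.lognormal(5.0, 1.0, n_trials)) + 10,
})
df["log2_n"] = np.log2(df["n_patients"])  # one unit = doubling of sample size

fit = smf.logit("published ~ significant + log2_n + pivotal", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```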
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision is included in the FDA "Summary Basis of Approval" and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials in which their drug performed well; as a consequence, trials in which the drug did no better than the standard treatment or had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all of the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
5.  Nonadherence to treatment protocol in published randomised controlled trials: a review 
Trials  2012;13:84.
This review aimed to ascertain the extent to which nonadherence to treatment protocol is reported and addressed in a cohort of published analyses of randomised controlled trials (RCTs). One hundred publications of RCTs, randomly selected from those published in BMJ, New England Journal of Medicine, the Journal of the American Medical Association, and The Lancet during 2008, were reviewed to determine the extent and nature of reported nonadherence to treatment protocol, and whether statistical methods were used to examine the effect of such nonadherence on both benefit and harms analyses. We also assessed the quality of trial reporting of treatment protocol nonadherence and the quality of reporting of the statistical analysis methods used to investigate such nonadherence. Nonadherence to treatment protocol was reported in 98 of the 100 trials, but reporting on such nonadherence was often vague or incomplete. Forty-two publications did not state how many participants started their randomised treatment. Reporting of treatment initiation and completeness was judged to be inadequate in 64% of trials with short-term interventions and 89% of trials with long-term interventions. More than half (51) of the 98 trials with treatment protocol nonadherence implemented some statistical method to address this issue, most commonly based on per protocol analysis (46) but often labelled as intention to treat (ITT) or modified ITT (23 analyses in 22 trials). The composition of the analysis sets for benefit outcomes was not explained in 57% of trials, and 62% of trials that presented harms analyses did not define harms analysis populations. The majority of defined harms analysis populations (18 out of 26 trials, 69%) were based on actual treatment received, while the majority of trials with undefined harms analysis populations (31 out of 43 trials, 72%) appeared to analyse harms using the ITT approach. Adherence to randomised intervention is poorly considered in the reporting and analysis of published RCTs. The majority of trials are subject to various forms of nonadherence to treatment protocol, and though trialists deal with this nonadherence using a variety of statistical methods and analysis populations, they rarely consider the potential bias this introduces. There is a need for increased awareness of more appropriate causal methods to adjust for departures from treatment protocol, as well as guidance on the appropriate analysis population to use for harms outcomes in the presence of such nonadherence.
doi:10.1186/1745-6215-13-84
PMCID: PMC3492022  PMID: 22709676
Causal effect modelling; nonadherence; non-compliance; trial reporting; trial analysis
6.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
7.  Obstacles to researching the researchers: A case study of the ethical challenges of undertaking methodological research investigating the reporting of randomised controlled trials 
Trials  2010;11:28.
Background
Recent cohort studies of randomised controlled trials have provided evidence of within-study selective reporting bias, whereby statistically significant outcomes are more likely to be completely reported than non-significant outcomes. Bias resulting from selective reporting can affect meta-analyses, influencing the conclusions of systematic reviews and, in turn, evidence-based clinical practice guidelines.
In 2006 we received funding to investigate whether there was evidence of within-study selective reporting in a cohort of RCTs submitted to New Zealand Regional Ethics Committees in 1998/99. This research involved accessing ethics applications, their amendments, and annual reports, and comparing these with the corresponding publications. For practical and scientific reasons, we did not plan to obtain informed consent from trialists to view their ethics applications.
In November 2006 we sought ethical approval for the research from our institutional ethics committee. The Committee declined our application on the grounds that we were not obtaining informed consent from the trialists to view their ethics applications. This initiated a seventeen-month process to obtain ethical approval. This publication outlines what we planned to do and the issues we encountered, discusses the legal and ethical issues, and presents some potential solutions.
Discussion and conclusion
Methodological research such as this has the potential for public benefit, and there is little or no harm to the participants (trialists). Further, New Zealand has freedom of information legislation which, in this circumstance, unambiguously provided rights of access to and use of the information in the ethics applications. The decision of our institutional ethics committee defeated this right and did not recognise the observational nature of this research.
Methodological research such as this can be used to develop processes to improve the quality of research reporting. Greater recognition of the potential benefit of such research is perhaps needed, both in the broader research community and among those who sit on ethics committees. In addition, changes to the ethical review process should be considered that separate those who review proposals to undertake methodological research using ethics applications from those responsible for reviewing ethics applications for trials. Finally, we contend that the research community could benefit from the quality improvement approaches used in allied sectors.
doi:10.1186/1745-6215-11-28
PMCID: PMC2846843  PMID: 20302671
8.  Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors 
BMJ : British Medical Journal  2005;330(7494):753.
Objective To examine the extent and nature of outcome reporting bias in a broad cohort of published randomised trials.
Design Retrospective review of publications and follow up survey of authors.
Cohort All journal articles of randomised trials indexed in PubMed whose primary publication appeared in December 2000.
Main outcome measures Prevalence of incompletely reported outcomes per trial; reasons for not reporting outcomes; association between completeness of reporting and statistical significance.
Results 519 trials with 553 publications and 10 557 outcomes were identified. Survey responders (response rate 69%) provided information on unreported outcomes but were often unreliable—for 32% of those who denied the existence of such outcomes there was evidence to the contrary in their publications. On average, over 20% of the outcomes measured in a parallel group trial were incompletely reported. Within a trial, such outcomes had a higher odds of being statistically non-significant compared with fully reported outcomes (odds ratio 2.0 (95% confidence interval 1.6 to 2.7) for efficacy outcomes; 1.9 (1.1 to 3.5) for harm outcomes). The most commonly reported reasons for omitting efficacy outcomes included space constraints, lack of clinical importance, and lack of statistical significance.
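For readers unfamiliar with how such odds ratios are obtained, the sketch below shows the standard 2×2-table calculation with a Wald confidence interval. It is a simplified illustration with made-up counts (the study's actual analysis compared outcomes within trials), not the authors' code.

```python
# Odds ratio and 95% Wald CI from a 2x2 table: reporting completeness
# (incomplete vs. full) against statistical significance (no vs. yes).
# All counts below are hypothetical placeholders.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = incomplete & non-significant, b = incomplete & significant,
    c = full & non-significant,          d = full & significant."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# With these illustrative counts the OR works out to 2.0, the same order
# of magnitude as the efficacy-outcome estimate quoted above.
print(odds_ratio_ci(120, 80, 300, 400))
```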
Conclusions Incomplete reporting of outcomes within published articles of randomised trials is common and is associated with statistical non-significance. The medical literature therefore represents a selective and biased subset of study outcomes, and trial protocols should be made publicly available.
doi:10.1136/bmj.38356.424606.8F
PMCID: PMC555875  PMID: 15681569
9.  Predictors of clinical trial data sharing: exploratory analysis of a cross-sectional survey 
Trials  2014;15(1):384.
Background
A number of research funders, biomedical journals, pharmaceutical companies, and regulatory agencies have adopted policies advocating or mandating that clinical trialists share data with external investigators. However, no prior research has examined whether certain characteristics of trialists or their trials are associated with more unfavorable perceptions of data sharing. We therefore sought to address this question.
Methods
We conducted an exploratory analysis of responses to a cross-sectional, web-based survey. The survey sample consisted of trialists who were corresponding authors of clinical trials published in 2010 or 2011 in one of the six general medical journals with the highest impact factors in 2011. The following key characteristics were examined: trialists' academic productivity and geographic location, trial funding source and size, and the journal in which the trial was published. Main outcome measures included support for data sharing in principle, concerns with data sharing through repositories, and reasons for granting or denying requests. Chi-squared tests and Fisher's exact tests were used to assess statistical significance.
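As a rough illustration of the tests named here, the snippet below applies a chi-squared test and Fisher's exact test to a 2×2 table; the counts are hypothetical placeholders loosely echoing the 58% versus 31% contrast reported in the Results, not the survey data.

```python
# Chi-squared and Fisher's exact tests on a hypothetical 2x2 table of
# survey responses (region x whether the trialist has shared/would share
# data for academic recognition). Counts are illustrative only.
from scipy.stats import chi2_contingency, fisher_exact

table = [[58, 42],   # Western Europe: would share / would not
         [31, 69]]   # United States or Canada: would share / would not

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # exact test, useful for small cells

print(f"chi-squared: chi2 = {chi2:.2f}, p = {p_chi2:.4f}")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```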
Results
Of 683 potential respondents, 317 completed the survey (response rate 46%). Both support for data sharing and reporting of specific concerns with sharing data through repositories exceeded 75%, but neither differed by trialist or trial characteristics. However, there were some significant differences in explicit reasons to share or withhold data. Respondents located in Western Europe more frequently indicated that they have shared or would share data in order to receive academic benefits or recognition than did respondents located in the United States or Canada (58% versus 31%). In addition, the most academically productive respondents less frequently indicated that they have withheld or would withhold data in order to protect research subjects than did less academically productive respondents (24% versus 40%), as did respondents who received industry funding compared with those who had not (24% versus 43%).
Conclusions
Respondents indicated strong support for data sharing overall. There were few notable differences in how trialists viewed the benefits and risks of data sharing when categorized by trialists’ academic productivity and geographic location, trial funding source and size, and the journal in which it was published.
Electronic supplementary material
The online version of this article (doi:10.1186/1745-6215-15-384) contains supplementary material, which is available to authorized users.
doi:10.1186/1745-6215-15-384
PMCID: PMC4192345  PMID: 25277128
Data sharing; Clinical trial; Data repository
10.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
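The inter-coder agreement quoted here (weighted kappa 0.80 to 0.97) can be reproduced in spirit with a few lines of Python. The ratings below are invented placeholders, and quadratic weighting is just one common choice, since the abstract does not specify the weighting scheme used.

```python
# Weighted Cohen's kappa for agreement between two coders rating trial
# outcomes on an ordinal scale. Ratings are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

# 0 = not favorable, 1 = inconclusive, 2 = favorable to the test drug
coder_1 = [2, 1, 0, 2, 2, 1, 0, 0, 2, 1]
coder_2 = [2, 1, 0, 2, 1, 1, 0, 1, 2, 1]

kappa = cohen_kappa_score(coder_1, coder_2, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```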
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at the outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. When done properly, randomization also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still allow bias to creep in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias: studies sponsored by drug companies seem to favor the sponsor's drug more often than trials not sponsored by drug companies.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood amongst people who are at risk of heart and other types of disease. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study; of these, 95 declared drug company funding, 23 declared government or other nonprofit funding, and 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug: the impact of the journal publishing the results, the size of the trial, and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Wikipedia has a page on statins (note: Wikipedia is a free online encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
11.  Methods of Blinding in Reports of Randomized Controlled Trials Assessing Pharmacologic Treatments: A Systematic Review 
PLoS Medicine  2006;3(10):e425.
Background
Blinding is a cornerstone of therapeutic evaluation because lack of blinding can bias treatment effect estimates. An inventory of the blinding methods would help trialists conduct high-quality clinical trials and readers appraise the quality of results of published trials. We aimed to systematically classify and describe methods to establish and maintain blinding of patients and health care providers and methods to obtain blinding of outcome assessors in randomized controlled trials of pharmacologic treatments.
Methods and Findings
We undertook a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments with blinding published in 2004 in high-impact-factor journals from Medline and the Cochrane Methodology Register. We used a standardized data collection form to extract data. The blinding methods were classified according to whether they primarily (1) established blinding of patients or health care providers, (2) maintained the blinding of patients or health care providers, or (3) obtained blinding of assessors of the main outcomes. We identified 819 articles, with 472 (58%) describing the method of blinding. Methods to establish blinding of patients and/or health care providers mainly concerned treatments provided in identical form, specific methods to mask some characteristics of the treatments (e.g., added flavor or opaque coverage), and use of double-dummy procedures or simulation of an injection. Methods to avoid unblinding of patients and/or health care providers involved use of an active placebo, centralized assessment of side effects, informing patients only in part about the potential side effects of each treatment, centralized adapted dosage, or provision of sham results of complementary investigations. The methods reported for blinding outcome assessors mainly relied on centralized assessment of complementary investigations, clinical examination (i.e., use of video, audiotape, or photography), or adjudication of clinical events.
Conclusions
This review classifies blinding methods and provides a detailed description of methods that could help trialists overcome some barriers to blinding in clinical trials and help readers interpret the quality of pharmacologic trials.
Following a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments involving blinding, a classification of blinding methods is proposed.
Editors' Summary
Background.
In evidence-based medicine, good-quality randomized controlled trials are generally considered to be the most reliable source of information about the effects of different treatments, such as drugs. In a randomized trial, patients are assigned to receive one treatment or another by the play of chance. This technique helps ensure that the two groups of patients receiving the different treatments are equivalent at the start of the trial. Proper randomization also prevents doctors from controlling or affecting which treatment patients get, which could distort the results. An additional tool used to make trials more precise is “blinding.” Blinding involves taking steps to prevent patients, doctors, or other people involved in the trial (e.g., those recording measurements) from finding out which patients got what treatment. Properly done, blinding should make the results of a trial more accurate. This is because, in an unblinded study, participants may respond better if they know they have received a promising new treatment (or worse if they only got a placebo or an old drug), and doctors may “want” a particular treatment to do better in the trial, so that unthinking bias could creep into their measurements or actions; the same applies to practitioners and researchers who record patients' outcomes in the trial. However, blinding is not a simple, single step; the people carrying out the trial often have to set up a variety of different procedures that depend on the type of trial being done.
Why Was This Study Done?
The researchers here wanted to thoroughly examine different methods that have been used to achieve blinding in randomized trials of drug treatments, and to describe and classify them. They hoped that a better understanding of the different blinding methods would help people doing trials to design better trials in the future, and also help readers to interpret the quality of trials that had been done.
What Did the Researchers Do and Find?
This group of researchers conducted what is called a “systematic review.” They systematically searched the published medical literature to find all randomized, blinded drug trials published in 2004 in a number of different “high-impact” journals (journals whose articles are often cited in other articles). Then, the researchers classified information from the published trial reports. The researchers ended up with 819 trial reports, and nearly 60% of them described how blinding was done. Their classification of blinding was divided into three main areas. First, they detailed methods used to hide which drugs are given to particular patients, such as preparing identically appearing treatments, using strong flavors to mask taste, matching the colors of pills, and using saline injections. Second, they described a number of methods that could be used to reduce the risk of unblinding (of doctors or patients), such as using an “active placebo” (a sugar pill that mimics some of the expected side effects of the drug treatment). Finally, they defined methods for blinded measurement of outcomes (such as using a central committee to collect data).
What Do These Findings Mean?
The researchers' classification will help people to work out how different techniques can be used to achieve, and keep, blinding in a trial. This will assist others to understand whether any particular trial was likely to have been blinded properly, and therefore work out whether the results are reliable. The researchers also suggest that, generally, blinding methods are not described in enough detail in published scientific papers, and recommend that guidelines for describing results of randomized trials be improved.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030425.
The James Lind Library has been created to help patients and researchers understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
ClinicalTrials.gov, a trial registry created by the US National Institutes of Health, has an introduction to understanding clinical trials
National Electronic Library for Health introduction to controlled clinical trials
doi:10.1371/journal.pmed.0030425
PMCID: PMC1626553  PMID: 17076559
12.  Outcome measures in rheumatoid arthritis randomised trials over the last 50 years 
Trials  2013;14:324.
Background
The development and application of standardised sets of outcomes to be measured and reported in clinical trials have the potential to increase the efficiency and value of research. One of the most notable of the current outcome sets began nearly 20 years ago: the World Health Organization and International League of Associations for Rheumatology core set of outcomes for rheumatoid arthritis clinical trials, originating from the OMERACT (Outcome Measures in Rheumatology) Initiative. This study assesses the use of this core outcome set by randomised trials in rheumatology.
Methods
An observational review was carried out of 350 randomised trials for the treatment of rheumatoid arthritis identified through The Cochrane Library (up to and including September 2012 issue). Reports of these trials were evaluated to determine whether or not there were trends in the proportion of trials reporting on the full set of core outcomes over time. Researchers who conducted trials after the publication of the core set were contacted to assess their awareness of it and to collect reasons for non-inclusion of the full core set of outcomes in the study.
Results
Since the introduction of the core set of outcomes for rheumatoid arthritis, the consistency of measurement of the core set of outcomes has improved, although variation in the choice of measurement instrument remains. The majority of trialists who responded said that they would consider using the core outcome set in the design of a new trial.
Conclusions
This observational review suggests that a higher percentage of trialists conducting trials in rheumatoid arthritis are now measuring the rheumatoid arthritis core outcome set. Core outcome sets have the potential to improve the evidence base for health care, but consideration must be given to the methods for disseminating their availability amongst the relevant communities.
doi:10.1186/1745-6215-14-324
PMCID: PMC3852710  PMID: 24103529
COMET; Core outcome set; OMERACT; Outcome reporting bias; Rheumatoid arthritis
13.  Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database 
PLoS Medicine  2012;9(3):e1001189.
A comparison of data held by the U.S. Food and Drug Administration (FDA) against data from journal reports of clinical trials enables estimation of the extent of publication bias for antipsychotics.
Background
Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy.
Methods and Findings
FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant.
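The effect sizes quoted above are weighted summaries across trials. A minimal sketch of simple inverse-variance (fixed-effect) pooling follows; the per-trial effect sizes and standard errors are invented and the method shown is a generic one, not necessarily the authors' exact procedure.

```python
# Minimal sketch, assuming inverse-variance (fixed-effect) pooling, of how
# a summary effect size and 95% CI are obtained for a group of trials.
# All inputs are invented.
import numpy as np

def pooled_effect(es, se):
    es, se = np.asarray(es), np.asarray(se)
    w = 1 / se**2                       # inverse-variance weights
    est = np.sum(w * es) / np.sum(w)    # weighted mean effect size
    se_pool = np.sqrt(1 / np.sum(w))
    return est, est - 1.96 * se_pool, est + 1.96 * se_pool

print(pooled_effect([0.50, 0.45, 0.40], [0.08, 0.10, 0.09]))  # "published" trials
print(pooled_effect([0.20, 0.30], [0.12, 0.15]))              # "unpublished" trials
```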
Conclusions
The magnitude of publication bias found for antipsychotics was less than that found previously for antidepressants, possibly because antipsychotics demonstrate superiority to placebo more consistently. Without increased access to regulatory agency data, publication bias will continue to blur distinctions between effective and ineffective drugs.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health-care professionals will ensure that they get the best available treatment. But how do clinicians know which treatment is likely to be most effective? In the past, clinicians used their own experience to make such decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the efficacy and safety of medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results from clinical trials are published in an unbiased manner. Unfortunately, “publication bias” is common. For example, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished. Moreover, published trials can be subject to outcome reporting bias—the publication may only include those trial outcomes that support the use of the new treatment rather than presenting all the available data.
Why Was This Study Done?
If only strongly positive results are published and negative results and side-effects remain unpublished, a drug will seem safer and more effective than it is in reality, which could affect clinical decision-making and patient outcomes. But how big a problem is publication bias? Here, researchers use US Food and Drug Administration (FDA) reviews as a benchmark to quantify the extent to which publication bias may be altering the apparent efficacy of second-generation antipsychotics (drugs used to treat schizophrenia and other mental illnesses that are characterized by a loss of contact with reality). In the US, all new drugs have to be approved by the FDA before they can be marketed. During this approval process, the FDA collects and keeps complete information about premarketing trials, including descriptions of their design and prespecified outcome measures and all the data collected during the trials. Thus, a comparison of the results included in the FDA reviews for a group of trials and the results that appear in the literature for the same trials can provide direct evidence about publication bias.
What Did the Researchers Do and Find?
The researchers identified 24 FDA-registered premarketing trials that investigated the use of eight second-generation antipsychotics for the treatment of schizophrenia or schizoaffective disorder. They searched the published literature for reports of these trials, and, by comparing the results of these trials according to the FDA with the results in the published articles, they examined the relationship between the study outcome (did the FDA consider it positive or negative?) and publication and looked for outcome reporting bias. Four of the 24 FDA-registered trials were unpublished. Three of these unpublished trials failed to show that the study drug was more effective than a placebo (a “dummy” pill); the fourth showed that the study drug was inferior to another drug already in use in the US. Among the 20 published trials, the five that the FDA judged not positive showed some evidence of publication bias. However, the association between trial outcome and publication status did not reach statistical significance (it might have happened by chance), and the mean effect size (a measure of drug effectiveness) derived from the published literature was only slightly higher than that derived from the FDA records. By contrast, within the FDA dataset, the mean effect size of the published trials was approximately double that of the unpublished trials.
What Do These Findings Mean?
The accuracy of these findings is limited by the small number of trials analyzed. Moreover, this study considers only the efficacy and not the safety of these drugs, it assumes that the FDA database is complete and unbiased, and its findings are not generalizable to other conditions that antipsychotics are used to treat. Nevertheless, these findings show that publication bias in the reporting of trials of second-generation antipsychotic drugs enhances the apparent efficacy of these drugs. Although the magnitude of the publication bias seen here is less than that seen in a similar study of antidepressant drugs, these findings show how selective reporting of clinical trial data undermines the integrity of the evidence base and can deprive clinicians of accurate data on which to base their prescribing decisions. Increased access to FDA reviews, suggest the researchers, is therefore essential to prevent publication bias continuing to blur distinctions between effective and ineffective drugs.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001189.
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals
Detailed information about the process by which drugs are approved is on the web site of the FDA Center for Drug Evaluation and Research; also, FDA Drug Approval Packages are available for many drugs; the FDA Transparency Initiative, which was launched in June 2009, is an agency-wide effort to improve the transparency of the FDA
FDA-approved product labeling on drugs marketed in the US can be found at the US National Library of Medicine's DailyMed web page
Wikipedia has a page on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
MedlinePlus provides links to sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
Patient experiences of psychosis, including the effects of medication, are provided by the charity HealthtalkOnline
doi:10.1371/journal.pmed.1001189
PMCID: PMC3308934  PMID: 22448149
14.  Prospective registration, bias risk and outcome-reporting bias in randomised clinical trials of traditional Chinese medicine: an empirical methodological study 
BMJ Open  2013;3(7):e002968.
Background
Clinical trials on Traditional Chinese Medicine (TCM) should be registered in a publicly accessible international trial register and report on all outcomes. We systematically assessed TCM trials in registries and compared them with their subsequent publications.
Objective
To describe the characteristics of registered TCM trials and to estimate their risk of bias and outcome-reporting bias.
Data sources and study selection
Fifteen trial registries were searched from their inception to July 2012 to identify randomised trials on TCM including Chinese herbs, acupuncture and/or moxibustion, cupping, tuina, qigong, etc.
Data extraction
We extracted data, including TCM specialty and treated diseases/conditions, from the registries and searched for subsequent publications in PubMed and Chinese databases. We compared the registry information for completed trials with any subsequent publications, focusing on study design, sample size, randomisation, and risk of bias, including reporting bias relative to the registered protocol.
Results
We identified 1096 registered randomised trials evaluating TCM, of which 505 (46.1%) were completed studies. The most frequent conditions were pain (13.3%), musculoskeletal (11.7%), nervous (8.7%), digestive (7.1%), circulatory (6.5%), respiratory (6.3%), mental and behavioural disorders (6.2%) and cancer (6.0%). The trial register data identified parallel, phase II/III randomised trials with sample size estimations and blinding, but limited information about randomisation (sequence generation and allocation concealment). Comparing the trial registration data of 115 completed trials (22.8%) with their 136 subsequent publications, inconsistencies were identified in one or more of the following: sample size (11%), outcome assessor blinding (37.5%), primary outcomes (29%) and safety reporting (28%).
Conclusions
Increasing numbers of clinical trials investigating a variety of TCM interventions have been registered in international trial registries. The study design of registered TCM trials has improved in estimating sample size, use of blinding and placebos. However, selective outcome reporting is widespread and similar to conventional medicine and therefore study conclusions should be interpreted with caution.
doi:10.1136/bmjopen-2013-002968
PMCID: PMC3717464  PMID: 23864210
15.  Impact of missing participant data for dichotomous outcomes on pooled effect estimates in systematic reviews: a protocol for a methodological study 
Systematic Reviews  2014;3(1):137.
Background
There is no consensus on how authors conducting meta-analysis should deal with trial participants with missing outcome data. The objectives of this study are to assess in Cochrane and non-Cochrane systematic reviews: (1) which categories of trial participants the systematic review authors consider as having missing participant data (MPD), (2) how trialists reported on participants with missing outcome data in trials, (3) whether systematic reviewer authors actually dealt with MPD in their meta-analyses of dichotomous outcomes consistently with their reported methods, and (4) the impact of different methods of dealing with MPD on pooled effect estimates in meta-analyses of dichotomous outcomes.
Methods/Design
We will conduct a methodological study of Cochrane and non-Cochrane systematic reviews. Eligible systematic reviews will include a group-level meta-analysis of a patient-important dichotomous efficacy outcome, with a statistically significant effect estimate. Teams of two reviewers will determine eligibility and subsequently extract information from each eligible systematic review in duplicate and independently, using standardized, pre-piloted forms. The teams will then use a similar process to extract information from the trials included in the meta-analyses of interest. We will assess first which categories of trial participants the systematic reviewers consider as having MPD. Second, we will assess how trialists reported on participants with missing outcome data in trials. Third, we will compare what systematic reviewers report having done, and what they actually did, in dealing with MPD in their meta-analysis. Fourth, we will conduct imputation studies to assess the effects of different methods of dealing with MPD on the pooled effect estimates of meta-analyses. We will specifically calculate for each method (1) the percentage of systematic reviews that lose statistical significance and (2) the mean change of effect estimates across systematic reviews.
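To make the planned imputation studies concrete, the toy sketch below shows how different assumptions about participants with missing outcome data can shift a single trial's risk ratio. The counts are hypothetical and the two assumptions shown are only examples of those a protocol of this kind might test.

```python
# Toy sketch of how assumptions about missing participant data (MPD) move
# a risk ratio in a single two-arm trial. All counts are hypothetical.
def risk_ratio(events_t, total_t, events_c, total_c):
    return (events_t / total_t) / (events_c / total_c)

events_t, n_t, miss_t = 40, 90, 10   # treatment arm: events, analysed, missing
events_c, n_c, miss_c = 25, 95, 5    # control arm

# Available-case analysis (missing participants ignored)
print(risk_ratio(events_t, n_t, events_c, n_c))
# Assume every missing participant had the event
print(risk_ratio(events_t + miss_t, n_t + miss_t, events_c + miss_c, n_c + miss_c))
# Assume no missing participant had the event
print(risk_ratio(events_t, n_t + miss_t, events_c, n_c + miss_c))
```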
Discussion
The impact of different methods of dealing with MPD on pooled effect estimates will help judge the associated risk of bias in systematic reviews. Our findings will inform recommendations regarding what assumptions for MPD should be used to test the robustness of meta-analytical results.
Electronic supplementary material
The online version of this article (doi:10.1186/2046-4053-3-137) contains supplementary material, which is available to authorized users.
doi:10.1186/2046-4053-3-137
PMCID: PMC4285551  PMID: 25423894
Missing participant data; Imputation; Risk of bias; Trials; Systematic reviews; Meta-analysis
16.  Spectacle correction versus no spectacles for prevention of strabismus in hyperopic children 
Background
Hyperopia (far-sightedness) in infancy requires accommodative effort to bring images into focus. Prolonged accommodative effort has been associated with an increased risk of strabismus (eye misalignment). Strabismus makes it difficult for the eyes to work together and may result in symptoms of asthenopia (eye strain) and intermittent diplopia (double vision), and makes near work tasks difficult to complete. Untreated strabismus may result in the development of amblyopia (lazy eye). The prescription of spectacles to correct hyperopic refractive error is believed to prevent the development of strabismus.
Objectives
To assess the effectiveness of prescription spectacles compared with no intervention for the prevention of strabismus in infants and children with hyperopia.
Search methods
We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 4), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to April 2014), EMBASE (January 1980 to April 2014), PubMed (1966 to April 2014), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 3 April 2014. We also searched the Science Citation Index database in September 2013.
Selection criteria
We included randomized controlled trials and quasi-randomized trials investigating assignment to spectacle intervention or no treatment for children with hyperopia. The definition of hyperopia remains subjective, but we required hyperopia greater than +2.00 diopters (D).
Data collection and analysis
Two review authors independently extracted data using the standard methodologic procedures expected by The Cochrane Collaboration. One review author entered data into Review Manager and a second review author verified the data entered. The two review authors resolved discrepancies at all stages of the review process.
Main results
We identified three randomized controlled trials (855 children enrolled) in this review. These trials were all conducted in the UK, with follow-up periods ranging from one to 3.5 years. We judged the included studies to be at high risk of bias, due to the use of quasi-random methods for assigning children to treatment, no masking of outcome assessors, and high proportions of drop-outs. None of the three trials accounted for missing data, and analyses were limited to available-case data (674 (79%) of the 855 children enrolled for the primary outcome). These factors impair our ability to assess the effectiveness of treatment.
Analyses incorporating the three trials identified in this review (674 children) suggested that the effect of spectacle correction initiated before one year of age on preventing strabismus in hyperopic children by three to four years of age is uncertain (risk ratio (RR) 0.71; 95% confidence interval (CI) 0.44 to 1.15). Based on a meta-analysis of three trials (664 children), the risk of having visual acuity worse than 20/30 at three years of age was also uncertain for children with spectacles compared with those without spectacle correction, irrespective of compliance (RR 0.87; 95% CI 0.60 to 1.26).
Emmetropization was reported in two trials: one trial suggested that spectacles impede emmetropization, and the second trial reported no difference in the rate of refractive error change.
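For readers unfamiliar with the quantities quoted above, the following sketch shows how a risk ratio and its 95% confidence interval are derived from 2x2 counts using the usual log-RR normal approximation; the counts are hypothetical, not taken from the included trials.

```python
# Sketch of a risk ratio and 95% CI from 2x2 counts (log-RR with a normal
# approximation). The counts below are invented for illustration.
import math

def rr_ci(events_t, n_t, events_c, n_c):
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

print(rr_ci(30, 340, 42, 334))  # e.g. strabismus events with vs without spectacles
```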
Authors’ conclusions
Although children who were allocated to the spectacle group were less likely to develop strabismus and less likely to have visual acuity worse than 20/30 than children allocated to no spectacles, these effects may have been chance findings or due to bias. Given the high risk of bias and poor reporting of the included trials, the true effect of spectacle correction for hyperopia on strabismus remains uncertain.
doi:10.1002/14651858.CD007738.pub2
PMCID: PMC4259577  PMID: 25133974
17.  Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research 
Background
The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review.
Methods
We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications.
Results
We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had a higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials.
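An odds ratio stratified by trial, as described above, can be computed with the Mantel-Haenszel method. The sketch below is generic, with invented per-trial 2x2 tables (significant versus nonsignificant outcomes, fully versus incompletely reported); it is not the authors' analysis code.

```python
# Generic Mantel-Haenszel odds ratio stratified by trial. The per-trial
# 2x2 tables below are invented for illustration.
def mantel_haenszel_or(strata):
    # Each stratum: (a, b, c, d) where
    #   a = significant outcomes fully reported, b = significant, incomplete,
    #   c = nonsignificant, fully reported,      d = nonsignificant, incomplete
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

print(mantel_haenszel_or([(8, 2, 5, 6), (10, 4, 6, 7), (6, 1, 9, 4)]))
```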
Interpretation
Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.
doi:10.1503/cmaj.1041086
PMCID: PMC517858  PMID: 15451835
18.  Association of trial registration with the results and conclusions of published trials of new oncology drugs 
Trials  2009;10:116.
Background
Registration of clinical trials has been introduced largely to reduce bias toward statistically significant results in the trial literature. Doubts remain about whether advance registration alone is an adequate measure to reduce selective publication, selective outcome reporting, and biased design. One of the first areas of medicine in which registration was widely adopted was oncology, although the bulk of registered oncology trials remain unpublished. The net influence of registration on the literature remains untested. This study compares the prevalence of favorable results and conclusions among published reports of registered and unregistered randomized controlled trials of new oncology drugs.
Methods
We conducted a cross-sectional study of published original research articles reporting clinical trials evaluating the efficacy of drugs newly approved for antimalignancy indications by the United States Food and Drug Administration (FDA) from 2000 through 2005. Drugs receiving first-time approval for indications in oncology were identified using the FDA web site and Thomson Centerwatch. Relevant trial reports were identified using PubMed and the Cochrane Library. Evidence of advance trial registration was obtained by a search of clinicaltrials.gov, WHO, ISRCTN, NCI-PDQ trial databases and corporate trial registries, as well as articles themselves. Data on blinding, results for primary outcomes, and author conclusions were extracted independently by two coders. Univariate and multivariate logistic regression identified associations between favorable results and conclusions and independent variables including advance registration, study design characteristics, and industry sponsorship.
Results
Of 137 original research reports from 115 distinct randomized trials assessing 25 newly approved drugs for treating cancer, the 54 publications describing data from trials registered prior to publication were as likely to report statistically significant efficacy results and reach conclusions favoring the test drug (for results, OR = 1.77; 95% CI = 0.87 to 3.61) as reports of trials not registered in advance. In multivariate analysis, reports of prior registered trials were again as likely to favor the test drug (OR = 1.29; 95% CI = 0.54 to 3.08); large sample sizes and surrogate outcome measures were statistically significant predictors of favorable efficacy results at p < 0.05. Subgroup analysis of the main reports from each trial (n = 115) similarly indicated that registered trials were as likely to report results favoring the test drug as trials not registered in advance (OR = 1.11; 95% CI = 0.44 to 2.80), and also that large trials and trials with nonstringent blinding were significantly more likely to report results favoring the test drug.
Conclusions
Trial registration alone, without a requirement for full reporting of research results, does not appear to reduce a bias toward results and conclusions favoring new drugs in the clinical trials literature. Our findings support the inclusion of full results reporting in trial registers, as well as protocols to allow assessment of whether results have been completely reported.
doi:10.1186/1745-6215-10-116
PMCID: PMC2811705  PMID: 20015404
19.  Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data 
PLoS Medicine  2013;10(10):e1001526.
Beate Wieseler and colleagues compare the completeness of reporting of patient-relevant clinical trial outcomes between clinical study reports and publicly available data.
Please see later in the article for the Editors' Summary
Background
Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments.
Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs.
Methods and Findings
We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as “completely reported” or “incompletely reported.” For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall.
We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes.
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs.
Conclusions
In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health care professionals will ensure that they get the best available treatment. In the past, clinicians used their own experience to make decisions about which treatments to offer their patients, but nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical trials, studies that investigate the benefits and harms of drugs and other medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results of clinical research are available for evaluation. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Both types of bias pose a substantial threat to informed medical decision-making.
Why Was This Study Done?
Recent initiatives, such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a precondition for publication in medical journals, aim to prevent these biases but are imperfect. Another way to facilitate the unbiased evaluation of clinical research might be to increase access to clinical study reports (CSRs)—detailed but generally unpublished accounts of clinical trials. Notably, information from CSRs was recently used to challenge conclusions based on published evidence about the efficacy and safety of the antiviral drug oseltamivir and the antidepressant reboxetine. In this study, the researchers compare the information available in CSRs and in publicly available sources (journal publications and registry reports) for the patient-relevant outcomes included in 16 health technology assessments (HTAs; analyses of the medical implications of the use of specific medical technologies) for drugs; the HTAs were prepared by the Institute for Quality and Efficiency in Health Care (IQWiG), Germany's main HTA agency.
What Did the Researchers Do and Find?
The researchers searched for published journal articles and registry reports for each of 101 trials for which the IQWiG had requested and received full CSRs from drug manufacturers during HTA preparation. They then assessed the completeness of information on the patient-relevant benefit and harm outcomes (for example symptom relief and adverse effects, respectively) included in each document type. Eighty-six of the included trials had at least one publicly available data source; the results of 15% of the trials were not available in either journals or registry reports. Overall, the CSRs provided complete information on 86% of the patient-related outcomes, whereas the combined publicly available sources provided complete information on only 39% of the outcomes. For individual outcomes, the CSRs provided complete information on 78%–100% of the benefit outcomes, with the exception of health-related quality of life (57%); combined publicly available sources provided complete information on 20%–53% of these outcomes. The CSRs also provided more information on patient-relevant harm outcomes than the publicly available sources.
What Do These Findings Mean?
These findings show that, for the clinical trials considered here, publicly available sources provide much less information on patient-relevant outcomes than CSRs. The generalizability of these findings may be limited, however, because the trials included in this study are not representative of all trials. Specifically, only CSRs that were voluntarily provided by drug companies were assessed, a limited number of therapeutic areas were covered by the trials, and the trials investigated only drugs. Nevertheless, these findings suggest that access to CSRs is important for the unbiased evaluation of clinical trials and for informed decision-making in health care. Notably, in June 2013, the European Medicines Agency released a draft policy calling for the proactive publication of complete clinical trial data (possibly including CSRs). In addition, the European Union and the European Commission are considering legal measures to improve the transparency of clinical trial data. Both these initiatives will probably only apply to drugs that are approved after January 2014, however, and not to drugs already in use. The researchers therefore call for CSRs to be made publicly available for both past and future trials, a recommendation also supported by the AllTrials initiative, which is campaigning for all clinical trials to be registered and fully reported.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001526.
Wikipedia has pages on evidence-based medicine, publication bias, and health technology assessment (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union, and guidance on the preparation of clinical study reports; its draft policy on the release of data from clinical trials is available
Information about IQWiG is available (in English and German); Informed Health Online is a website provided by IQWiG that provides objective, independent, and evidence-based information for patients (also in English and German)
doi:10.1371/journal.pmed.1001526
PMCID: PMC3793003  PMID: 24115912
20.  Multiplicity of data in trial reports and the reliability of meta-analyses: empirical study 
Objectives To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results.
Design Empirical study on a cohort of Cochrane systematic reviews.
Data sources All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved the trial reports contributing to the first SMD result in each review and downloaded the review protocols. We used these SMDs to identify a specific outcome (the “index outcome”) for each meta-analysis from its protocol.
Review methods Reviews were eligible if SMD results were based on two to ten randomised trials and if protocols described the outcome. We excluded reviews if they only presented results of subgroup analyses. Based on review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis.
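To see why multiplicity matters, consider a single hypothetical trial with several protocol-compatible choices of intervention group, time point, and scale. The toy sketch below simply enumerates every combination and reports the spread of possible SMDs; the study itself used Monte Carlo sampling over such combinations across all trials in each meta-analysis, and all numbers here are invented.

```python
# Toy illustration of multiplicity for one hypothetical trial: two eligible
# intervention groups, two eligible time points, and two scales for the
# index outcome give eight protocol-compatible SMDs. All numbers invented.
from itertools import product

def smd(mean_t, mean_c, sd_pooled):
    return (mean_t - mean_c) / sd_pooled

group_means = [12.0, 13.5]   # eligible intervention groups
time_shifts = [0.0, 1.0]     # eligible time points (shift in treatment mean)
pooled_sds = [4.0, 5.0]      # index outcome measured on two scales
control_mean = 10.0

all_smds = [smd(g + t, control_mean, sd)
            for g, t, sd in product(group_means, time_shifts, pooled_sds)]
print(min(all_smds), max(all_smds))  # spread of possible SMDs for this trial
```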
Results We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91).
Conclusions Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify time points, intervention groups, and scales of interest.
doi:10.1136/bmj.d4829
PMCID: PMC3171064  PMID: 21878462
21.  Presentation of continuous outcomes in randomised trials: an observational study 
Objective To characterise the percentage of available outcome data being presented in reports of randomised clinical trials with continuous outcome measures, thereby determining the potential for incomplete reporting bias.
Design Descriptive cross sectional study.
Data sources A random sample of 200 randomised trials from issues of 20 medical journals in a variety of specialties during 2007–09.
Main outcome measures For each paper’s best reported primary outcome, we calculated the fraction of data reported using explicit scoring rules. For example, a two-arm trial with 100 patients per arm that reported 2 sample sizes, 2 means, and 2 standard deviations reported 6/200 data elements (3%), but if that paper included a scatterplot with 200 points it would score 200/200 (100%). We also assessed compliance with 2001 CONSORT items about the reporting of results.
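The scoring rule in the worked example above reduces to a one-line calculation, shown here for completeness; the denominator is one data element per enrolled patient.

```python
# The scoring rule from the worked example above, as a small function.
def fraction_reported(reported_elements, total_patients):
    return reported_elements / total_patients

# Two-arm trial, 100 patients per arm, reporting 2 ns, 2 means, 2 SDs:
print(fraction_reported(6, 200))    # 0.03 -> 3%
# The same paper with a 200-point scatterplot:
print(fraction_reported(200, 200))  # 1.0 -> 100%
```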
Results The median percentage of data reported for the best reported continuous outcome was 9% (interquartile range 3–26%) but only 3.5% (3–7%) when we adjusted studies to 100 patients per arm to control for varying study size; 17% of articles showed 100% of the data. Tables were the predominant means of presenting the most data (59% of articles), but papers that used figures reported a higher proportion of data. There was substantial heterogeneity among journals with respect to our primary outcome and CONSORT compliance.
Limitations We studied continuous outcomes of randomised trials in higher impact journals. Results may not apply to categorical outcomes, other study designs, or other journals.
Conclusions Trialists present only a small fraction of available data. This paucity of data may increase the potential for incomplete reporting bias, a failure to present all relevant information about a study’s findings.
doi:10.1136/bmj.e8486
PMCID: PMC3668620  PMID: 23249670
22.  General health checks in adults for reducing morbidity and mortality from disease: Cochrane systematic review and meta-analysis 
Objectives To quantify the benefits and harms of general health checks in adults with an emphasis on patient-relevant outcomes such as morbidity and mortality rather than on surrogate outcomes.
Design Cochrane systematic review and meta-analysis of randomised trials. For mortality, we analysed the results with random effects meta-analysis, and for other outcomes we did a qualitative synthesis as meta-analysis was not feasible.
Data sources Medline, EMBASE, Healthstar, Cochrane Library, Cochrane Central Register of Controlled Trials, CINAHL, EPOC register, ClinicalTrials.gov, and WHO ICTRP, supplemented by manual searches of reference lists of included studies, citation tracking (Web of Knowledge), and contacts with trialists.
Selection criteria Randomised trials comparing health checks with no health checks in adult populations unselected for disease or risk factors. Health checks defined as screening general populations for more than one disease or risk factor in more than one organ system. We did not include geriatric trials.
Data extraction Two observers independently assessed eligibility, extracted data, and assessed the risk of bias. We contacted authors for additional outcomes or trial details when necessary.
Results We identified 16 trials, 14 of which had available outcome data (182 880 participants). Nine trials provided data on total mortality (11 940 deaths), and they gave a risk ratio of 0.99 (95% confidence interval 0.95 to 1.03). Eight trials provided data on cardiovascular mortality (4567 deaths), risk ratio 1.03 (0.91 to 1.17), and eight on cancer mortality (3663 deaths), risk ratio 1.01 (0.92 to 1.12). Subgroup and sensitivity analyses did not alter these findings. We did not find beneficial effects of general health checks on morbidity, hospitalisation, disability, worry, additional physician visits, or absence from work, but not all trials reported on these outcomes. One trial found that health checks led to a 20% increase in the total number of new diagnoses per participant over six years compared with the control group and an increased number of people with self reported chronic conditions, and one trial found an increased prevalence of hypertension and hypercholesterolaemia. Two out of four trials found an increased use of antihypertensives. Two out of four trials found small beneficial effects on self reported health, which could be due to bias.
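The pooled mortality risk ratio quoted above comes from a random effects meta-analysis. Below is a minimal DerSimonian-Laird sketch of that kind of calculation, with invented per-trial log risk ratios and standard errors; it is a generic illustration, not the review's code.

```python
# Minimal DerSimonian-Laird random-effects pooling of trial risk ratios.
# Per-trial log risk ratios and standard errors are invented.
import numpy as np

def dersimonian_laird(log_rr, se):
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    w = 1 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)               # Cochran's Q
    tau2 = max(0.0, (q - (len(log_rr) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-trial variance
    w_star = 1 / (se**2 + tau2)
    est = np.sum(w_star * log_rr) / np.sum(w_star)
    se_est = np.sqrt(1 / np.sum(w_star))
    return np.exp([est, est - 1.96 * se_est, est + 1.96 * se_est])

print(dersimonian_laird([-0.02, 0.01, -0.05, 0.03], [0.04, 0.05, 0.06, 0.05]))
```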
Conclusions General health checks did not reduce morbidity or mortality, neither overall nor for cardiovascular or cancer causes, although they increased the number of new diagnoses. Important harmful outcomes were often not studied or reported.
Systematic review registration Cochrane Library, doi:10.1002/14651858.CD009009.
doi:10.1136/bmj.e7191
PMCID: PMC3502745  PMID: 23169868
23.  Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis 
PLoS Medicine  2009;6(9):e1000144.
Joseph Ross and colleagues examine publication rates of clinical trials and find low rates of publication even following registration in ClinicalTrials.gov.
Background
ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.
Methods and Findings
We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006).
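The industry versus nonindustry/nongovernment comparison above (40% versus 56% published) is a simple two-proportion comparison; the sketch below reproduces it with a chi-square test on the counts given in the abstract (the test choice is an assumption, since the abstract does not name its method).

```python
# Two-proportion comparison of publication rates, using the counts given
# in the abstract; the chi-square test here is an illustrative choice.
from scipy.stats import chi2_contingency

table = [[144, 357 - 144],   # industry sponsored: published, unpublished
         [110, 198 - 110]]   # nonindustry/nongovernment: published, unpublished
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)  # consistent with the reported p < 0.001
```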
Conclusions
Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that whenever they are ill, health care professionals will make sure they get the best available treatment. But how do clinicians know which treatment is most appropriate? In the past, clinicians used their own experience to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of the results of clinical trials, studies that investigate the efficacy and safety of medical interventions in people. However, evidence-based medicine can only be effective if all the results from clinical trials are published promptly in medical journals. Unfortunately, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished or only appear in the public domain many years after the drug has been approved for clinical use by the US Food and Drug Administration (FDA) and other governmental bodies.
Why Was This Study Done?
The extent of this “selective” publication, which can impair evidence-based clinical practice, remains unclear but is thought to be substantial. In this study, the researchers investigate the problem of selective publication by systematically examining the extent of publication of the results of trials registered in ClinicalTrials.gov, a Web-based registry of US and international clinical trials. ClinicalTrials.gov was established in 2000 by the US National Library of Medicine in response to the 1997 FDA Modernization Act. This act required preregistration of all trials of new drugs to provide the public with information about trials in which they might be able to participate. Mandatory data elements for registration in ClinicalTrials.gov initially included the trial's title, the condition studied in the trial, the trial design, and the intervention studied. In September 2007, the FDA Amendments Act expanded the mandatory requirements for registration in ClinicalTrials.gov by making it necessary, for example, to report the trial start date and to report primary and secondary outcomes (the effect of the intervention on predefined clinical measurements) in the registry within 2 years of trial completion.
What Did the Researchers Do and Find?
The researchers identified 7,515 trials that were registered within ClinicalTrials.gov after December 31, 1999 (excluding phase I, safety trials), and whose record indicated trial completion by June 8, 2007. Most of these trials reported all the mandatory data elements that were required by ClinicalTrials.gov before the FDA Amendments Act but reporting of optional data elements was less complete. For example, only two-thirds of the trials reported their primary outcome. Next, the researchers randomly selected 10% of the trials and, after excluding trials whose completion date was after December 31, 2005 (to allow at least two years for publication), determined the publication status of this subsample by systematically searching MEDLINE (an online database of articles published in selected medical and scientific journals). Fewer than half of the trials in the subsample had been published, and the citation for only a third of these publications had been entered into ClinicalTrials.gov. Only 40% of industry-sponsored trials had been published compared to 56% of nonindustry/nongovernment-sponsored trials, a difference that is unlikely to have occurred by chance. Finally, 61% of trials with a completion date before 2004 had been published, but only 42% of trials completed during 2005 had been published.
What Do These Findings Mean?
These findings indicate that, over the period studied, critical trial information was not included in the ClinicalTrials.gov registry. The FDA Amendments Act should remedy some of these shortcomings but only if the accuracy and completeness of the information in ClinicalTrials.gov is carefully monitored. These findings also reveal that registration in ClinicalTrials.gov does not guarantee that trial results will appear in a timely manner in the scientific literature. However, they do not address the reasons for selective publication (which may be, in part, because it is harder to publish negative results than positive results), and they are potentially limited by the methods used to discover whether trial results had been published. Nevertheless, these findings suggest that the FDA, trial sponsors, and the scientific community all need to make a firm commitment to minimize the selective publication of trial results to ensure that patients and clinicians have access to the information they need to make fully informed treatment decisions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000144.
PLoS Medicine recently published two related articles on selected publication by Ida Sim and colleagues and by Lisa Bero and colleagues and an editorial discussing the FDA Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The US Food and Drug Administration provides further information about drug approval in the US for consumers and health care professionals
doi:10.1371/journal.pmed.1000144
PMCID: PMC2728480  PMID: 19901971
24.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract, and whether the magnitude of the relative risks presented (expressed consistently as ≥1.00) differed depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below-median values, respectively, than in contrasts of extreme quintiles (p < 0.001).
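The mechanics behind this paradox are easy to demonstrate: on the same data, more extreme quantile contrasts yield larger relative risks, so a weak underlying association can be made to look stronger by choosing a more extreme contrast. The following is an illustrative simulation only, with invented data and parameters, not the study's code.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)                     # continuous risk factor
    p = 1 / (1 + np.exp(-(-3 + 0.3 * x)))      # weak true association
    disease = rng.random(n) < p
    df = pd.DataFrame({"x": x, "disease": disease})

    def extreme_contrast_rr(df, q):
        """Relative risk comparing top vs bottom of q quantile groups."""
        groups = pd.qcut(df["x"], q, labels=False)
        return (df.loc[groups == q - 1, "disease"].mean()
                / df.loc[groups == 0, "disease"].mean())

    for q, label in [(2, "above/below median"), (3, "extreme tertiles"),
                     (4, "extreme quartiles"), (5, "extreme quintiles")]:
        print(f"{label}: RR = {extreme_contrast_rr(df, q):.2f}")

The printed relative risks grow steadily from the median split to the extreme-quintile contrast, which is consistent with the study's interpretation that extreme contrasts tended to be presented precisely where the underlying risks were smallest.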
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, investigators selectively present contrasts between more extreme groups when the underlying relative risks are inherently lower.
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure; rather, they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out which findings to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed "publication bias" or "selective reporting bias." Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature, which would tend to overestimate the strength, and misrepresent the direction, of the relationships being studied.
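The multiple-testing risk described above can be made concrete with a short simulation (a generic illustration, not taken from the study): when there is truly no difference, tests at the conventional 0.05 threshold still come out "significant" about 5% of the time.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_tests, false_positives = 1_000, 0

    for _ in range(n_tests):
        # Two groups drawn from the same distribution: no real difference.
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    print(f"{false_positives}/{n_tests} null comparisons were 'significant'")

Roughly 50 of the 1,000 null comparisons come out significant, and a study testing 20 independent hypotheses on truly null data has about a 1 - 0.95**20 ≈ 64% chance of producing at least one false-positive finding.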
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or "comparator," in order to understand the benefits or safety of the new intervention). These studies have shown that many trial findings are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research does not use randomized trial methods, either because that method is not suited to the question at hand or because it would be unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research generally uses observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, the researchers searched PubMed, a database of biomedical research studies, to identify epidemiological studies published between January 2004 and October 2005. They wanted to look specifically at studies reporting the effect of continuous risk factors on health or disease outcomes (a continuous risk factor, such as age or the concentration of glucose in the blood, is a numerical measure that can take any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers extracted from the abstracts and full text of these papers the relative risks that were reported, along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies reported one or more statistically significant risks in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to present larger relative risks.
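As a worked example of the relative risk defined in the parentheses above (all numbers invented for illustration):

    # Relative risk from a hypothetical 2x2 table.
    exposed_cases, exposed_total = 30, 1_000
    unexposed_cases, unexposed_total = 10, 1_000

    risk_exposed = exposed_cases / exposed_total        # 0.03
    risk_unexposed = unexposed_cases / unexposed_total  # 0.01
    print(f"RR = {risk_exposed / risk_unexposed:.1f}")  # RR = 3.0

A relative risk of 3.0 means the outcome is three times as likely in the exposed group as in the unexposed group; an RR of 1.0 means no association.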
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
25.  An empirical investigation of the potential impact of selective inclusion of results in systematic reviews of interventions: study protocol 
Systematic Reviews  2013;2:21.
Background
Systematic reviewers may encounter a multiplicity of outcome data in the reports of randomised controlled trials included in the review (for example, multiple measurement instruments measuring the same outcome, multiple time points, and final and change-from-baseline values). The primary objectives of this study are to investigate, in a cohort of systematic reviews of randomised controlled trials of interventions for rheumatoid arthritis, osteoarthritis, depressive disorders and anxiety disorders: (i) how often there is multiplicity of outcome data in trial reports; (ii) the association between selection of trial outcome data included in a meta-analysis and the magnitude and statistical significance of the trial result; and (iii) the impact of the selection of outcome data on meta-analytic results.
Methods/Design
Forty systematic reviews (20 Cochrane, 20 non-Cochrane) of RCTs published from January 2010 to January 2012 and indexed in the Cochrane Database of Systematic Reviews (CDSR) or PubMed will be randomly sampled. The first meta-analysis of a continuous outcome within each review will be included. From each review protocol (where available) and published review we will extract information regarding which types of outcome data were eligible for inclusion in the meta-analysis (for example, measurement instruments, time points, analyses). From the trial reports we will extract all outcome data that are compatible with the meta-analysis outcome as it is defined in the review and with the outcome data eligibility criteria and hierarchies in the review protocol. The association between selection of trial outcome data included in a meta-analysis and the magnitude and statistical significance of the trial result will be investigated. We will also investigate the impact of the selected trial result on the magnitude of the resulting meta-analytic effect estimates.
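The bias this protocol is designed to measure can be previewed with a toy simulation (entirely hypothetical, and no part of the protocol): when each trial offers several compatible outcome results and the largest one is selected for the meta-analysis, the pooled estimate drifts upward relative to unbiased selection. A simple mean stands in for the meta-analytic pooled estimate here.

    import numpy as np

    rng = np.random.default_rng(7)
    true_effect, n_trials, n_results = 0.2, 30, 5

    # Each trial reports n_results compatible effect estimates (e.g.,
    # several instruments or time points) scattered around the true effect.
    estimates = true_effect + rng.normal(scale=0.15,
                                         size=(n_trials, n_results))

    random_pick = estimates[np.arange(n_trials),
                            rng.integers(0, n_results, size=n_trials)]
    largest_pick = estimates.max(axis=1)

    print(f"pooled estimate, random selection:  {random_pick.mean():.2f}")
    print(f"pooled estimate, largest selected:  {largest_pick.mean():.2f}")

With these parameters the unbiased pick recovers roughly the true effect of 0.2, while always selecting the largest result inflates the pooled estimate well above the true value.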
Discussion
The strengths of this empirical study are that our objectives and methods are pre-specified and transparent. The results may inform methods guidance for systematic review conduct and reporting, particularly for dealing with multiplicity of randomised controlled trial outcome data.
doi:10.1186/2046-4053-2-21
PMCID: PMC3626625  PMID: 23575367
Systematic review; Randomised controlled trials; Reporting; Bias; Research methodology
