1.  Findings from the SASA! Study: a cluster randomized controlled trial to assess the impact of a community mobilization intervention to prevent violence against women and reduce HIV risk in Kampala, Uganda 
BMC Medicine  2014;12(1):122.
Background
Intimate partner violence (IPV) and HIV are important and interconnected public health concerns. While it is recognized that they share common social drivers, there is limited evidence surrounding the potential of community interventions to reduce violence and HIV risk at the community level. The SASA! study assessed the community-level impact of SASA!, a community mobilization intervention to prevent violence and reduce HIV-risk behaviors.
Methods
From 2007 to 2012 a pair-matched cluster randomized controlled trial (CRT) was conducted in eight communities (four intervention and four control) in Kampala, Uganda. Cross-sectional surveys of a random sample of community members, 18- to 49-years old, were undertaken at baseline (n = 1,583) and four years post intervention implementation (n = 2,532). Six violence and HIV-related primary outcomes were defined a priori. An adjusted cluster-level intention-to-treat analysis compared outcomes in intervention and control communities at follow-up.
Results
The intervention was associated with significantly lower social acceptance of IPV among women (adjusted risk ratio 0.54, 95% confidence interval (CI) 0.38 to 0.79) and lower acceptance among men (0.13, 95% CI 0.01 to 1.15); significantly greater acceptance that a woman can refuse sex among women (1.28, 95% CI 1.07 to 1.52) and men (1.31, 95% CI 1.00 to 1.70); 52% lower past year experience of physical IPV among women (0.48, 95% CI 0.16 to 1.39); and lower levels of past year experience of sexual IPV (0.76, 95% CI 0.33 to 1.72). Women experiencing violence in intervention communities were more likely to receive supportive community responses. Reported past year sexual concurrency by men was significantly lower in intervention compared to control communities (0.57, 95% CI 0.36 to 0.91).
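The adjusted risk ratios quoted above (e.g., 0.54 for women's acceptance of IPV) come from an adjusted cluster-level analysis; as a rough sketch of where such a figure comes from, an unadjusted risk ratio and its 95% CI can be computed from two-arm counts. The counts below are purely illustrative, not the study's data:

```python
import math

def risk_ratio(a, n1, b, n2):
    """Unadjusted risk ratio with a 95% CI (log-normal approximation).
    a/n1 = events/total in the intervention arm, b/n2 in the control arm."""
    rr = (a / n1) / (b / n2)
    # standard error of log(RR)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts, not SASA! data:
rr, lo, hi = risk_ratio(54, 200, 100, 200)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The study's actual estimates additionally account for the pair-matched cluster design and covariate adjustment, which a simple two-by-two calculation cannot reproduce.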
Conclusions
This is the first CRT in sub-Saharan Africa to assess the community impact of a mobilization program on the social acceptability of IPV, the past year prevalence of IPV and levels of sexual concurrency. SASA! achieved important community impacts, and is now being delivered in control communities and replicated in 15 countries.
Trial registration
ClinicalTrials.gov NCT00790959.
Study protocol available at http://www.trialsjournal.com/content/13/1/96
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-014-0122-5) contains supplementary material, which is available to authorized users.
doi:10.1186/s12916-014-0122-5
PMCID: PMC4243194  PMID: 25248996
Violence prevention; Impact evaluation; Community mobilization; Intimate partner violence; Uganda; HIV; Gender based violence; East Africa
2.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
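The completeness comparisons above (e.g., adverse events completely reported for 73% versus 45% of trials, p<0.001) can be illustrated with a generic two-proportion z-test. This is only a sketch under an independence assumption; because the same 202 trials were assessed in both sources, the authors' paired analysis (e.g., a McNemar-style test) would be more appropriate:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two independent proportions.
    Illustrative only: paired data would need a McNemar-style test."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))    # pooled standard error
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal p-value
    return z, pval

# Roughly 73% vs 45% complete reporting among the 202 compared trials:
z, p = two_proportion_z(147, 202, 91, 202)
print(f"z = {z:.2f}, p = {p:.2e}")
```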
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
3.  Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database 
PLoS Medicine  2012;9(3):e1001189.
A comparison of data held by the U.S. Food and Drug Administration (FDA) against data from journal reports of clinical trials enables estimation of the extent of publication bias for antipsychotics.
Background
Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy.
Methods and Findings
FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant.
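The significance of the published-versus-unpublished effect-size gap (0.47 versus 0.23) can be checked roughly by backing standard errors out of the reported 95% CIs and applying a normal-approximation z-test. This is an illustrative recomputation under normality and independence assumptions, not necessarily the authors' method:

```python
import math

def se_from_ci(lo, hi):
    """Back out a standard error from a 95% CI (normal approximation)."""
    return (hi - lo) / (2 * 1.96)

def compare_effects(d1, ci1, d2, ci2):
    """z-test for the difference of two independent effect sizes."""
    se1, se2 = se_from_ci(*ci1), se_from_ci(*ci2)
    z = (d1 - d2) / math.sqrt(se1**2 + se2**2)
    pval = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, pval

# Effect sizes and CIs as reported in the abstract above:
z, p = compare_effects(0.47, (0.40, 0.54), 0.23, (0.07, 0.39))
print(f"z = {z:.2f}, p = {p:.3f}")
```

Consistent with the abstract, this rough check finds the difference significant at conventional levels.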
Conclusions
The magnitude of publication bias found for antipsychotics was less than that found previously for antidepressants, possibly because antipsychotics demonstrate superiority to placebo more consistently. Without increased access to regulatory agency data, publication bias will continue to blur distinctions between effective and ineffective drugs.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health-care professionals will ensure that they get the best available treatment. But how do clinicians know which treatment is likely to be most effective? In the past, clinicians used their own experience to make such decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the efficacy and safety of medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results from clinical trials are published in an unbiased manner. Unfortunately, “publication bias” is common. For example, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished. Moreover, published trials can be subject to outcome reporting bias—the publication may only include those trial outcomes that support the use of the new treatment rather than presenting all the available data.
Why Was This Study Done?
If only strongly positive results are published and negative results and side-effects remain unpublished, a drug will seem safer and more effective than it is in reality, which could affect clinical decision-making and patient outcomes. But how big a problem is publication bias? Here, researchers use US Food and Drug Administration (FDA) reviews as a benchmark to quantify the extent to which publication bias may be altering the apparent efficacy of second-generation antipsychotics (drugs used to treat schizophrenia and other mental illnesses that are characterized by a loss of contact with reality). In the US, all new drugs have to be approved by the FDA before they can be marketed. During this approval process, the FDA collects and keeps complete information about premarketing trials, including descriptions of their design and prespecified outcome measures and all the data collected during the trials. Thus, a comparison of the results included in the FDA reviews for a group of trials and the results that appear in the literature for the same trials can provide direct evidence about publication bias.
What Did the Researchers Do and Find?
The researchers identified 24 FDA-registered premarketing trials that investigated the use of eight second-generation antipsychotics for the treatment of schizophrenia or schizoaffective disorder. They searched the published literature for reports of these trials, and, by comparing the results of these trials according to the FDA with the results in the published articles, they examined the relationship between the study outcome (did the FDA consider it positive or negative?) and publication and looked for outcome reporting bias. Four of the 24 FDA-registered trials were unpublished. Three of these unpublished trials failed to show that the study drug was more effective than a placebo (a “dummy” pill); the fourth showed that the study drug was inferior to another drug already in use in the US. Among the 20 published trials, the five that the FDA judged not positive showed some evidence of publication bias. However, the association between trial outcome and publication status did not reach statistical significance (it might have happened by chance), and the mean effect size (a measure of drug effectiveness) derived from the published literature was only slightly higher than that derived from the FDA records. By contrast, within the FDA dataset, the mean effect size of the published trials was approximately double that of the unpublished trials.
What Do These Findings Mean?
The accuracy of these findings is limited by the small number of trials analyzed. Moreover, this study considers only the efficacy and not the safety of these drugs, it assumes that the FDA database is complete and unbiased, and its findings are not generalizable to other conditions that antipsychotics are used to treat. Nevertheless, these findings show that publication bias in the reporting of trials of second-generation antipsychotic drugs enhances the apparent efficacy of these drugs. Although the magnitude of the publication bias seen here is less than that seen in a similar study of antidepressant drugs, these findings show how selective reporting of clinical trial data undermines the integrity of the evidence base and can deprive clinicians of accurate data on which to base their prescribing decisions. Increased access to FDA reviews, suggest the researchers, is therefore essential to prevent publication bias continuing to blur distinctions between effective and ineffective drugs.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001189.
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals
Detailed information about the process by which drugs are approved is on the web site of the FDA Center for Drug Evaluation and Research; also, FDA Drug Approval Packages are available for many drugs; the FDA Transparency Initiative, which was launched in June 2009, is an agency-wide effort to improve the transparency of the FDA
FDA-approved product labeling on drugs marketed in the US can be found at the US National Library of Medicine's DailyMed web page
Wikipedia has a page on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
MedlinePlus provides links to sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
Patient experiences of psychosis, including the effects of medication, are provided by the charity HealthtalkOnline
doi:10.1371/journal.pmed.1001189
PMCID: PMC3308934  PMID: 22448149
4.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
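The reported p = 0.0039 for all nine conclusion changes favoring the test drug is exactly what a two-sided sign test gives under a 50:50 null:

```python
# If each conclusion change were equally likely to favor either direction,
# the chance that all nine favor the test drug is (1/2)^9; doubling for a
# two-sided test reproduces the reported p = 0.0039.
p_one_sided = 0.5 ** 9
p_two_sided = 2 * p_one_sided
print(f"{p_two_sided:.4f}")
```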
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
5.  Update on the Surgical Trial in Lobar Intracerebral Haemorrhage (STICH II): statistical analysis plan 
Trials  2012;13:222.
Background
Previous studies had suggested that the outcome for patients with spontaneous lobar intracerebral haemorrhage (ICH) and no intraventricular haemorrhage (IVH) might be improved with early evacuation of the haematoma. The Surgical Trial in Lobar Intracerebral Haemorrhage (STICH II) set out to establish whether a policy of earlier surgical evacuation of the haematoma in selected patients with spontaneous lobar ICH would improve outcome compared to a policy of initial conservative treatment. It is an international, multi-centre, prospective randomised parallel group trial of early surgery in patients with spontaneous lobar ICH. Outcome is measured at six months via a postal questionnaire.
Results
Recruitment to the study began on 27 November 2006 and closed on 15 August 2012, by which time 601 patients had been recruited. The protocol was published in Trials (http://www.trialsjournal.com/content/12/1/124/). This update presents the analysis plan for the study without reference to the unblinded data. The trial data will not be unblinded until after follow-up is completed in early 2013. The main trial results will be presented in spring 2013, with the aim of publishing in a peer-reviewed journal at the same time.
Conclusion
The data from the trial will provide evidence on the benefits and risks of early surgery in patients with lobar ICH.
Trial registration
ISRCTN: ISRCTN22153967
doi:10.1186/1745-6215-13-222
PMCID: PMC3543336  PMID: 23171588
6.  Homeopathy for Depression: A Randomized, Partially Double-Blind, Placebo-Controlled, Four-Armed Study (DEP-HOM) 
PLoS ONE  2013;8(9):e74537.
Background
The specific clinical benefit of the homeopathic consultation and of homeopathic remedies in patients with depression has not yet been investigated.
Aims
To investigate the 1) specific effect of individualized homeopathic Q-potencies compared to placebo and 2) the effect of an extensive homeopathic case taking (case history I) compared to a shorter, rather conventional one (case history II) in the treatment of acute major depression (moderate episode) after six weeks.
Methods
A randomized, partially double-blind, placebo-controlled, four-armed trial using a 2×2 factorial design with a six-week study duration per patient was performed.
Results
A total of 44 of the 228 planned patients were randomized (2:1:2:1 randomization: 16 homeopathic Q-potencies/case history I, 7 placebo/case history I, 14 homeopathic Q-potencies/case history II, 7 placebo/case history II). Because of recruitment problems, the study was terminated prior to full recruitment and was underpowered for the preplanned confirmatory hypothesis testing. Exploratory data analyses showed heterogeneous and inconclusive results with large variance in the sample. The mean difference on the Hamilton-D after six weeks was 2.0 (95% CI −1.2 to 5.2) for Q-potencies vs. placebo and −3.1 (95% CI −5.9 to −0.2) for case history I vs. case history II. Overall, no consistent or clinically relevant differences were observed across outcomes, either between homeopathic Q-potencies and placebo or between homeopathic and conventional case taking. The frequency of adverse events was comparable across all groups.
Conclusions
Although our results are inconclusive, given that recruitment into this trial was very difficult and we had to terminate early, we cannot recommend undertaking a further trial addressing this question in a similar setting.
Prof. Dr. Claudia Witt had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Trial registration
clinicaltrials.gov identifier NCT01178255.
Protocol publication: http://www.trialsjournal.com/content/12/1/43
doi:10.1371/journal.pone.0074537
PMCID: PMC3781106  PMID: 24086352
7.  Mothers After Gestational Diabetes in Australia Diabetes Prevention Program (MAGDA-DPP) post-natal intervention: an update to the study protocol for a randomized controlled trial 
Trials  2014;15:259.
Background
The Mothers After Gestational Diabetes in Australia Diabetes Prevention Program (MAGDA-DPP) is a randomized controlled trial (RCT) that aims to assess the effectiveness of a structured diabetes prevention intervention for women who had gestational diabetes.
Methods/Design
The original protocol was published in Trials (http://www.trialsjournal.com/content/14/1/339). This update reports on an additional exclusion criterion and a change in the first eligibility screening, to provide greater clarity. The new exclusion criterion “surgical or medical intervention to treat obesity” has been added to the original protocol. The risk of developing diabetes will be affected by any such medical or surgical intervention, as its impact on obesity will alter the outcomes being assessed by MAGDA-DPP. The screening procedures have also been updated to reflect the current recruitment operation. The first eligibility screening now takes place either during or after pregnancy, depending on the recruitment strategy.
Trial registration
Australian New Zealand Clinical Trials Registry ANZCTRN 12610000338066.
doi:10.1186/1745-6215-15-259
PMCID: PMC4083860  PMID: 24981503
Gestational diabetes; Lifestyle intervention; Post-natal; Type 2 diabetes prevention
8.  Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis 
PLoS Medicine  2009;6(9):e1000144.
Joseph Ross and colleagues examine publication rates of clinical trials and find low rates of publication even following registration in Clinicaltrials.gov.
Background
ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.
Methods and Findings
We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006).
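The sponsorship comparison reported above can be checked from the published counts alone. As a rough illustration (the authors' actual statistical method is not specified in this abstract and may differ), a two-sided two-proportion z-test on 144/357 industry-sponsored versus 110/198 nonindustry/nongovernment trials reproduces p < 0.001:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal tail
    return z, math.erfc(abs(z) / math.sqrt(2))

# Industry-sponsored: 144/357 published; nonindustry/nongovernment: 110/198
z, p = two_prop_ztest(144, 357, 110, 198)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.001, consistent with the reported comparison
```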
Conclusions
Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that whenever they are ill, health care professionals will make sure they get the best available treatment. But how do clinicians know which treatment is most appropriate? In the past, clinicians used their own experience to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of the results of clinical trials, studies that investigate the efficacy and safety of medical interventions in people. However, evidence-based medicine can only be effective if all the results from clinical trials are published promptly in medical journals. Unfortunately, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished or only appear in the public domain many years after the drug has been approved for clinical use by the US Food and Drug Administration (FDA) and other governmental bodies.
Why Was This Study Done?
The extent of this “selective” publication, which can impair evidence-based clinical practice, remains unclear but is thought to be substantial. In this study, the researchers investigate the problem of selective publication by systematically examining the extent of publication of the results of trials registered in ClinicalTrials.gov, a Web-based registry of US and international clinical trials. ClinicalTrials.gov was established in 2000 by the US National Library of Medicine in response to the 1997 FDA Modernization Act. This act required preregistration of all trials of new drugs to provide the public with information about trials in which they might be able to participate. Mandatory data elements for registration in ClinicalTrials.gov initially included the trial's title, the condition studied in the trial, the trial design, and the intervention studied. In September 2007, the FDA Amendments Act expanded the mandatory requirements for registration in ClinicalTrials.gov by making it necessary, for example, to report the trial start date and to report primary and secondary outcomes (the effect of the intervention on predefined clinical measurements) in the registry within 2 years of trial completion.
What Did the Researchers Do and Find?
The researchers identified 7,515 trials that were registered within ClinicalTrials.gov after December 31, 1999 (excluding phase I, safety trials), and whose record indicated trial completion by June 8, 2007. Most of these trials reported all the mandatory data elements that were required by ClinicalTrials.gov before the FDA Amendments Act but reporting of optional data elements was less complete. For example, only two-thirds of the trials reported their primary outcome. Next, the researchers randomly selected 10% of the trials and, after excluding trials whose completion date was after December 31, 2005 (to allow at least two years for publication), determined the publication status of this subsample by systematically searching MEDLINE (an online database of articles published in selected medical and scientific journals). Fewer than half of the trials in the subsample had been published, and the citation for only a third of these publications had been entered into ClinicalTrials.gov. Only 40% of industry-sponsored trials had been published compared to 56% of nonindustry/nongovernment-sponsored trials, a difference that is unlikely to have occurred by chance. Finally, 61% of trials with a completion date before 2004 had been published, but only 42% of trials completed during 2005 had been published.
What Do These Findings Mean?
These findings indicate that, over the period studied, critical trial information was not included in the ClinicalTrials.gov registry. The FDA Amendments Act should remedy some of these shortcomings but only if the accuracy and completeness of the information in ClinicalTrials.gov is carefully monitored. These findings also reveal that registration in ClinicalTrials.gov does not guarantee that trial results will appear in a timely manner in the scientific literature. However, they do not address the reasons for selective publication (which may be, in part, because it is harder to publish negative results than positive results), and they are potentially limited by the methods used to discover whether trial results had been published. Nevertheless, these findings suggest that the FDA, trial sponsors, and the scientific community all need to make a firm commitment to minimize the selective publication of trial results to ensure that patients and clinicians have access to the information they need to make fully informed treatment decisions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000144.
PLoS Medicine recently published two related articles on selected publication by Ida Sim and colleagues and by Lisa Bero and colleagues and an editorial discussing the FDA Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The US Food and Drug Administration provides further information about drug approval in the US for consumers and health care professionals
doi:10.1371/journal.pmed.1000144
PMCID: PMC2728480  PMID: 19901971
9.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
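The impact-factor recalculation described above is straightforward arithmetic: remove the omitted trials' citations from the numerator and the trials themselves from the denominator of the citations-per-citable-item ratio. A minimal sketch with made-up numbers (the study's actual citation counts came from Web of Science and are not reproduced here):

```python
def approx_impact_factor(citations, citable_items):
    """Approximate impact factor: citations in year Y to articles published
    in years Y-1 and Y-2, divided by the number of citable items from those years."""
    return citations / citable_items

# Hypothetical journal-level numbers, illustrative only (not the study's data):
total_citations, total_items = 60_000, 1_500       # all citable articles
industry_rct_citations, industry_rcts = 9_500, 80  # industry-supported RCTs

if_with = approx_impact_factor(total_citations, total_items)
if_without = approx_impact_factor(total_citations - industry_rct_citations,
                                  total_items - industry_rcts)
drop_pct = 100 * (if_with - if_without) / if_with
print(f"IF with trials: {if_with:.1f}, without: {if_without:.1f}, drop: {drop_pct:.0f}%")
```

Because heavily cited trials contribute disproportionately to the numerator, omitting them lowers the ratio even though the denominator shrinks too, which is the effect the researchers measured for each journal.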
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
10.  Evaluating adherence to the International Committee of Medical Journal Editors’ policy of mandatory, timely clinical trial registration 
Objective
To determine whether two specific criteria in Uniform Requirements for Manuscripts (URM) created by the International Committee of Medical Journal Editors (ICMJE)—namely, including the trial ID registration within manuscripts and timely registration of trials, are being followed.
Materials and methods
Observational study using computerized analysis of publicly available Medline article data and clinical trial registry data. We analyzed a purposive set of five ICMJE founding journals, looking at all trial articles published in those journals during 2010–2011, and data from the ClinicalTrials.gov (CTG) trial registry. We measured adherence to trial ID inclusion policy as the percentage of trial journal articles that contained a valid trial ID within the article (journal-based sample). Adherence to timely registration was measured as the percentage of trials that were registered before enrolling the first participant, within a 60-day grace period. We also examined timely registration rates by year of all phase II and higher interventional trials in CTG (registry-based sample).
Results
To determine trial ID inclusion, we analyzed 698 clinical trial articles in five journals. A total of 95.8% (661/690) of trial journal articles included the trial ID. In 88.3% the trial-article link is stored within a structured Medline field. To evaluate timely registration, we analyzed trials referenced by 451 articles from the selected five journals. A total of 60% (272/451) of articles were registered in a timely manner with an improving trend for trials initiated in later years (eg, 89% of trials that began in 2008 were registered in a timely manner). In the registry-based sample, the timely registration rates ranged from 56% for trials registered in 2006 to 72% for trials registered in 2011.
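The timely-registration criterion used above reduces to a single date comparison: registration no later than 60 days after the first participant was enrolled. A minimal sketch, with hypothetical dates:

```python
from datetime import date, timedelta

GRACE = timedelta(days=60)  # the paper's 60-day grace period

def registered_timely(registration: date, first_enrollment: date) -> bool:
    """Timely if registration occurred no later than 60 days after
    the first participant was enrolled."""
    return registration <= first_enrollment + GRACE

# Hypothetical trials (dates are illustrative, not from the study):
print(registered_timely(date(2008, 1, 10), date(2008, 1, 1)))  # True: 9 days after start
print(registered_timely(date(2008, 6, 1), date(2008, 1, 1)))   # False: about 5 months after
```

Applying this predicate over a registry extract and grouping by year of trial start yields the per-year adherence rates the authors report.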
Discussion
Adherence to URM requirements for registration and trial ID inclusion increases the utility of PubMed and links it in an important way to clinical trial repositories. This new integrated knowledge source can facilitate research prioritization, clinical guidelines creation, and precision medicine.
Conclusions
The five selected journals adhere well to the policy of mandatory trial registration and also outperform the registry in adherence to timely registration. ICMJE's URM policy represents a unique international mandate that may be providing a powerful incentive for sponsors and investigators to document clinical trials and trial result publications and thus fulfill important obligations to trial participants and society.
doi:10.1136/amiajnl-2012-001501
PMCID: PMC3715364  PMID: 23396544
clinical trials as topic/legislation; registries; cross-sectional analysis; Databases; publication policy; trial registration
11.  Update on the collaborative interventions for circulation and depression (COINCIDE) trial: changes to planned methodology of a cluster randomized controlled trial of collaborative care for depression in people with diabetes and/or coronary heart disease 
Trials  2013;14:136.
Background
The COINCIDE trial aims to evaluate the effectiveness and cost-effectiveness of a collaborative care intervention for depression in people with diabetes and/or coronary heart disease attending English general practices.
Design
This update details changes to the cluster and patient recruitment strategy for the COINCIDE study. The original protocol was published in Trials (http://www.trialsjournal.com/content/pdf/1745-6215-13-139.pdf). Modifications were made to the recruitment targets in response to lower-than-expected patient recruitment at the first ten general practices recruited into the study. In order to boost patient numbers and retain statistical power, the number of general practices recruited was increased from 30 to 36. The follow-up period was shortened from 6 months to 4 months to ensure that patients recruited to the trial could be followed up by the end of the study.
Results
Patient recruitment began on 01/05/2012 and is planned to be completed by 30/04/2013. Recruitment of general practices was completed on 31/10/2012, by which time the target of 36 practices had been recruited. The main trial results will be published in a peer-reviewed journal.
Conclusion
The data from the trial will provide evidence on the effectiveness and cost-effectiveness of collaborative care for depression in people with diabetes and/or coronary heart disease.
Trial registration
Trial registration number: ISRCTN80309252
doi:10.1186/1745-6215-14-136
PMCID: PMC3660180  PMID: 23663556
Depression; Diabetes; Coronary heart disease; Primary care; Collaborative care
12.  United States Private-Sector Physicians and Pharmaceutical Contract Research: A Qualitative Study 
PLoS Medicine  2012;9(7):e1001271.
Jill Fisher and Corey Kalbaugh describe their findings from a qualitative research study evaluating the motivations of private-sector physicians conducting contract research for the pharmaceutical industry.
Background
There have been dramatic increases over the past 20 years in the number of nonacademic, private-sector physicians who serve as principal investigators on US clinical trials sponsored by the pharmaceutical industry. However, there has been little research on the implications of these investigators' role in clinical investigation. Our objective was to study private-sector clinics involved in US pharmaceutical clinical trials to understand the contract research arrangements supporting drug development, and specifically how private-sector physicians engaged in contract research describe their professional identities.
Methods and Findings
We conducted a qualitative study in 2003–2004 combining observation at 25 private-sector research organizations in the southwestern United States and 63 semi-structured interviews with physicians, research staff, and research participants at those clinics. We used grounded theory to analyze and interpret our data. The 11 private-sector physicians who participated in our study reported becoming principal investigators on industry clinical trials primarily because contract research provides an additional revenue stream. The physicians reported that they saw themselves as trial practitioners and as businesspeople rather than as scientists or researchers.
Conclusions
Our findings suggest that in addition to having financial motivation to participate in contract research, these US private-sector physicians have a professional identity aligned with an industry-based approach to research ethics. The generalizability of these findings and whether they have changed in the intervening years should be addressed in future studies.
Please see later in the article for the Editors' Summary.
Editors' Summary
Background
Before a new drug can be used routinely by physicians, it must be investigated in clinical trials—studies that test the drug's safety and effectiveness in people. In the past, clinical trials were usually undertaken in academic medical centers (institutes where physicians provide clinical care, do research, and teach), but increasingly, clinical trials are being conducted in the private sector as part of a growing contract research system. In the US, for example, most clinical trials completed in the 1980s took place in academic medical centers, but nowadays, more than 70% of trials are conducted by nonacademic (community) physicians working under contract to pharmaceutical companies. The number of private-sector nonacademic physicians serving as principal investigators (PIs) for US clinical trials (the PI takes direct responsibility for completion of the trial) increased from 4,000 in 1990 to 20,250 in 2010, and research contracts for clinical trials are now worth more than US$11 billion annually.
Why Was This Study Done?
To date, there has been little research on the implications of this change in the conduct of clinical trials. Academic PIs are often involved in both laboratory and clinical research and are therefore likely to identify closely with the science of trials. By contrast, nonacademic PIs may see clinical trials more as a business opportunity—pharmaceutical contract research is profitable to US physicians because they get paid for every step of the trial process. As a result, pharmaceutical companies may now have more control over clinical trial data and more opportunities to suppress negative data through selective publication of study results than previously. In this qualitative study, the researchers explore the outsourcing of clinical trials to private-sector research clinics through observations of, and in-depth interviews with, physicians and other research staff involved in the US clinical trials industry. A qualitative study collects non-quantitative data such as how physicians feel about doing contract research and about their responsibilities to their patients.
What Did the Researchers Do and Find?
Between October 2003 and September 2004, the researchers observed the interactions between PIs, trial coordinators (individuals who undertake many of the trial activities such as blood collection), and trial participants at 25 US research organizations in the southwestern US and interviewed 63 informants (including 12 PIs) about the trials they were involved in and their reasons for becoming involved. The researchers found that private-sector physicians became PIs on industry-sponsored clinical trials primarily because contract research was financially lucrative. The physicians perceived their roles in terms of business rather than science and claimed that they offered something to the pharmaceutical industry that academics do not—the ability to carry out a diverse range of trials quickly and effectively, regardless of their medical specialty. Finally, the physicians saw their primary ethical responsibility as providing accurate data to the companies that hired them and did not explicitly refer to their ethical responsibility to trial participants. One possible reason for this shift in ethical concerns is the belief among private-sector physicians that pharmaceutical companies must be making scientifically and ethically sound decisions when designing trials because of the amount of money they invest in them.
What Do These Findings Mean?
These findings suggest that private-sector physicians participate as PIs in pharmaceutical clinical trials primarily for financial reasons and see themselves as trial practitioners and businesspeople rather than as scientists. The accuracy of these findings is likely to be limited by the small number of PIs interviewed and by the time that has elapsed since the researchers collected their qualitative data. Moreover, these findings may not be generalizable to other regions of the US or to other countries. Nevertheless, they have potentially troubling implications for drug development. By hiring private-sector physicians who see themselves as involved more with the business than the science of contract research, pharmaceutical companies may be able to exert more control over the conduct of clinical trials and the publication of trial results than previously. Compared to the traditional investigator-initiated system of clinical research, this new system of contract research means that clinical trials now lack the independence that is at the heart of best science practices, a development that casts doubt on the robustness of the knowledge being produced about the safety and effectiveness of new drugs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001271.
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The US National Institutes of Health provides information about clinical trials, including personal stories about clinical trials from patients and researchers
The UK National Health Service Choices website has information for patients about clinical trials and medical research, including personal stories about participating in clinical trials
The UK Medical Research Council Clinical Trials Unit also provides information for patients about clinical trials and links to information on clinical trials provided by other organizations
MedlinePlus has links to further resources on clinical trials (in English and Spanish)
doi:10.1371/journal.pmed.1001271
PMCID: PMC3404112  PMID: 22911055
13.  Inadequate Dissemination of Phase I Trials: A Retrospective Cohort Study 
PLoS Medicine  2009;6(2):e1000034.
Background
Drug development is ideally a logical sequence in which information from small early studies (Phase I) is subsequently used to inform and plan larger, more definitive studies (Phases II–IV). Phase I trials are unique because they generally provide the first evaluation of new drugs in humans. The conduct and dissemination of Phase I trials have not previously been empirically evaluated. Our objective was to describe the initiation, completion, and publication of Phase I trials in comparison with Phase II–IV trials.
Methods and Findings
We reviewed a cohort of all protocols approved by a sample of ethics committees in France from January 1, 1994 to December 31, 1994. The comparison of 140 Phase I trials with 304 Phase II–IV trials showed that Phase I studies were more likely to be initiated (133/140 [95%] versus 269/304 [88%]), more likely to be completed (127/133 [95%] versus 218/269 [81%]), and more likely to produce confirmatory results (71/83 [86%] versus 125/175 [71%]) than Phase II–IV trials. Publication was less frequent for Phase I studies (21/127 [17%] versus 93/218 [43%]), even when accounting only for studies providing confirmatory results (18/71 [25%] versus 79/125 [63%]).
Conclusions
The initiation, completion, and publication of Phase I trials differ from those of other studies. Moreover, the results of these trials should be published in order to ensure the integrity of the overall body of scientific knowledge, and ultimately the safety of future trial participants and patients.
François Chapuis and colleagues examine a cohort of clinical trial protocols approved by French ethics committees, and show that Phase I trials are less frequently published than other types of trials.
Editors' Summary
Background.
Before a new drug is used to treat patients, its benefits and harms have to be carefully investigated in clinical trials—studies that investigate the drug's effects on people. Because giving any new drug to people is potentially dangerous, drugs are first tested in a short “Phase I” trial in which a few people (usually healthy volunteers) are given doses of the drug likely to have a therapeutic effect. A Phase I trial evaluates the safety and tolerability of the drug and investigates how the human body handles the drug. It may also provide some information about the drug's efficacy that can guide the design of later trials. The next stage of clinical drug development is a Phase II trial in which the therapeutic efficacy of the drug is investigated by giving more patients and volunteers different doses of the drug. Finally, several large Phase III trials are undertaken to confirm the evidence collected in the Phase II trial about the drug's efficacy and safety. If the Phase III trials are successful, the drug will receive official marketing approval. In some cases, this approval requires Phase IV (postapproval) trials to be done to optimize the drug's use in clinical practice.
Why Was This Study Done?
In an ideal world, the results of all clinical trials on new drugs would be published in medical journals so that doctors and patients could make fully informed decisions about the treatments available to them. Unfortunately, this is not an ideal world and, for example, it is well known that the results of Phase III trials in which a new drug outperforms a standard treatment are more likely to be published than those in which the new drug performs badly or has unwanted side effects (an example of “publication bias”). But what about the results of Phase I trials? These need to be widely disseminated so that researchers can avoid unknowingly exposing people to potentially dangerous new drugs after similar drugs have caused adverse side effects. However, drug companies are often reluctant to disclose information on early phase trials. In this study, the researchers ask whether the dissemination of the results of Phase I trials is adequate.
What Did the Researchers Do and Find?
The researchers identified 667 drug trial protocols approved in 1994 by 25 French research ethics committees (independent panels of experts that ensure that the rights, safety, and well-being of trial participants are protected). In 2001, questionnaires were mailed to each trial's principal investigator asking whether the trial had been started and completed and whether its results had been published in a medical journal or otherwise disseminated (for example, by presentation at a scientific meeting). 140 questionnaires for Phase I trials and 304 for Phase II–IV trials were returned and analyzed by the investigators. They found that Phase I trials were more likely to have been started and to have been completed than Phase II–IV trials. The results of 86% of the Phase I studies matched the researchers' expectations, but the study hypothesis was confirmed in only 71% of the Phase II–IV trials. Finally, the results of 17% of the Phase I studies were published in scientific journals compared to 43% of the Phase II–IV studies. About half of the Phase I study results were not disseminated in any form.
What Do These Findings Mean?
These findings suggest that the fate of Phase I trials is different from that of other clinical trials and that there is inadequate dissemination of the results of these early trials. These findings may not be generalizable to other countries and may be affected by the poor questionnaire response rate. Nevertheless, they suggest that steps need to be taken to ensure that the results of Phase I studies are more widely disseminated. Recent calls by the World Health Organization and other bodies for mandatory preregistration in trial registries of all Phase I trials as well as all Phase II–IV trials should improve the situation by providing basic information about Phase I trials whose results are not published in full elsewhere.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000034.
Two recent research articles published in PLoS Medicine—by Ida Sim and colleagues (PLoS Med e191) and by Lisa Bero and colleagues (PLoS Med e217)—investigate publication bias in Phase III trials
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the US Food and Drug Administration (the body that approves drugs in the USA) Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.1000034
PMCID: PMC2642878  PMID: 19226185
14.  Financial Conflicts of Interest and Reporting Bias Regarding the Association between Sugar-Sweetened Beverages and Weight Gain: A Systematic Review of Systematic Reviews 
PLoS Medicine  2013;10(12):e1001578.
Maira Bes-Rastrollo and colleagues examine whether financial conflicts of interest are likely to bias conclusions from systematic reviews that investigate the relationship between sugar-sweetened beverages and weight gain or obesity.
Please see later in the article for the Editors' Summary
Background
Industry sponsors' financial interests might bias the conclusions of scientific research. We examined whether financial industry funding or the disclosure of potential conflicts of interest influenced the results of published systematic reviews (SRs) conducted in the field of sugar-sweetened beverages (SSBs) and weight gain or obesity.
Methods and Findings
We conducted a search of the PubMed, Cochrane Library, and Scopus databases to identify published SRs from the inception of the databases to August 31, 2013, on the association between SSB consumption and weight gain or obesity. SR conclusions were independently classified by two researchers into two groups: those that found a positive association and those that did not. These two reviewers were blinded with respect to the stated source of funding and the disclosure of conflicts of interest.
We identified 17 SRs (with 18 conclusions). In six of the SRs, a financial conflict of interest with the food industry was disclosed. Among those reviews without any reported conflict of interest, 83.3% of the conclusions (10/12) were that SSB consumption could be a potential risk factor for weight gain. In contrast, the same percentage of conclusions, 83.3% (5/6), from those SRs disclosing a financial conflict of interest with the food industry were that the scientific evidence was insufficient to support a positive association between SSB consumption and weight gain or obesity. Those reviews with conflicts of interest were five times more likely to present a conclusion of no positive association than those without them (relative risk: 5.0, 95% CI: 1.3–19.3).
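The relative risk quoted above can be reproduced from the underlying 2×2 counts: 5 of 6 conflicted reviews versus 2 of 12 non-conflicted reviews reached a "no positive association" conclusion. A minimal sketch; the review does not state its exact interval method, so the log-scale Wald CI below is an assumption and differs slightly from the published 1.3 to 19.3:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 (a/n1) vs group 2 (b/n2) with a log-scale Wald CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# "No positive association" conclusions: 5/6 conflicted SRs vs 2/12 non-conflicted SRs
rr, lo, hi = risk_ratio_ci(5, 6, 2, 12)
print(f"RR = {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # RR = 5.0
```

The point estimate of 5.0 matches the abstract exactly; the small sample explains the wide interval.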
An important limitation of this study is the impossibility of ruling out the existence of publication bias among those studies not declaring any conflict of interest. However, the best large randomized trials also support a direct association between SSB consumption and weight gain or obesity.
Conclusions
Financial conflicts of interest may bias conclusions from SRs on SSB consumption and weight gain or obesity.
Editors' Summary
Background
In our daily lives, we frequently rely on the results of scientific research to make decisions about our health. If we are healthy, we may seek out scientific advice about how much exercise to do to reduce our risk of a heart attack, and we may follow dietary advice issued by public health bodies to help us maintain a healthy weight. If we are ill, we expect our treatment to be based on the results of clinical trials and other studies. We assume that the scientific research that underlies our decisions about health-related issues is unbiased and accurate. However, there is increasing evidence that the conclusions of industry-sponsored scientific research are sometimes biased. So, for example, reports of drug trials sponsored by pharmaceutical companies sometimes emphasize the positive results of trials and “hide” unwanted side effects deep within the report or omit them altogether.
Why Was This Study Done?
Although the effects of company sponsors on the conclusions of pharmaceutical research have been extensively examined, little is known about the effects of industry sponsorship on nutrition research, even though large commercial entities are increasingly involved in global food and drink production. It is important to know whether the scientific evidence about nutrition is free of bias because biased information might negatively affect the health of entire populations. Moreover, scientific evidence from nutrition research underlies the formulation of governmental dietary guidelines and food-related public health interventions. In this systematic review, the researchers investigate whether the disclosure of potential financial conflicts of interest (for example, research funding by a beverage company) has influenced the results of systematic reviews undertaken to examine the association between the consumption of highly lucrative sugar-sweetened beverages (SSBs) and weight gain or obesity. Systematic reviews identify all the research on a given topic using predefined criteria. In an ideal world, systematic reviews provide access to all the available evidence on specific exposure–disease associations, but publication bias related to authors' conflicts of interest may affect the reliability of the conclusions of such studies.
What Did the Researchers Do and Find?
The researchers identified 18 conclusions from 17 systematic reviews that had investigated the association between SSB consumption and weight gain or obesity. In six of these reviews, a financial conflict of interest with the food industry was disclosed. Among the reviews that reported having no conflict of interest, 83.3% of the conclusions were that SSB consumption could be a potential risk factor for weight gain. By contrast, the same percentage of reviews in which a potential financial conflict of interest was disclosed concluded that the scientific evidence was insufficient to support a positive association between SSB consumption and weight gain, or reported contradictory results and did not state any definitive conclusion about the association between SSB consumption and weight gain. Reviews in which a potential conflict of interest was disclosed were five times more likely to present a conclusion of no positive association between SSB consumption and weight gain than reviews that reported having no financial conflict of interest.
What Do These Findings Mean?
These findings indicate that systematic reviews that reported financial conflicts of interest or sponsorship from food or drink companies were more likely to reach a conclusion of no positive association between SSB consumption and weight gain than reviews that reported having no conflicts of interest. A major limitation of this study is that it cannot assess which interpretation of the available evidence is truly accurate. For example, the scientists involved in the systematic reviews that reported having no conflict of interest may have had preexisting prejudices that affected their interpretation of their findings. However, the interests of the food industry (increased sales of their products) are very different from those of most researchers (the honest pursuit of knowledge), and recent randomized trials support a positive association between SSB consumption and overweight/obesity. Thus, these findings draw attention to possible inaccuracies in scientific evidence from research funded by the food and drink industry. They do not imply that industry sponsorship of nutrition research should be avoided entirely. Rather, as in other research areas, clear guidelines and principles (for example, sponsors should sign contracts that state that they will not be involved in the interpretation of results) need to be established to avoid dangerous conflicts of interest.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001578.
The Research Ethics Program at the University of California, San Diego provides an overview of conflicts of interest for researchers and details of US regulations and guidelines
The PLOS Medicine series on Big Food examines the activities and influence of the food industry in global health
A PLOS Medicine Research Article by Basu et al. uses mathematical modeling to investigate whether SSB taxation would avert obesity and diabetes in India
A 2012 policy brief from the Yale Rudd Center for Food Policy and Obesity discusses current evidence regarding SSB taxes
The US National Institutes of Health has regulations on financial conflicts of interest for institutions applying to receive funding
Wikipedia has pages on conflict of interest, reporting bias, systematic review, and SSBs (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001578
PMCID: PMC3876974  PMID: 24391479
15.  Vitamin A and fish oils for retinitis pigmentosa 
Background
Retinitis pigmentosa (RP) comprises a group of hereditary eye diseases characterized by progressive degeneration of retinal photoreceptors. It results in severe visual loss that may lead to legal blindness. Symptoms may become manifest during childhood or adulthood, and include poor night vision (nyctalopia) and constriction of peripheral vision (visual field loss). This field loss is progressive and usually does not reduce central vision until late in the disease course. The worldwide prevalence of RP is one in 4000, with 100,000 patients affected in the USA. At this time, there is no proven therapy for RP.
Objectives
The objective of this review was to synthesize the best available evidence regarding the effectiveness and safety of vitamin A and fish oils (docosahexaenoic acid (DHA)) in preventing the progression of RP.
Search methods
We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2013, Issue 7), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2013), EMBASE (January 1980 to August 2013), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to August 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 20 August 2013.
Selection criteria
We included randomized controlled trials (RCTs) evaluating the effectiveness of vitamin A, fish oils (DHA) or both, as a treatment for RP. We excluded cluster-randomized trials and cross-over trials.
Data collection and analysis
We pre-specified the following outcomes: mean change from baseline visual field, mean change from baseline electroretinogram (ERG) amplitudes, and anatomic changes as measured by optical coherence tomography (OCT), at one year; as well as mean change in visual acuity at five-year follow-up. Two authors independently evaluated risk of bias for all included trials and extracted data from the publications. We also contacted study investigators for further information on trials with publications that did not report outcomes on all randomized patients.
Main results
We reviewed 394 titles and abstracts and nine ClinicalTrials.gov records and included three RCTs that met our eligibility criteria. The three trials included a total of 866 participants aged four to 55 years with RP of all forms of genetic predisposition. One trial evaluated the effect of vitamin A alone, one trial evaluated DHA alone, and a third trial evaluated DHA and vitamin A versus vitamin A alone. None of the RCTs had protocols available, so selective reporting bias was unclear for all. In addition, one trial did not specify the method for random sequence generation, so there was an unclear risk of bias. All three trials were graded as low risk of bias for all other domains. We did not perform meta-analysis due to clinical heterogeneity of participants and interventions across the included trials.
The primary outcome, mean change of visual field from baseline at one year, was not reported in any of the studies. No toxicity or adverse events were reported in these three trials. No trial reported a statistically significant benefit of vitamin supplementation on the progression of visual field loss or visual acuity loss. Two of the three trials reported statistically significant differences in ERG amplitudes among some subgroups of participants, but these results have not been replicated or substantiated by findings in any of the other trials.
Authors’ conclusions
Based on the results of three RCTs, there is no clear evidence of benefit from treatment with vitamin A and/or DHA for people with RP, in terms of the mean change in visual field and ERG amplitudes at one year and the mean change in visual acuity at five years of follow-up. Since some of the studies in this review included unplanned subgroup analyses suggesting differential effects based on previous vitamin A exposure, investigators should consider examining this issue in future RCTs. Future trials should also take into account the changes observed in ERG amplitudes and other outcome measures from the trials included in this review, as well as previous cohort studies, when calculating sample sizes to ensure adequate power to detect clinically and statistically meaningful differences between treatment arms.
doi:10.1002/14651858.CD008428.pub2
PMCID: PMC4259575  PMID: 24357340
16.  Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data 
PLoS Medicine  2013;10(10):e1001526.
Beate Wieseler and colleagues compare the completeness of reporting of patient-relevant clinical trial outcomes between clinical study reports and publicly available data.
Please see later in the article for the Editors' Summary
Background
Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments.
Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs.
Methods and Findings
We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as “completely reported” or “incompletely reported.” For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall.
We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes.
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs.
Conclusions
In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.
Editors' Summary
Background
People assume that, when they are ill, health care professionals will ensure that they get the best available treatment. In the past, clinicians used their own experience to make decisions about which treatments to offer their patients, but nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical trials, studies that investigate the benefits and harms of drugs and other medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results of clinical research are available for evaluation. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Both types of bias pose a substantial threat to informed medical decision-making.
Why Was This Study Done?
Recent initiatives, such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a precondition for publication in medical journals, aim to prevent these biases but are imperfect. Another way to facilitate the unbiased evaluation of clinical research might be to increase access to clinical study reports (CSRs)—detailed but generally unpublished accounts of clinical trials. Notably, information from CSRs was recently used to challenge conclusions based on published evidence about the efficacy and safety of the antiviral drug oseltamivir and the antidepressant reboxetine. In this study, the researchers compare the information available in CSRs and in publicly available sources (journal publications and registry reports) for the patient-relevant outcomes included in 16 health technology assessments (HTAs; analyses of the medical implications of the use of specific medical technologies) for drugs; the HTAs were prepared by the Institute for Quality and Efficiency in Health Care (IQWiG), Germany's main HTA agency.
What Did the Researchers Do and Find?
The researchers searched for published journal articles and registry reports for each of 101 trials for which the IQWiG had requested and received full CSRs from drug manufacturers during HTA preparation. They then assessed the completeness of information on the patient-relevant benefit and harm outcomes (for example symptom relief and adverse effects, respectively) included in each document type. Eighty-six of the included trials had at least one publicly available data source; the results of 15% of the trials were not available in either journals or registry reports. Overall, the CSRs provided complete information on 86% of the patient-related outcomes, whereas the combined publicly available sources provided complete information on only 39% of the outcomes. For individual outcomes, the CSRs provided complete information on 78%–100% of the benefit outcomes, with the exception of health-related quality of life (57%); combined publicly available sources provided complete information on 20%–53% of these outcomes. The CSRs also provided more information on patient-relevant harm outcomes than the publicly available sources.
What Do These Findings Mean?
These findings show that, for the clinical trials considered here, publicly available sources provide much less information on patient-relevant outcomes than CSRs. The generalizability of these findings may be limited, however, because the trials included in this study are not representative of all trials. Specifically, only CSRs that were voluntarily provided by drug companies were assessed, a limited number of therapeutic areas were covered by the trials, and the trials investigated only drugs. Nevertheless, these findings suggest that access to CSRs is important for the unbiased evaluation of clinical trials and for informed decision-making in health care. Notably, in June 2013, the European Medicines Agency released a draft policy calling for the proactive publication of complete clinical trial data (possibly including CSRs). In addition, the European Union and the European Commission are considering legal measures to improve the transparency of clinical trial data. Both these initiatives will probably only apply to drugs that are approved after January 2014, however, and not to drugs already in use. The researchers therefore call for CSRs to be made publicly available for both past and future trials, a recommendation also supported by the AllTrials initiative, which is campaigning for all clinical trials to be registered and fully reported.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001526.
Wikipedia has pages on evidence-based medicine, publication bias, and health technology assessment (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union, and guidance on the preparation of clinical study reports; its draft policy on the release of data from clinical trials is available
Information about IQWiG is available (in English and German); Informed Health Online is a website provided by IQWiG that provides objective, independent, and evidence-based information for patients (also in English and German)
doi:10.1371/journal.pmed.1001526
PMCID: PMC3793003  PMID: 24115912
17.  Acupuncture for glaucoma 
Background
Glaucoma is a multifactorial optic neuropathy characterized by an acquired loss of retinal ganglion cells at levels beyond normal age-related loss and corresponding atrophy of the optic nerve. Although many treatments are available to manage glaucoma, glaucoma is a chronic condition. Some patients may seek complementary or alternative medicine approaches such as acupuncture to supplement their regular treatment. The underlying plausibility of acupuncture is that disorders related to the flow of Chi (the traditional Chinese concept translated as vital force or energy) can be prevented or treated by stimulating relevant points on the body surface.
Objectives
The objective of this review was to assess the effectiveness and safety of acupuncture in people with glaucoma.
Search methods
We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2012, Issue 12), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to January 2013), EMBASE (January 1980 to January 2013), Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to January 2013), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (January 1937 to January 2013), ZETOC (January 1993 to January 2013), Allied and Complementary Medicine Database (AMED) (January 1985 to January 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov), the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en) and the National Center for Complementary and Alternative Medicine website (NCCAM) (http://nccam.nih.gov). We did not use any language or date restrictions in the search for trials. We last searched the electronic databases on 8 January 2013, with the exception of NCCAM, which was last searched on 14 July 2010. We also handsearched Chinese medical journals at Peking Union Medical College Library in April 2007.
We searched the Chinese Acupuncture Trials Register, the Traditional Chinese Medical Literature Analysis and Retrieval System (TCMLARS), and the Chinese Biological Database (CBM) for the original review; we did not search these databases for the 2013 review update.
Selection criteria
We included randomized controlled trials (RCTs) in which one arm of the study involved acupuncture treatment.
Data collection and analysis
Two authors independently evaluated the search results and then full text articles against the eligibility criteria. We resolved discrepancies by discussion.
Main results
We included one completed and one ongoing trial, and recorded seven trials awaiting assessment for eligibility. These seven trials were written in Chinese and were identified from a systematic review on the same topic published in a Chinese journal. The completed trial compared auricular acupressure, a nonstandard acupuncture technique, with a sham procedure for glaucoma. This trial was rated at high risk of bias for masking of outcome assessors, unclear risk of bias for selective outcome reporting, and low risk of bias for other domains. Intraocular pressure (measured in mm Hg) was significantly lower in the acupressure group than in the sham group at four weeks (mean difference −3.70, 95% confidence interval [CI] −7.11 to −0.29 for the right eye; −4.90, 95% CI −8.08 to −1.72 for the left eye), but did not differ significantly at any other follow-up time point, including the longest follow-up at eight weeks. No statistically significant difference in visual acuity was noted at any follow-up time point. The ongoing trial was registered with the International Clinical Trials Registry Platform (ICTRP) of the World Health Organization. To date this trial has not recruited any participants.
Authors’ conclusions
At this time, it is impossible to draw reliable conclusions from available data to support the use of acupuncture for the treatment of glaucoma. Because of ethical considerations, RCTs comparing acupuncture alone with standard glaucoma treatment or placebo are unlikely to be justified in countries where the standard of care has already been established. Because most glaucoma patients currently cared for by ophthalmologists do not use nontraditional therapy, clinical practice decisions will have to be based on physician judgments and patient preferences, given this lack of data in the literature. Inclusion of the seven Chinese trials in future updates of this review may change our conclusions.
doi:10.1002/14651858.CD006030.pub3
PMCID: PMC4260653  PMID: 23728656
Acupuncture Therapy [*methods]; Acupuncture, Ear; Glaucoma [*therapy]; Randomized Controlled Trials as Topic; Humans
18.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future and would, as a result, improve the quality of the science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential for journals to routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/doi:10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
19.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
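The "OR 1.33 per 2-fold increase in sample size" reported above means sample size entered the logistic model on a log2 scale, so the odds ratio compounds multiplicatively with each doubling. The following is a purely illustrative sketch (the function and the example trial sizes are hypothetical; only the odds ratio of 1.33 comes from the abstract):

```python
import math

# Hypothetical illustration, not the study's data or code: an odds
# ratio of 1.33 per 2-fold increase in sample size corresponds to a
# coefficient of ln(1.33) on log2(sample size) in the logistic model.
or_per_doubling = 1.33
beta = math.log(or_per_doubling)  # coefficient on log2(sample size)

def odds_multiplier(n_small, n_large):
    """Odds-of-publication multiplier when moving from a trial with
    n_small participants to one with n_large, under the fitted model."""
    doublings = math.log2(n_large / n_small)
    return math.exp(beta * doublings)

# A trial 4x larger is two doublings, so the odds multiply by 1.33^2
print(round(odds_multiplier(100, 400), 2))  # → 1.77
```

The same compounding logic applies to any covariate modeled on a log scale: the reported odds ratio is per unit of the transformed variable, not per participant.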
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision are included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all of the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials, 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
• PLoS Medicine recently published an editorial discussing the FDA Amendments Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
• The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
• ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
• The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
20.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included, all of which reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
• Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
• ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
• The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
• PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
21.  Do urology journals enforce trial registration? A cross-sectional study of published trials 
BMJ Open  2011;1(2):e000430.
Objectives
(1) To assess endorsement of trial registration in author instructions of urology-related journals and (2) to assess whether randomised controlled trials (RCTs) in the field of urology were effectively registered.
Design
Cross-sectional study of author instructions and published trials.
Setting
Journals publishing in the field of urology.
Participants
First, the authors analysed author instructions of 55 urology-related journals indexed in ‘Journal Citation Reports 2009’ (12/2010). The authors divided these journals into two groups: those requiring trial registration as a precondition for publication and those not mentioning it. Second, the authors chose the five journals with the highest impact factor (IF) from each group.
Intervention
MEDLINE search to identify RCTs published in these 10 journals in 2009 (01/2011); search of the clinical trials meta-search interface of WHO (International Clinical Trials Registry Platform) for RCTs that lacked information about registration (01–03/2011). Two authors independently assessed the information.
Outcome measures
Proportion of journals providing advice about trial registration and proportion of trials registered.
Results
Of 55 journals analysed, 26 (47.3%) provided some editorial advice about trial registration. Journals with higher IFs were more likely to mention trial registration explicitly (p=0.015). Of 106 RCTs published in 2009, 63 (59.4%) were registered, with a tendency toward an increase among trials published after 2005 (83.3%, p=0.035). Of the RCTs published in journals mentioning and requiring registration, 71.4% (30/42) were registered, compared with 51.6% (33/64) of the RCTs published in journals that did not mention trial registration explicitly. This difference was statistically significant (p=0.04).
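The headline comparison (30/42 vs. 33/64 registered RCTs) can be checked with a Pearson chi-square test on the corresponding 2×2 table. The sketch below assumes no continuity correction was used, since the abstract does not say which test variant the authors applied; under that assumption the hand-computed statistic reproduces the reported p=0.04:

```python
import math

# Reproducing the reported comparison (30/42 vs. 33/64 registered RCTs)
# with an uncorrected Pearson chi-square test. Whether the authors
# applied a Yates continuity correction is not stated in the abstract,
# so the uncorrected form is an assumption.
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (1 df) for the 2x2
    table [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    # For 1 degree of freedom, p = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Registered vs. unregistered: 30/42 (journals requiring registration)
# against 33/64 (journals not mentioning it)
chi2, p = chi2_2x2(30, 12, 33, 31)
print(round(p, 2))  # → 0.04
```

With the Yates correction the p-value rises above 0.05, which is one reason reporting the exact test used matters when results sit near the significance threshold.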
Conclusions
A statement about trial registration in author instructions was associated with a higher proportion of registered RCTs in those journals. Journals with higher IFs were more likely to mention trial registration.
Article summary
Article focus
Trial registration can increase scientific transparency, but its implementation in specialty fields such as urology is unclear.
To assess the endorsement of trial registration in the author instructions of urology-related journals.
To assess whether randomised controlled trials in the field were effectively registered.
Key messages
A statement on trial registration in author instructions was associated with a higher proportion of registered randomised controlled trials.
Journals with high impact factors were more likely to mention trial registration.
We suggest, though, that ensuring trial registration is not solely the responsibility of editors. Medical scientists should realise that trial registration is necessary to contribute to transparency in research.
Strengths and limitations of this study
Two authors independently assessed information regarding editorial advice about trial registration and identified the randomised controlled trials.
Bias may have occurred if registered randomised controlled trials were reported without a registration number, because we could then not identify them in the meta-search interface of the WHO International Clinical Trials Registry Platform.
Results might not be representative of the uro-nephrological field as a whole and reported figures may overestimate compliance with trial registration.
doi:10.1136/bmjopen-2011-000430
PMCID: PMC3236819  PMID: 22146890
22.  A systematic review of how homeopathy is represented in conventional and CAM peer reviewed journals 
Background
Growing popularity of complementary and alternative medicine (CAM) in the public sector is reflected in the scientific community by an increased number of research articles assessing its therapeutic effects. Some suggest that publication biases occur in mainstream medicine, and may also occur in CAM. Homeopathy is one of the most widespread and most controversial forms of CAM. The purpose of this study was to compare the representation of homeopathic clinical trials published in traditional science and CAM journals.
Methods
Literature searches were performed using Medline (PubMed), AMED and Embase computer databases. Search terms included "homeo-pathy, -path, and -pathic" and "clinical" and "trial". All articles published in English over the past 10 years were included. Our search yielded 251 articles overall, of which 46 systematically examined the efficacy of homeopathic treatment. We categorized the overall results of each paper as having either "positive" or "negative" outcomes depending upon the reported effects of homeopathy. We also examined and compared 15 meta-analyses and review articles on homeopathy to ensure our collection of clinical trials was reasonably comprehensive. These review articles were found by substituting the term "review" for "clinical" and "trial".
Results
Forty-six peer-reviewed articles published in a total of 23 different journals were compared (26 in CAM journals and 20 in conventional journals). Of those in conventional journals, 69% reported negative findings compared to only 30% in CAM journals. Very few articles were found to be presented in a "negative" tone, and most were presented using "neutral" or unbiased language.
Conclusion
A considerable difference exists between the number of clinical trials showing positive results published in CAM journals compared with traditional journals. We found that only 30% of the articles published in CAM journals presented negative findings, whereas over twice that proportion did so in traditional journals. These results suggest that a publication bias against homeopathy exists in mainstream journals. Conversely, the same type of publication bias does not appear to exist between review and meta-analysis articles published in the two types of journals.
doi:10.1186/1472-6882-5-12
PMCID: PMC1177924  PMID: 15955254
23.  Single Photon Emission Computed Tomography for the Diagnosis of Coronary Artery Disease 
Executive Summary
In July 2009, the Medical Advisory Secretariat (MAS) began work on Non-Invasive Cardiac Imaging Technologies for the Diagnosis of Coronary Artery Disease (CAD), an evidence-based review of the literature surrounding different cardiac imaging modalities to ensure that appropriate technologies are accessed by patients suspected of having CAD. This project came about when the Health Services Branch at the Ministry of Health and Long-Term Care asked MAS to provide an evidentiary platform on effectiveness and cost-effectiveness of non-invasive cardiac imaging modalities.
After an initial review of the strategy and consultation with experts, MAS identified five key non-invasive cardiac imaging technologies for the diagnosis of CAD. Evidence-based analyses have been prepared for each of these five imaging modalities: cardiac magnetic resonance imaging, single photon emission computed tomography, 64-slice computed tomographic angiography, stress echocardiography, and stress echocardiography with contrast. For each technology, an economic analysis was also completed (where appropriate). A summary decision analytic model was then developed to encapsulate the data from each of these reports (available on the OHTAC and MAS website).
The Non-Invasive Cardiac Imaging Technologies for the Diagnosis of Coronary Artery Disease series is made up of the following reports, which can be publicly accessed at the MAS website at: www.health.gov.on.ca/mas or at www.health.gov.on.ca/english/providers/program/mas/mas_about.html
Single Photon Emission Computed Tomography for the Diagnosis of Coronary Artery Disease: An Evidence-Based Analysis
Stress Echocardiography for the Diagnosis of Coronary Artery Disease: An Evidence-Based Analysis
Stress Echocardiography with Contrast for the Diagnosis of Coronary Artery Disease: An Evidence-Based Analysis
64-Slice Computed Tomographic Angiography for the Diagnosis of Coronary Artery Disease: An Evidence-Based Analysis
Cardiac Magnetic Resonance Imaging for the Diagnosis of Coronary Artery Disease: An Evidence-Based Analysis
Please note that two related evidence-based analyses of non-invasive cardiac imaging technologies for the assessment of myocardial viability are also available on the MAS website:
Positron Emission Tomography for the Assessment of Myocardial Viability: An Evidence-Based Analysis
Magnetic Resonance Imaging for the Assessment of Myocardial Viability: An Evidence-Based Analysis
The Toronto Health Economics and Technology Assessment Collaborative has also produced an associated economic report entitled:
The Relative Cost-effectiveness of Five Non-invasive Cardiac Imaging Technologies for Diagnosing Coronary Artery Disease in Ontario [Internet]. Available from: http://theta.utoronto.ca/reports/?id=7
Objective
The objective of this analysis is to determine the diagnostic accuracy of single photon emission computed tomography (SPECT) in the diagnosis of coronary artery disease (CAD) compared to the reference standard of coronary angiography (CA). The analysis is primarily meant to allow for indirect comparisons between non-invasive strategies for the diagnosis of CAD, using CA as a reference standard.
SPECT
Cardiac SPECT, or myocardial perfusion scintigraphy (MPS), is a widely used non-invasive nuclear imaging technique for investigating ischemic heart disease. SPECT is currently appropriate for all aspects of detecting and managing ischemic heart disease, including diagnosis, risk assessment/stratification, assessment of myocardial viability, and evaluation of left ventricular function. Myocardial perfusion scintigraphy was originally developed as a two-dimensional planar imaging technique, but SPECT acquisition has since become the clinical standard. Cardiac SPECT for the diagnosis of CAD uses an intravenously administered radiopharmaceutical tracer to evaluate regional coronary blood flow, usually at rest and after stress. The radioactive tracers thallium-201 (201Tl), technetium-99m (99mTc), or both may be used. Exercise or a pharmacologic agent is used to achieve stress. After administration of the tracer, its distribution within the myocardium (which is dependent on myocardial blood flow) is imaged using a gamma camera. In SPECT imaging, the gamma camera rotates around the patient for 10 to 20 minutes so that multiple two-dimensional projections are acquired from various angles. The raw data are then processed using computational algorithms to obtain three-dimensional tomographic images.
Since its inception, SPECT has evolved and its techniques/applications have become increasingly more complex and numerous. Accordingly, new techniques such as attenuation correction and ECG gating have been developed to correct for attenuation due to motion or soft-tissue artifact and to improve overall image clarity.
Research Questions
What is the diagnostic accuracy of SPECT for the diagnosis of CAD compared to the reference standard of CA?
Is SPECT cost-effective compared to other non-invasive cardiac imaging modalities for the diagnosis of CAD?
What are the major safety concerns with SPECT when used for the diagnosis of CAD?
Methods
A preliminary literature search was performed across OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) database for all systematic reviews and meta-analyses published between January 1, 2004 and August 22, 2009. A comprehensive systematic review was identified from this search and used as the basis for an updated search.
A second comprehensive literature search was then performed on October 30, 2009 across the same databases for studies published between January 1, 2002 and October 30, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also hand-searched for any additional studies.
Inclusion Criteria
Systematic reviews, meta-analyses, controlled clinical trials, and observational studies
Minimum sample size of 20 patients who completed coronary angiography
Use of CA as a reference standard for the diagnosis of CAD
Data available to calculate true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN)
Accuracy data reported by patient not by segment
English language
Exclusion Criteria
Non-systematic reviews and case reports
Grey literature and abstracts
Trials using planar imaging only
Trials conducted in patients with non-ischemic heart disease
Studies done exclusively in special populations (e.g., patients with left bundle branch block, patients with diabetes, minority populations) unless insufficient data were otherwise available
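The inclusion criteria above require per-patient 2x2 counts (TP, FP, FN, TN) against the coronary angiography reference standard, from which the standard diagnostic accuracy measures follow. A minimal sketch of that calculation (the counts below are hypothetical, for illustration only, not taken from any included study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Per-patient accuracy measures from a 2x2 table versus the
    coronary angiography reference standard."""
    sensitivity = tp / (tp + fn)   # proportion of diseased patients detected
    specificity = tn / (tn + fp)   # proportion of disease-free patients cleared
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only.
sens, spec, ppv, npv = diagnostic_accuracy(tp=80, fp=15, fn=10, tn=45)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```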
Summary of Findings
Eighty-four observational studies, one non-randomized, single arm controlled clinical trial, and one poorly reported trial that appeared to be a randomized controlled trial (RCT) met the inclusion criteria for this review. All studies assessed the diagnostic accuracy of myocardial perfusion SPECT for the diagnosis of CAD using CA as a reference standard. Based on the results of these studies the following conclusions were made:
According to very low quality evidence, the addition of attenuation correction to traditional or ECG-gated SPECT greatly improves the specificity of SPECT for the diagnosis of CAD although this improvement is not statistically significant. A trend towards improvement of specificity was also observed with the addition of ECG gating to traditional SPECT.
According to very low quality evidence, neither the choice of stress agent (exercise or pharmacologic) nor the choice of radioactive tracer (technetium vs. thallium) significantly affects the diagnostic accuracy of SPECT for the diagnosis of CAD, although a trend towards improved accuracy was observed with pharmacologic stress over exercise stress and with technetium over thallium.
Considerable heterogeneity was observed both within and between trials. This heterogeneity may explain why some of the differences observed between accuracy estimates for various subgroups were not statistically significant.
More complex analytic techniques such as meta-regression may help to better understand which study characteristics significantly influence the diagnostic accuracy of SPECT.
PMCID: PMC3377554  PMID: 23074411
24.  A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine 
Head & Face Medicine  2007;3:27.
Background
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts.
Methods
A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression.
Results
HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions have been increasing, but not significantly over time. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of which 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. The article type had no influence on rejection or acceptance. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor for rejection of manuscripts.
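The rates reported above are internally consistent and can be re-derived from the counts given in the abstract:

```python
# Counts as reported in the abstract.
submissions = 126      # articles received in the first year
reviewed = 82          # articles with completed peer review
acceptance_rate = 0.488
invited = 431          # peer reviewers invited in total

print(f"{submissions / 12:.1f} articles/month")         # 10.5, as reported
print(f"{invited / reviewed:.1f} invited/manuscript")    # ~5.3, as reported
print(f"{reviewed * acceptance_rate:.0f} accepted")      # ~40 manuscripts
```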
Conclusion
The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's peer review time was shorter than the six-week turnaround time the Editors set themselves as a maximum. Rejection of manuscripts was associated with the number of invited reviewers; none of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on Open Access, Open Peer review, and transparency of journal operations, is free of 'editorial bias' in accepting manuscripts.
Original data
Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
doi:10.1186/1746-160X-3-27
PMCID: PMC1913501  PMID: 17562003
25.  Role of Editorial and Peer Review Processes in Publication Bias: Analysis of Drug Trials Submitted to Eight Medical Journals 
PLoS ONE  2014;9(8):e104846.
Background
Publication bias is generally ascribed to authors and sponsors failing to submit studies with negative results, but it may also occur after submission. We evaluated whether submitted manuscripts on randomized controlled trials (RCTs) of drugs are more likely to be accepted if they report positive results.
Methods
Manuscripts submitted from January 2010 through April 2012 to one general medical journal (BMJ) and seven specialty journals (Annals of the Rheumatic Diseases, British Journal of Ophthalmology, Gut, Heart, Thorax, Diabetologia, and Journal of Hepatology) were included, if at least one study arm assessed the efficacy or safety of a drug and a statistical test was used to evaluate treatment effects. Publication status was retrospectively retrieved from submission systems or provided by journals. Sponsorship and trial results were extracted from manuscripts and classified according to predefined criteria. Main outcome measure was acceptance for publication.
Results
Of 15,972 manuscripts submitted, 472 (3.0%) were drug RCTs, of which 98 (20.8%) were published. Among submitted drug RCTs, 287 (60.8%) had positive and 185 (39.2%) negative results. Of these, 60 (20.9%) and 38 (20.5%), respectively, were published. Manuscripts on non-industry trials (n = 213) reported positive results in 138 (64.8%) manuscripts, compared to 71 (47.7%) on industry-supported trials (n = 149), and 78 (70.9%) on industry-sponsored trials (n = 110). Twenty-seven (12.7%) non-industry trials were published, compared to 27 (18.1%) industry-supported and 44 (40.0%) industry-sponsored trials. After adjustment for other trial characteristics, manuscripts reporting positive results were not more likely to be published (OR, 1.00; 95% CI, 0.61 to 1.66). Submission to specialty journals, sample size, multicentre status, journal impact factor, and corresponding authors from Europe or US were significantly associated with publication.
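The near-identical publication rates for positive (20.9%) and negative (20.5%) manuscripts can be seen in the crude odds ratio, computed directly from the counts in the abstract (this is the unadjusted figure; the abstract reports an adjusted OR of 1.00 after controlling for other trial characteristics):

```python
# Counts reported in the abstract: of 287 manuscripts with positive
# results, 60 were published; of 185 with negative results, 38 were.
pos_pub, pos_unpub = 60, 287 - 60
neg_pub, neg_unpub = 38, 185 - 38

# Crude (unadjusted) odds ratio for publication given positive results.
crude_or = (pos_pub * neg_unpub) / (pos_unpub * neg_pub)
print(f"crude OR = {crude_or:.2f}")  # ~1.02, i.e. no preference for positive results
```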
Conclusions
For the selected journals, there was no tendency to preferably publish manuscripts on drug RCTs that reported positive results, suggesting that publication bias may occur mainly prior to submission.
doi:10.1371/journal.pone.0104846
PMCID: PMC4130599  PMID: 25118182