1.  Assessment of a Standardized Pre-Operative Telephone Checklist Designed to Avoid Late Cancellation of Ambulatory Surgery: The AMBUPROG Multicenter Randomized Controlled Trial 
PLoS ONE  2016;11(2):e0147194.
Objectives
To assess the impact of a standardized pre-operative telephone checklist on the rate of late cancellations of ambulatory surgery (AMBUPROG trial).
Design
Multicenter, two-arm, parallel-group, open-label randomized controlled trial.
Setting
11 university hospital ambulatory surgery units in Paris, France.
Participants
Patients scheduled for ambulatory surgery and able to be reached by telephone.
Intervention
A 7-item checklist designed to prevent late cancellation, available in five languages and in two versions (for children and adults), was administered 3 to 7 days before the planned date of surgery by an automated phone system or a research assistant. The control group received standard management alone.
Main Outcome Measures
Rate of cancellation on the day of surgery or the day before.
Results
The study population comprised 3900 patients enrolled between November 2012 and September 2013: 1950 were randomized to the checklist arm and 1950 to the control arm. The checklist was administered to 68.8% of patients in the intervention arm, 1002 by the automated phone system and 340 by a research assistant. The rate of late cancellation did not differ significantly between the checklist and control arms (109 [5.6%] vs. 113 [5.8%]; adjusted odds ratio 0.91, 95% confidence interval 0.65–1.29; p = 0.57). Checklist administration revealed that 355 patients (28.0%) had not undergone the tests ordered by the surgeon or anesthetist and that 254 patients (20.0%) still had questions about pre-operative fasting.
Conclusions
A standardized pre-operative telephone checklist did not avoid late cancellations of ambulatory surgery but enabled us to identify several frequent causes.
Trial Registration
ClinicalTrials.gov NCT01732159
doi:10.1371/journal.pone.0147194
PMCID: PMC4734771  PMID: 26829478
2.  Public availability of results of observational studies evaluating an intervention registered at ClinicalTrials.gov 
BMC Medicine  2016;14:7.
Background
Observational studies are essential for assessing safety. The aims of this study were to evaluate whether results of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov were published and, if not, whether they were available through posting on ClinicalTrials.gov or the sponsor website.
Methods
We identified a cohort of observational studies with safety outcome(s) registered on ClinicalTrials.gov after October 1, 2007, and completed between October 1, 2007, and December 31, 2011. We systematically searched PubMed for a publication, as well as ClinicalTrials.gov and the sponsor website for results. The main outcomes were the time to the first publication in journals and to the first public availability of the study results (i.e. published or posted on ClinicalTrials.gov or the sponsor website). For all studies with results publicly available, we evaluated the completeness of reporting (i.e. reported with the number of events per arm) of safety outcomes.
Results
We identified 489 studies; 334 (68 %) were partially or completely funded by industry. Results for only 189 studies (39 %, representing 65 % of the total target number of participants) had been published at least 30 months after study completion. When searching other data sources, we obtained results for 53 % (n = 158; i.e. 93 % of the total target number of participants) of the unpublished studies: 31 % (n = 94) were posted on ClinicalTrials.gov and 21 % (n = 64) on the sponsor website. Compared with non-industry-funded studies, industry-funded studies were less likely to have their results published but not less likely to have them publicly available. Of the 242 studies with a primary outcome recorded as a safety issue, these outcomes were adequately reported for 86 % (114/133) of studies with results available in a publication, 91 % (62/68) with results on ClinicalTrials.gov, and 80 % (33/41) with results on the sponsor website.
Conclusions
Only 39 % of observational studies evaluating an intervention with safety outcome(s) registered at ClinicalTrials.gov had their results published at least 30 months after study completion. Because these observational studies were registered, we could search other sources (results posted at ClinicalTrials.gov and on sponsor websites) and obtain results for half of the unpublished studies and for 93 % of the total target number of participants.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-016-0551-4) contains supplementary material, which is available to authorized users.
doi:10.1186/s12916-016-0551-4
PMCID: PMC4730754  PMID: 26819213
Observational studies; Trial registration; Waste in research
4.  Interpretation of Results of Studies Evaluating an Intervention Highlighted in Google Health News: A Cross-Sectional Study of News 
PLoS ONE  2015;10(10):e0140889.
Background
Mass media through the Internet is a powerful means of disseminating medical research. We aimed to determine whether and how the interpretation of research results is misrepresented by the use of “spin” in the health section of Google News. Spin was defined as a specific way of reporting, from whatever motive (intentional or unintentional), that emphasizes that the beneficial effect of the intervention is greater than that shown by the results.
Methods
We conducted a cross-sectional study of news highlighted in the health section of the US, UK and Canada editions of Google News between July 2013 and January 2014. We searched for news items 3 days a week (Monday, Wednesday, and Friday) over 6 months and selected a sample of 130 news items reporting on a scientific article evaluating the effect of an intervention on human health.
Results
In total, 78% of the news items did not provide a full reference or an electronic link to the scientific article. We found at least one example of spin in 114 (88%) news items and identified 18 different types of spin. These were mainly related to misleading reporting (59%), such as not reporting adverse events that were reported in the scientific article (25%); misleading interpretation (69%), such as claiming a causal effect despite a non-randomized study design (49%); and overgeneralization/misleading extrapolation of the results (41%), such as extrapolating a beneficial effect from an animal study to humans (21%). We also identified some new types of spin, such as highlighting a single patient's experience of the success of a new treatment instead of focusing on the group results.
Conclusions
Interpretation of research results was frequently misrepresented in the health section of Google News. However, we do not know whether this spin originated in the scientific articles themselves or was introduced in the news coverage.
doi:10.1371/journal.pone.0140889
PMCID: PMC4608738  PMID: 26473725
5.  Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention 
Background
Spin represents specific reporting strategies, either intentional or unintentional, to convince the reader that the beneficial effect of the experimental intervention in terms of efficacy and safety is greater than that shown by the results. The objectives of this study were to 1) develop a classification of spin specific to non-randomized studies assessing an intervention and 2) estimate the prevalence of spin in abstracts of reports of such studies.
Methods
In a first step, we developed a classification of spin specific to non-randomized studies through a literature review and a pilot study. In a second step, 2 researchers trained in methodology evaluated the prevalence of spin in the abstracts of all non-randomized studies assessing an intervention published in the BioMed Central Medical Series journals between January 1, 2011 and December 31, 2013. All disagreements were resolved by consensus. We also determined whether the level of spin in abstract conclusions was high (spin reported without uncertainty or recommendations for further trials), moderate (spin reported with some uncertainty or recommendations for further trials) or low (spin reported with both uncertainty and recommendations for further trials).
Results
Among the 128 articles assessed, 107 (84 %) had at least one example of spin in their abstract. The most prevalent spin strategy was the use of causal language, identified in 68 (53 %) abstracts. Other frequent strategies were linguistic spin, inadequate implications for clinical practice, and lack of focus on harm, identified in 33 (26 %), 25 (20 %), and 34 (27 %) abstracts, respectively. The abstract conclusions of 61 (48 %) articles featured a high level of spin.
Conclusion
Abstracts of reports of non-randomized studies assessing an intervention frequently include spin. Efforts to reduce the prevalence of spin in the abstracts of such studies are needed.
Electronic supplementary material
The online version of this article (doi:10.1186/s12874-015-0079-x) contains supplementary material, which is available to authorized users.
doi:10.1186/s12874-015-0079-x
PMCID: PMC4604617  PMID: 26462565
6.  Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial 
BMC Medicine  2015;13:221.
Background
Incomplete reporting is a frequent source of waste in research. Our aim was to evaluate the impact of a writing aid tool (WAT) based on the CONSORT statement and its extension for non-pharmacologic treatments on the completeness of reporting of randomized controlled trials (RCTs).
Methods
We performed a ‘split-manuscript’ RCT with blinded outcome assessment. Participants were masters and doctoral students in public health. They were asked to write, over a 4-hour period, the methods section of a manuscript based on a real RCT protocol, with a different protocol provided to each participant. Methods sections were divided into six different domains: ‘trial design’, ‘randomization’, ‘blinding’, ‘participants’, ‘interventions’, and ‘outcomes’. Participants had to draft all six domains with access to the WAT for a random three of six domains. The random sequence was computer-generated and concealed. For each domain, the WAT comprised reminders of the corresponding CONSORT item(s), bullet points detailing all the key elements to be reported, and examples of good reporting. The control intervention consisted of no reminders. The primary outcome was the mean global score for completeness of reporting (scale 0–10) for all domains written with or without the WAT.
Results
Forty-one participants wrote 41 different manuscripts of RCT methods sections, corresponding to 246 domains (six for each of the 41 protocols). All domains were analyzed. For the primary outcome, the mean (SD) global score for completeness of reporting was higher with than without use of the WAT: 7.1 (1.2) versus 5.0 (1.6), mean difference (95 % CI) 2.1 (1.5 to 2.7), P < 0.01. Completeness of reporting was significantly higher with the WAT for all domains except blinding and outcomes.
Conclusion
Use of the WAT could improve the completeness of manuscripts reporting the results of RCTs.
Trial registration
ClinicalTrials.gov (NCT02127567; registration first received April 29, 2014)
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0460-y) contains supplementary material, which is available to authorized users.
doi:10.1186/s12916-015-0460-y
PMCID: PMC4570037  PMID: 26370288
Clinical epidemiology; CONSORT statement; Randomized controlled trial; Reporting guidelines; Transparency
7.  Consensus on Severity for Ocular Emergency: The BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE] 
Journal of Ophthalmology  2015;2015:576983.
Purpose. To weigh ocular emergency events according to their severity. Methods. A group of ophthalmologists and researchers rated the severity of 86 common ocular emergencies using a Delphi consensus method. Ratings were given on a 7-point scale in a first-round survey. The experts were then provided with the median and quartiles of the ratings of each item and re-evaluated the severity levels in light of the group's first-round responses. The final severity rating for each item was the median rating from the last Delphi round. Results. We invited 398 experts, and 80 (20%) of them, from 18 different countries, agreed to participate. Consensus was reached in the second round, which was completed by 34 experts (43%). Severity ranged from subconjunctival hemorrhage (median = 1, Q1 = 0; Q3 = 1) to penetrating eye injuries collapsing the eyeball with intraocular foreign body, or panophthalmitis with infection following surgery (median = 5, Q1 = 5; Q3 = 6). The ratings did not differ according to the experts' practice. Conclusion. These ratings could be used to assess the severity of ocular emergency events, to serve in composite algorithms for emergency triage, and to standardize research in ocular emergencies.
doi:10.1155/2015/576983
PMCID: PMC4534620  PMID: 26294965
8.  The most important tasks for peer reviewers evaluating a randomized controlled trial are not congruent with the tasks most often requested by journal editors 
BMC Medicine  2015;13:158.
Background
The peer review process is a cornerstone of biomedical research publications. However, it may fail to allow the publication of high-quality articles. We aimed to identify and sort, according to their importance, all tasks that are expected from peer reviewers when evaluating a manuscript reporting the results of a randomized controlled trial (RCT) and to determine which of these tasks are clearly requested by editors in their recommendations to peer reviewers.
Methods
We identified the tasks expected of peer reviewers from 1) a systematic review of the published literature and 2) recommendations to peer reviewers for 171 journals (i.e., the 10 journals with the highest impact factor in each of 14 medical areas and all journals indexed in PubMed that published more than 15 RCTs over 3 months, regardless of medical area). Participants who had peer-reviewed at least one report of an RCT classified the importance of each task relative to the other tasks using a Q-sort technique. Finally, we evaluated editors' recommendations to peer reviewers to determine which tasks editors clearly requested.
Results
The Q-sort survey was completed by 203 participants: 93 (46 %) with clinical expertise, 72 (36 %) with methodological/statistical expertise, 17 (8 %) with expertise in both areas, and 21 (10 %) with other expertise. The task rated most important by participants (evaluating the risk of bias) was clearly requested by only 5 % of editors. In contrast, the task most frequently requested by editors (providing recommendations for publication) was rated in the first tertile by only 21 % of participants.
Conclusions
The most important tasks for peer reviewers were not congruent with the tasks most often requested by journal editors in their guidelines to reviewers.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0395-3) contains supplementary material, which is available to authorized users.
doi:10.1186/s12916-015-0395-3
PMCID: PMC4491236  PMID: 26141137
Peer review; Q-sort; Randomized controlled trials; Recommendations to reviewers
9.  A systematic review of the use of an expertise-based randomised controlled trial design 
Trials  2015;16:241.
Background
Under a conventional two-arm randomised trial design, participants are randomly allocated to an intervention, and participating health professionals are expected to deliver both interventions. However, health professionals often have differing levels of expertise in skill-based interventions such as surgery or psychotherapy. An expertise-based approach to trial design, in which health professionals deliver only an intervention in which they have expertise, has been proposed as an alternative. The aim of this project was to systematically review the use of expertise-based trial designs in the medical literature.
Methods
We carried out a comprehensive search of nine databases—AMED, BIOSIS, CENTRAL, CINAHL, Cochrane Methodology Register, EMBASE, MEDLINE, Science Citation Index, and PsycINFO—from 1966 to 2012 and performed citation searches using the ISI Citation Indexes and Scopus. Studies that used an expertise-based trial design were included. Two review authors independently screened the titles and abstracts and assessed full-text reports. Data were extracted and summarised on the study characteristics, general and expertise-specific study methodology, and conduct.
Results
In total, 7476 titles and abstracts were identified, leading to 43 included studies (54 articles). The vast majority (88 %) used a pure expertise-based design; three (7 %) adopted a hybrid design, and two (5 %) used a design that was unclear. Most studies (79 %) compared substantially different interventions. In many cases, key information relating to the expertise-based design was absent; only 12 (28 %) reported criteria for delivering both interventions. Most studies recruited their target sample size or came very close to it (median 101 % of target, interquartile range 94 % to 118 %), although the target was reported for only 40 % of studies. The proportion of participants who received the allocated intervention was high (92 %, interquartile range 82 % to 99 %).
Conclusions
While use of an expertise-based trial design is growing, it remains uncommon. Reporting of study methodology and, particularly, expertise-related methodology was poor. Empirical evidence provided some support for purported benefits such as high levels of recruitment and compliance with allocation. An expertise-based trial design should be considered but its value seems context-specific, particularly when interventions differ substantially or interventions are typically delivered by different health professionals.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-015-0739-5) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-015-0739-5
PMCID: PMC4468810  PMID: 26025450
Expertise-based; Expertise; Systematic review; Learning; Randomised controlled trial; Trial design; Non-pharmacological interventions; Surgery
10.  Avoidable waste of research related to inadequate methods in clinical trials 
Objective To assess the waste of research related to inadequate methods in trials included in Cochrane reviews and to examine to what extent this waste could be avoided. A secondary objective was to perform a simulation study to re-estimate this avoidable waste if all trials were adequately reported.
Design Methodological review and simulation study.
Data sources Trials included in the meta-analysis of the primary outcome of Cochrane reviews published between April 2012 and March 2013.
Data extraction and synthesis We collected the risk of bias assessment made by the review authors for each trial. For a random sample of 200 trials with at least one domain at high risk of bias, we re-assessed risk of bias and identified all related methodological problems. For each problem, possible adjustments were proposed that were then validated by an expert panel also evaluating their feasibility (easy or not) and cost. Avoidable waste was defined as trials with at least one domain at high risk of bias for which easy adjustments with no or minor cost could change all domains to low risk. In the simulation study, after extrapolating our re-assessment of risk of bias to all trials, we considered each domain rated as unclear risk of bias as missing data and used multiple imputations to determine whether they were at high or low risk.
Results Of 1286 trials from 205 meta-analyses, 556 (43%) had at least one domain at high risk of bias. Among the sample of 200 of these trials, 142 were confirmed as high risk; in these, we identified 25 types of methodological problem. Adjustments were possible in 136 trials (96%). Easy adjustments with no or minor cost could be applied in 71 trials (50%), resulting in 17 trials (12%) changing to low risk for all domains. So the avoidable waste represented 12% (95% CI 7% to 18%) of trials with at least one domain at high risk. After correcting for incomplete reporting, avoidable waste due to inadequate methods was estimated at 42% (95% CI 36% to 49%).
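The headline 12% follows directly from the counts above: of the 142 confirmed high-risk trials, those that easy, low-cost adjustments would have moved to low risk on all domains give
\[
\frac{17}{142} \approx 0.12 = 12\%.
\]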
Conclusions An important burden of wasted research is related to inadequate methods. This waste could be partly avoided by simple and inexpensive adjustments.
doi:10.1136/bmj.h809
PMCID: PMC4372296  PMID: 25804210
11.  Impact of adding a limitations section to abstracts of systematic reviews on readers’ interpretation: a randomized controlled trial 
Background
To allow an accurate evaluation of abstracts of systematic reviews, the PRISMA Statement recommends that the limitations of the evidence (e.g., risk of bias, publication bias, inconsistency, imprecision) be described in the abstract. We aimed to evaluate the impact of adding such a limitations section on readers' interpretation.
Method
We performed a two-arm parallel-group randomized controlled trial (RCT) using a sample of 30 abstracts of systematic reviews evaluating the effects of healthcare interventions, with conclusions favoring the beneficial effect of the experimental treatments. Two formats of each abstract were derived: one without and one with a standardized limitations section written according to the PRISMA statement for abstracts. The primary outcome was readers' confidence in the results of the systematic review as stated in the abstract, assessed on a Likert scale from 0 (not at all confident) to 10 (very confident). In total, 300 participants (corresponding authors of RCT reports indexed in PubMed) were randomized by a web-based procedure to interpret one abstract with a limitations section (n = 150) or without one (n = 150). Participants were blinded to the study hypothesis.
Results
Adding a limitations section did not modify readers' interpretation of findings in terms of confidence in the results (mean difference [95% confidence interval] 0.19 [−0.37 to 0.74], p = 0.50), confidence in the validity of the conclusions (0.07 [−0.49 to 0.62], p = 0.80), or benefit of the experimental intervention (0.12 [−0.42 to 0.44], p = 0.65).
This study is limited in that the participants were expert readers, who are not representative of all readers of systematic reviews.
Conclusion
Adding a limitations section to abstracts of systematic reviews did not affect readers’ interpretation of the abstract results. Other studies are needed to confirm the results and explore the impact of a limitations section on a less expert panel of participants.
Trial registration
ClinicalTrials.gov (NCT01848782).
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-123) contains supplementary material, which is available to authorized users.
doi:10.1186/1471-2288-14-123
PMCID: PMC4247631  PMID: 25420433
Meta-analysis; Systematic review; Bias; Limits; Limitation; Interpretation; Interpretation bias; Misinterpretation; Abstract; Results
12.  Assessing bias in osteoarthritis trials included in Cochrane reviews: protocol for a meta-epidemiological study 
BMJ Open  2014;4(10):e005491.
Introduction
The validity of systematic reviews and meta-analysis depends on methodological quality and unbiased dissemination of trials. Our objective is to evaluate the association of estimates of treatment effects with different bias-related study characteristics in meta-analyses of interventions used for treating pain in osteoarthritis (OA). From the findings, we hope to consolidate guidance on interpreting OA trials in systematic reviews based on empirical evidence from Cochrane reviews.
Methods and analysis
Only systematic reviews that compare experimental interventions with sham, placebo or no-intervention control will be considered eligible. Bias will be assessed with the risk of bias tool, used according to the Cochrane Collaboration's recommendations. Furthermore, center status, trial size and funding will be assessed. The primary outcome (pain) will be abstracted from the first forest plot for overall pain appearing in each Cochrane review. Treatment effect sizes will be expressed as standardised mean differences (SMDs), where the difference in mean values available from the forest plots is divided by the pooled SD. To empirically assess the risk of bias in treatment benefits, we will perform stratified analyses of the trials from the included meta-analyses and assess the interaction between trial characteristics and treatment effect. A relevant study-level covariate is defined as one that decreases the between-study variance (τ²) when included in the mixed-effects statistical model.
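As a point of reference, the SMD computation described above corresponds to the conventional standardized mean difference; the pooled-SD form below is the usual Cohen's d definition and is an assumption here, since the protocol states only that the mean difference is divided by the pooled SD:
\[
\mathrm{SMD} = \frac{\bar{x}_{\mathrm{exp}} - \bar{x}_{\mathrm{ctrl}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.
\]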
Ethics and dissemination
Meta-analyses and randomised controlled trials provide the most reliable basis for treatment of patients with OA, but the actual impact of bias is unclear. This study will systematically examine the methodological quality of OA Cochrane reviews and explore the influence of possible bias on effect estimates. Since our study does not collect primary data, no formal ethical assessment or informed consent is required.
Trial registration number
PROSPERO (CRD42013006924).
doi:10.1136/bmjopen-2014-005491
PMCID: PMC4187994  PMID: 25280805
osteoarthritis; meta-analysis; meta-epidemiology; risk of bias
13.  Impact of Osteopathic Treatment on Pain in Adult Patients with Cystic Fibrosis – A Pilot Randomized Controlled Study 
PLoS ONE  2014;9(7):e102465.
Background
Pain is a common complication in patients with cystic fibrosis (CF) and is associated with shorter survival. We evaluated the impact of osteopathic manipulative treatment (OMT) on pain in adults with CF.
Methods
A pilot multicenter randomized controlled trial was conducted with three parallel arms: OMT (group A, 16 patients), sham OMT (sham treatment; group B, 8 patients) and no treatment (group C, 8 patients). Medical investigators and patients in groups A and B, who received OMT or sham OMT monthly for 6 months, were blinded to treatment assignment. Pain was rated as a composite of its intensity and duration over the previous month. The evolution of chest/back pain after 6 months was compared between group A and groups B+C combined (control group). The evolution of cervical pain, headache and quality of life (QOL) was evaluated in the same way.
Results
There was no statistically significant difference between the treatment and control groups in the decrease of chest/back pain (difference = −2.20, 95% CI [−4.81; 0.42], p = 0.098); group A also did not differ from group B. However, chest/back pain decreased more in groups A (p = 0.002) and B (p = 0.006) than in group C. Cervical pain, headache and QOL scores did not differ between the treatment and control groups.
Conclusion
This pilot study demonstrated the feasibility of evaluating the efficacy of OMT to treat the pain of patients with CF. The lack of difference between the group treated with OMT and the control group may be due to the small number of patients included in this trial, which also precludes any definitive conclusion about the greater decrease of pain in patients receiving OMT or sham OMT than in those with no intervention.
Trial Registration
ClinicalTrials.gov NCT01293019
doi:10.1371/journal.pone.0102465
PMCID: PMC4100932  PMID: 25029347
14.  Reporting funding source or conflict of interest in abstracts of randomized controlled trials, no evidence of a large impact on general practitioners’ confidence in conclusions, a three-arm randomized controlled trial 
BMC Medicine  2014;12:69.
Background
Systematic reporting of funding sources is recommended in the CONSORT Statement for abstracts, but no specific recommendation addresses the reporting of conflicts of interest (CoI). Our objective was to compare physicians' confidence in the conclusions of abstracts of randomized controlled trials of pharmaceutical treatments indexed in PubMed, according to whether the funding source and CoI were reported.
Methods
We planned a three-arm parallel-group randomized trial. French general practitioners (GPs) were invited to participate and were blinded to the study’s aim. We used a representative sample of 75 abstracts of pharmaceutical industry-funded randomized controlled trials published in 2010 and indexed in PubMed. Each abstract was standardized and reported in three formats: 1) no mention of the funding source or CoI; 2) reporting the funding source only; and 3) reporting the funding source and CoI. GPs were randomized according to a computerized randomization on a secure Internet system at a 1:1:1 ratio to assess one abstract among the three formats. The primary outcome was GPs’ confidence in the abstract conclusions (0, not at all, to 10, completely confident). The study was planned to detect a large difference with an effect size of 0.5.
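For context, a detectable standardized effect size of d = 0.5 translates, under the usual two-group approximation (assuming two-sided α = 0.05 and 80% power, neither of which is stated above), into a required group size of roughly
\[
n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{d^2} = \frac{2\,(1.96 + 0.84)^2}{0.5^2} \approx 63 \text{ per group},
\]
comfortably below the 118 GPs eventually randomized to each abstract format.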
Results
Between October 2012 and June 2013, among 605 GPs contacted, 354 were randomized, 118 to each type of abstract. The mean difference (95% confidence interval) in GPs' confidence in abstract findings was 0.2 (−0.6 to 1.0) (P = 0.84) for abstracts reporting the funding source only versus no funding source or CoI; −0.4 (−1.3 to 0.4) (P = 0.39) for abstracts reporting the funding source and CoI versus neither; and −0.6 (−1.5 to 0.2) (P = 0.15) for abstracts reporting the funding source and CoI versus the funding source only.
Conclusions
We found no evidence of a large impact of trial report abstracts mentioning funding sources or CoI on GPs’ confidence in the conclusions of the abstracts.
Trial Registration
ClinicalTrials.gov identifier: NCT01679873
doi:10.1186/1741-7015-12-69
PMCID: PMC4022327  PMID: 24779384
Funding; Conflict of interest; General Practitioner; Abstract; Reporting
15.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first public posting of results was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
16.  Performance of Rapid Diagnostic Tests for Imported Malaria in Clinical Practice: Results of a National Multicenter Study 
PLoS ONE  2013;8(9):e75486.
We compared the performance of four rapid diagnostic tests (RDTs) for imported malaria, and particularly Plasmodium falciparum infection, using thick and thin blood smears as the gold standard. All the tests are designed to detect at least one protein specific to P. falciparum (P. falciparum histidine-rich protein 2 (PfHRP2) or P. falciparum lactate dehydrogenase (PfLDH)) and one pan-Plasmodium protein (aldolase or pan-Plasmodium lactate dehydrogenase (pLDH)). In total, 1,311 consecutive patients presenting to 9 French hospitals with suspected malaria were included in this prospective study between April 2006 and September 2008. Blood smears revealed malaria parasites in 374 cases (29%). For the diagnosis of P. falciparum infection, the three tests detecting PfHRP2 showed high and similar sensitivity (96%), positive predictive value (PPV) (90%) and negative predictive value (NPV) (98%). The PfLDH test showed lower sensitivity (83%) and NPV (80%), despite good PPV (98%). For the diagnosis of non-falciparum species, the PPV and NPV of tests targeting pLDH or aldolase were 94–99% and 52–64%, respectively. PfHRP2-based RDTs are thus an acceptable alternative to routine microscopy for diagnosing P. falciparum malaria. However, as malaria may be misdiagnosed with RDTs, all negative results must be confirmed by the reference diagnostic method when clinical, biological or other factors are highly suggestive of malaria.
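For reference, the performance measures quoted above are the standard diagnostic-accuracy definitions, computed here against the blood-smear gold standard (TP, FP, TN and FN denoting true/false positives/negatives):
\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\mathrm{PPV} = \frac{TP}{TP + FP},
\qquad
\mathrm{NPV} = \frac{TN}{TN + FN}.
\]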
doi:10.1371/journal.pone.0075486
PMCID: PMC3787089  PMID: 24098699
18.  Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomised trials: a cross-sectional study 
BMJ Open  2013;3(8):e003342.
Objective
We examined how assessments of risk of bias of primary studies are carried out and incorporated into the statistical analysis and overall findings of a systematic review.
Design
A cross-sectional review.
Sample
We assessed 200 systematic reviews of randomised trials published between January and March 2012: Cochrane (n=100) and non-Cochrane (Database of Abstracts of Reviews of Effects; n=100).
Main outcomes
Our primary outcome was a descriptive analysis of how assessments of risk of bias are carried out, the methods used, and the extent to which such assessments were incorporated into the statistical analysis and overall review findings.
Results
While Cochrane reviews routinely reported the method of risk of bias assessment and presented their results in text or table format, 20% of non-Cochrane reviews failed to report the method used and 39% did not present the assessment results. Where it was possible to evaluate the individual results of the risk of bias assessment (n=154), 75% (n=116/154) of reviews had ≥1 trial at high risk of bias; the median proportion of trials per review at high risk of bias was 50% (IQR 31% to 89%). Despite this, only 56% (n=65/116) incorporated the risk of bias assessment into the interpretation of the results in the abstract, and only 41% (n=47/116) incorporated it into the interpretation of the conclusions (49% [n=40/81] of Cochrane and 20% [n=7/35] of non-Cochrane reviews). Of the 83% (n=166/200) of systematic reviews that included a meta-analysis, only 11% (n=19/166) incorporated the risk of bias assessment into the statistical analysis.
Conclusions
Cochrane reviews were more likely than non-Cochrane reviews to report how risk of bias assessments of primary studies were carried out; however, both frequently failed to take such assessments into account in the statistical analysis and conclusions of the systematic review.
doi:10.1136/bmjopen-2013-003342
PMCID: PMC3753473  PMID: 23975265
Statistics & Research Methods; Epidemiology; General Medicine (see Internal Medicine)
19.  The Scleroderma Patient-centered Intervention Network (SPIN) Cohort: protocol for a cohort multiple randomised controlled trial (cmRCT) design to support trials of psychosocial and rehabilitation interventions in a rare disease context 
BMJ Open  2013;3(8):e003563.
Introduction
Psychosocial and rehabilitation interventions are increasingly used to attenuate disability and improve health-related quality of life (HRQL) in chronic diseases, but are typically not available for patients with rare diseases. Conducting rigorous, adequately powered trials of these interventions for patients with rare diseases is difficult. The Scleroderma Patient-centered Intervention Network (SPIN) is an international collaboration of patient organisations, clinicians and researchers. The aim of SPIN is to develop a research infrastructure to test accessible, low-cost self-guided online interventions to reduce disability and improve HRQL for people living with the rare disease systemic sclerosis (SSc or scleroderma). Once tested, effective interventions will be made accessible through patient organisations partnering with SPIN.
Methods and analysis
SPIN will employ the cohort multiple randomised controlled trial (cmRCT) design, in which patients consent to participate in a cohort for ongoing data collection. The aim is to recruit 1500–2000 patients from centres across the world within a period of 5 years (2013–2018). Eligible participants are persons ≥18 years of age with a diagnosis of SSc. In addition to baseline medical data, participants will complete patient-reported outcome measures every 3 months. Upon enrolment in the cohort, patients will consent to be contacted in the future to participate in intervention research and to allow their data to be used for comparison purposes for interventions tested with other cohort participants. Once interventions are developed, patients from the cohort will be randomly selected and offered interventions as part of pragmatic RCTs. Outcomes from patients offered interventions will be compared with outcomes from trial-eligible patients who are not offered the interventions.
Ethics and dissemination
The use of the cmRCT design, the development of self-guided online interventions and partnerships with patient organisations will allow SPIN to develop, rigorously test and effectively disseminate psychosocial and rehabilitation interventions for people with SSc.
doi:10.1136/bmjopen-2013-003563
PMCID: PMC3740254  PMID: 23929922
Rheumatology; Statistics & Research Methods; Rehabilitation Medicine; Mental Health
20.  Comparison of Treatment Effect Estimates for Pharmacological Randomized Controlled Trials Enrolling Older Adults Only and Those including Adults: A Meta-Epidemiological Study 
PLoS ONE  2013;8(5):e63677.
Context
Older adults are underrepresented in clinical research. To assess therapeutic efficacy in older patients, some randomized controlled trials (RCTs) include older adults only.
Objective
To compare treatment effects between RCTs including older adults only (elderly RCTs) and RCTs including all adults (adult RCTs) by a meta-epidemiological approach.
Methods
All systematic reviews published in the Cochrane Library (Issue 4, 2011) were screened. Eligible studies were meta-analyses of binary outcomes of pharmacologic treatment including at least one elderly RCT and at least one adult RCT. For each meta-analysis, we compared summary odds ratios for elderly RCTs and adult RCTs by calculating a ratio of odds ratios (ROR). A summary ROR was estimated across all meta-analyses.
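Concretely, each eligible meta-analysis contributes one ratio of odds ratios comparing its elderly and adult RCTs (a sketch of the standard meta-epidemiological computation; the direction convention, with odds ratios below 1 favouring the experimental treatment, is an assumption not stated above):
\[
\mathrm{ROR} = \frac{\mathrm{OR}_{\text{elderly}}}{\mathrm{OR}_{\text{adult}}},
\]
so that, under that convention, an ROR below 1 indicates more favourable apparent treatment effects in elderly RCTs; the summary ROR is then obtained by pooling the log-RORs across meta-analyses.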
Results
We selected 55 meta-analyses including 524 RCTs (17% elderly RCTs). The treatment effects differed beyond that expected by chance for 7 (13%) meta-analyses, showing more favourable treatment effects in elderly RCTs in 5 cases and in adult RCTs in 2 cases. The summary ROR was 0.91 (95% CI, 0.77–1.08, p = 0.28), with substantial heterogeneity (I2 = 51% and τ2 = 0.14). Sensitivity and subgroup analyses by type-of-age RCT (elderly RCTs vs RCTs excluding older adults and vs RCTs of mixed-age adults), type of outcome (mortality or other) and type of comparator (placebo or active drug) yielded similar results.
Conclusions
The efficacy of pharmacologic treatments did not significantly differ, on average, between RCTs including older adults only and RCTs of all adults. However, clinically important discrepancies may occur and should be considered when generalizing evidence from all adults to older adults.
doi:10.1371/journal.pone.0063677
PMCID: PMC3665786  PMID: 23723992
21.  Reporting of analyses from randomized controlled trials with multiple arms: a systematic review 
BMC Medicine  2013;11:84.
Background
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms.
Methods
The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form.
Results
In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 (15.1%) and 48 (16.1%) reports, respectively. More than half of the articles (n = 171, 57.4%) stated that a global test comparison (that is, an assessment of the overall differences between all groups) was planned, but 67 (39.2%) of these 171 articles did not report the planned analysis. Conversely, of the 116 articles reporting a global comparison test, 12 (10.3%) had not planned it. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessments of the difference between two groups), but 20 of these 180 articles (11.1%) did not report them. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned in 44 (21.6%). Less than half of the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analyses as planned.
Conclusions
Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
doi:10.1186/1741-7015-11-84
PMCID: PMC3621416  PMID: 23531230
Systematic review; Randomized controlled trials; Multiple arms; Reporting of analyses
22.  Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors 
Background:
Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales.
Methods:
We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation.
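Concretely, this reduces, for each trial i, to a difference of standardized mean differences pooled with inverse-variance random-effects weights (a minimal sketch; the specific τ² estimator, e.g. DerSimonian–Laird, is an assumption, as the text above states only "inverse variance random-effects meta-analysis"):
\[
\Delta_i = \mathrm{SMD}_i^{\text{nonblinded}} - \mathrm{SMD}_i^{\text{blinded}},
\qquad
\hat{\Delta} = \frac{\sum_i w_i \Delta_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{\hat{\sigma}_i^2 + \hat{\tau}^2},
\]
where σ̂ᵢ² is the within-trial variance of Δᵢ and τ̂² the between-trial variance; Δ̂ < 0, as found here (−0.23), indicates more optimistic nonblinded assessments.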
Results:
We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I2 = 46%, p = 0.02) and unexplained by metaregression.
Interpretation:
We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.
doi:10.1503/cmaj.120744
PMCID: PMC3589328  PMID: 23359047
23.  Development and Validation of a Questionnaire Assessing Fears and Beliefs of Patients with Knee Osteoarthritis: The Knee Osteoarthritis Fears and Beliefs Questionnaire (KOFBeQ) 
PLoS ONE  2013;8(1):e53886.
Objective
We aimed to develop a questionnaire assessing fears and beliefs of patients with knee OA.
Design
We sent a detailed document reporting on a qualitative analysis of interviews of patients with knee OA to experts, and a Delphi procedure was adopted for item generation. Then, 80 physicians recruited 566 patients with knee OA to test the provisional questionnaire. Items were reduced according to their metric properties and exploratory factor analysis. Reliability was tested by the Cronbach α coefficient. Construct validity was tested by divergent validity and confirmatory factor analysis. Test–retest reliability was assessed by the intra-class correlation coefficient (ICC) and the Bland and Altman technique.
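For reference, the internal-consistency measure reported below is the standard Cronbach's α, where k is the number of items, σ²_{Y_i} the variance of item i, and σ²_X the variance of the total score:
\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right).
\]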
Results
In total, 137 items were extracted from analysis of the interview data. Three Delphi rounds were needed to obtain consensus on a 25-item provisional questionnaire. The item-reduction process resulted in an 11-item questionnaire. Selected items represented fears and beliefs about daily living activities (3 items), fears and beliefs about physicians (4 items), fears and beliefs about the disease (2 items), and fears and beliefs about sports and leisure activities (2 items). The Cronbach α coefficient for the global score was 0.85. We observed the expected divergent validity. Confirmatory factor analysis confirmed higher intra-factor than inter-factor correlations. Test–retest reliability was good, with an ICC of 0.81, and Bland and Altman analysis did not reveal a systematic trend.
Conclusions
We propose an 11-item questionnaire assessing patients' fears and beliefs concerning knee OA with good content and construct validity.
doi:10.1371/journal.pone.0053886
PMCID: PMC3549996  PMID: 23349757
24.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. In bivariate and multivariable analyses assessing journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6 [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly of the same type as identified in the press release and article abstract conclusion. Findings of RCTs based on news items were overestimated for ten (24%) reports.
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments, although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand its effects. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether relying on press releases and associated news items alone could lead to misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases, and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. For 27% of reports, interpretation based on the press release alone overestimated the benefits of the experimental treatment compared with interpretation based on the full-text peer-reviewed article. Factors associated with this overestimation of treatment benefits included publication in a specialized journal and the presence of “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as that identified in the press release and article abstract. Finally, interpretation based on the news item overestimated the benefit of the experimental treatment, compared with the full-text article, in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journal articles relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Behind the Headlines, a resource that provides unbiased, evidence-based analysis of health stories that make the news, aimed at both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
25.  Outcomes in Registered, Ongoing Randomized Controlled Trials of Patient Education 
PLoS ONE  2012;7(8):e42934.
Background
With the increasing prevalence of chronic noncommunicable diseases, patient education is becoming more important for strengthening disease prevention and control. We aimed to systematically determine the extent to which registered, ongoing randomized controlled trials (RCTs) evaluating an educational intervention focus on patient-important outcomes (i.e., outcomes measuring patient health status and quality of life).
Methods
On May 6, 2009, we searched for all ongoing RCTs registered in the World Health Organization International Clinical Trials Registry Platform. We used a standardized data extraction form to collect data and determined whether the outcomes assessed were 1) patient-important outcomes, such as clinical events, functional status, pain, or quality of life, or 2) surrogate outcomes, such as biological outcomes, treatment adherence, or patient knowledge.
Principal Findings
We selected 268 of the 642 potentially eligible studies and assessed a random sample of 150. Patient-important outcomes represented 54% (178 of 333) of all primary outcomes and 46% (286 of 623) of all secondary outcomes. Overall, 69% of trials (104 of 150) used at least one patient-important outcome as a primary outcome and 66% (99 of 150) as a secondary outcome. For 31% of trials (46 of 150), primary outcomes were only surrogate outcomes. The results varied by medical area: in neuropsychiatric disorders, patient-important outcomes represented 84% (51 of 61) of primary outcomes, compared with 54% (32 of 59) in malignant neoplasm trials and 18% (4 of 22) in diabetes mellitus trials. In addition, only 35% of trials assessed the long-term (i.e., >6 months) impact of interventions.
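Note that the findings above use two different denominators: percentages of all primary outcomes versus percentages of trials with at least one patient-important primary outcome, which is why 54% of outcomes can coexist with 69% of trials. Below is a minimal sketch, using invented records rather than the study's data, of how both figures fall out of the same outcome-level classification:

    # Invented records for illustration: (trial_id, rank, is_patient_important)
    outcomes = [
        ("T1", "primary", True), ("T1", "primary", False),
        ("T2", "primary", False),
        ("T3", "primary", True),
    ]

    primary = [o for o in outcomes if o[1] == "primary"]

    # Outcome-level denominator: share of all primary outcomes
    pct_outcomes = 100 * sum(o[2] for o in primary) / len(primary)

    # Trial-level denominator: share of trials with at least one
    # patient-important primary outcome
    trials = {o[0] for o in primary}
    trials_with_pi = {o[0] for o in primary if o[2]}
    pct_trials = 100 * len(trials_with_pi) / len(trials)

    print(f"{pct_outcomes:.0f}% of primary outcomes; {pct_trials:.0f}% of trials")
    # -> 50% of primary outcomes; 67% of trials

With these four invented records the trial-level percentage (67%) exceeds the outcome-level one (50%), mirroring the 69% vs. 54% pattern reported above.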
Conclusions
There is a need to improve the relevance of outcomes and to assess the long-term impact of educational interventions in RCTs.
doi:10.1371/journal.pone.0042934
PMCID: PMC3420885  PMID: 22916183
