Results 1-25 (43)
1.  Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial 
BMC Medicine  2015;13:221.
Incomplete reporting is a frequent waste in research. Our aim was to evaluate the impact of a writing aid tool (WAT) based on the CONSORT statement and its extension for non-pharmacologic treatments on the completeness of reporting of randomized controlled trials (RCTs).
We performed a ‘split-manuscript’ RCT with blinded outcome assessment. Participants were master's and doctoral students in public health. They were asked to write, over a 4-hour period, the methods section of a manuscript based on a real RCT protocol, with a different protocol provided to each participant. Methods sections were divided into six domains: ‘trial design’, ‘randomization’, ‘blinding’, ‘participants’, ‘interventions’, and ‘outcomes’. Participants had to draft all six domains, with access to the WAT for a random three of the six. The random sequence was computer-generated and concealed. For each domain, the WAT comprised reminders of the corresponding CONSORT item(s), bullet points detailing all the key elements to be reported, and examples of good reporting. The control intervention consisted of no reminders. The primary outcome was the mean global score for completeness of reporting (scale 0–10) for all domains written with or without the WAT.
Forty-one participants wrote 41 different manuscripts of RCT methods sections, corresponding to 246 domains (six for each of the 41 protocols). All domains were analyzed. For the primary outcome, the mean (SD) global score for completeness of reporting was higher with than without use of the WAT: 7.1 (1.2) versus 5.0 (1.6), for a mean (95% CI) difference of 2.1 (1.5 to 2.7; P < 0.01). Completeness of reporting was significantly higher with the WAT for all domains except blinding and outcomes.
Use of the WAT could improve the completeness of manuscripts reporting the results of RCTs.
Trial registration: http://clinicaltrials.gov NCT02127567 (first received April 29, 2014)
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0460-y) contains supplementary material, which is available to authorized users.
PMCID: PMC4570037  PMID: 26370288
Clinical epidemiology; CONSORT statement; Randomized controlled trial; Reporting guidelines; Transparency
2.  Consensus on Severity for Ocular Emergency: The BAsic SEverity Score for Common OculaR Emergencies [BaSe SCOrE] 
Journal of Ophthalmology  2015;2015:576983.
Purpose. To weigh ocular emergency events according to their severity. Methods. A group of ophthalmologists and researchers rated the severity of 86 common ocular emergencies using a Delphi consensus method. Ratings were assigned on a 7-point scale in a first-round survey. The experts were then given the median and quartiles of the ratings for each item and asked to reevaluate the severity levels in light of the group's first-round responses. The final severity rating for each item corresponded to the median rating from the last Delphi round. Results. We invited 398 experts, and 80 (20%) of them, from 18 different countries, agreed to participate. A consensus was reached in the second round, completed by 24 experts (43%). The severity ranged from subconjunctival hemorrhages (median = 1, Q1 = 0; Q3 = 1) to penetrating eye injuries collapsing the eyeball with intraocular foreign body, or panophthalmitis with infection following surgery (median = 5, Q1 = 5; Q3 = 6). The ratings did not differ according to the experts' practice. Conclusion. These ratings could be used to assess the severity of ocular emergency events, to serve in composite algorithms for emergency triage, and to standardize research in ocular emergencies.
PMCID: PMC4534620  PMID: 26294965
3.  The most important tasks for peer reviewers evaluating a randomized controlled trial are not congruent with the tasks most often requested by journal editors 
BMC Medicine  2015;13:158.
The peer review process is a cornerstone of biomedical research publications. However, it may fail to allow the publication of high-quality articles. We aimed to identify and sort, according to their importance, all tasks that are expected from peer reviewers when evaluating a manuscript reporting the results of a randomized controlled trial (RCT) and to determine which of these tasks are clearly requested by editors in their recommendations to peer reviewers.
We identified the tasks expected of peer reviewers from 1) a systematic review of the published literature and 2) the recommendations to peer reviewers of 171 journals (the 10 journals with the highest impact factor in each of 14 different medical areas, plus all journals indexed in PubMed that published more than 15 RCTs over 3 months, regardless of medical area). Participants who had peer-reviewed at least one report of an RCT were asked to classify the importance of each task relative to the other tasks using a Q-sort technique. Finally, we evaluated editors' recommendations to peer reviewers to determine which tasks were clearly requested by editors.
The Q-sort survey was completed by 203 participants: 93 (46%) with clinical expertise, 72 (36%) with methodological/statistical expertise, 17 (8%) with expertise in both areas, and 21 (10%) with other expertise. The task rated most important by participants (evaluating the risk of bias) was clearly requested by only 5% of editors. In contrast, the task most frequently requested by editors (providing recommendations for publication) was rated in the first tertile by only 21% of participants.
The most important tasks for peer reviewers were not congruent with the tasks most often requested by journal editors in their guidelines to reviewers.
Electronic supplementary material
The online version of this article (doi:10.1186/s12916-015-0395-3) contains supplementary material, which is available to authorized users.
PMCID: PMC4491236  PMID: 26141137
Peer review; Q-sort; Randomized controlled trials; Recommendations to reviewers
4.  A systematic review of the use of an expertise-based randomised controlled trial design 
Trials  2015;16:241.
Under a conventional two-arm randomised trial design, participants are allocated to an intervention and participating health professionals are expected to deliver both interventions. However, health professionals often have differing levels of expertise in skill-based interventions such as surgery or psychotherapy. An expertise-based approach to trial design, in which health professionals deliver only the intervention in which they have expertise, has been proposed as an alternative. The aim of this project was to systematically review the use of an expertise-based trial design in the medical literature.
We carried out a comprehensive search of nine databases—AMED, BIOSIS, CENTRAL, CINAHL, Cochrane Methodology Register, EMBASE, MEDLINE, Science Citation Index, and PsycINFO—from 1966 to 2012 and performed citation searches using the ISI Citation Indexes and Scopus. Studies that used an expertise-based trial design were included. Two review authors independently screened the titles and abstracts and assessed full-text reports. Data were extracted and summarised on the study characteristics, general and expertise-specific study methodology, and conduct.
In total, 7476 titles and abstracts were identified, leading to 43 included studies (54 articles). The vast majority (88 %) used a pure expertise-based design; three (7 %) adopted a hybrid design, and two (5 %) used a design that was unclear. Most studies compared substantially different interventions (79 %). In many cases, key information relating to the expertise-based design was absent; only 12 (28 %) reported criteria for delivering both interventions. Most studies recruited the target sample size or very close to it (median of 101, interquartile range of 94 to 118), although the target was reported for only 40 % of studies. The proportion of participants who received the allocated intervention was high (92 %, interquartile range of 82 to 99 %).
While use of an expertise-based trial design is growing, it remains uncommon. Reporting of study methodology and, particularly, expertise-related methodology was poor. Empirical evidence provided some support for purported benefits such as high levels of recruitment and compliance with allocation. An expertise-based trial design should be considered, but its value appears context-specific, being most relevant when interventions differ substantially or are typically delivered by different health professionals.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-015-0739-5) contains supplementary material, which is available to authorized users.
PMCID: PMC4468810  PMID: 26025450
Expertise-based; Expertise; Systematic review; Learning; Randomised controlled trial; Trial design; Non-pharmacological interventions; Surgery
5.  Avoidable waste of research related to inadequate methods in clinical trials 
Objective: To assess the waste of research related to inadequate methods in trials included in Cochrane reviews and to examine to what extent this waste could be avoided. A secondary objective was to perform a simulation study to re-estimate this avoidable waste if all trials were adequately reported.
Design: Methodological review and simulation study.
Data sources: Trials included in the meta-analysis of the primary outcome of Cochrane reviews published between April 2012 and March 2013.
Data extraction and synthesis: We collected the risk of bias assessment made by the review authors for each trial. For a random sample of 200 trials with at least one domain at high risk of bias, we re-assessed risk of bias and identified all related methodological problems. For each problem, possible adjustments were proposed and then validated by an expert panel, which also evaluated their feasibility (easy or not) and cost. Avoidable waste was defined as trials with at least one domain at high risk of bias for which easy adjustments with no or minor cost could change all domains to low risk. In the simulation study, after extrapolating our re-assessment of risk of bias to all trials, we considered each domain rated as unclear risk of bias as missing data and used multiple imputation to determine whether they were at high or low risk.
Results: Of 1286 trials from 205 meta-analyses, 556 (43%) had at least one domain at high risk of bias. Among the sample of 200 of these trials, 142 were confirmed as high risk; in these, we identified 25 types of methodological problem. Adjustments were possible in 136 trials (96%). Easy adjustments with no or minor cost could be applied in 71 trials (50%), resulting in 17 trials (12%) changing to low risk for all domains. The avoidable waste therefore represented 12% (95% CI 7% to 18%) of trials with at least one domain at high risk. After correcting for incomplete reporting, avoidable waste due to inadequate methods was estimated at 42% (95% CI 36% to 49%).
Conclusions: An important burden of wasted research is related to inadequate methods. This waste could be partly avoided by simple and inexpensive adjustments.
PMCID: PMC4372296  PMID: 25804210
6.  Impact of adding a limitations section to abstracts of systematic reviews on readers’ interpretation: a randomized controlled trial 
To allow an accurate evaluation of abstracts of systematic reviews, the PRISMA Statement recommends that the limitations of the evidence (e.g., risk of bias, publication bias, inconsistency, imprecision) should be described in the abstract. We aimed to evaluate the impact of adding such a limitations section on readers' interpretation.
We performed a two-arm parallel-group randomized controlled trial (RCT) using a sample of 30 abstracts of systematic reviews evaluating the effects of healthcare interventions, with conclusions favoring the beneficial effect of the experimental treatments. Two formats of each abstract were derived: one without and one with a standardized limitations section written according to the PRISMA statement for abstracts. The primary outcome was readers' confidence in the results of the systematic review as stated in the abstract, assessed on a Likert scale from 0 (not at all confident) to 10 (very confident). In total, 300 participants (corresponding authors of RCT reports indexed in PubMed) were randomized by a web-based randomization procedure to interpret one abstract with a limitations section (n = 150) or without one (n = 150). Participants were blinded to the study hypothesis.
Adding a limitations section did not modify readers' interpretation of findings in terms of confidence in the results (mean difference [95% confidence interval] 0.19 [−0.37 to 0.74], p = 0.50), confidence in the validity of the conclusions (0.07 [−0.49 to 0.62], p = 0.80), or benefit of the experimental intervention (0.12 [−0.42 to 0.44], p = 0.65).
This study is limited because the participants were expert-readers and are not representative of all systematic review readers.
Adding a limitations section to abstracts of systematic reviews did not affect readers’ interpretation of the abstract results. Other studies are needed to confirm the results and explore the impact of a limitations section on a less expert panel of participants.
Trial registration (NCT01848782).
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-123) contains supplementary material, which is available to authorized users.
PMCID: PMC4247631  PMID: 25420433
Meta-analysis; Systematic review; Bias; Limits; Limitation; Interpretation; Interpretation bias; Misinterpretation; Abstract; Results
7.  Assessing bias in osteoarthritis trials included in Cochrane reviews: protocol for a meta-epidemiological study 
BMJ Open  2014;4(10):e005491.
The validity of systematic reviews and meta-analyses depends on the methodological quality and unbiased dissemination of trials. Our objective is to evaluate the association between estimates of treatment effects and different bias-related study characteristics in meta-analyses of interventions used for treating pain in osteoarthritis (OA). From the findings, we hope to consolidate guidance on interpreting OA trials in systematic reviews, based on empirical evidence from Cochrane reviews.
Methods and analysis
Only systematic reviews that compare experimental interventions with sham, placebo or no-intervention control will be considered eligible. Bias will be assessed with the risk of bias tool, used according to the Cochrane Collaboration's recommendations. Furthermore, center status, trial size and funding will be assessed. The primary outcome (pain) will be abstracted from the first-appearing forest plot for overall pain in the Cochrane review. Treatment effect sizes will be expressed as standardised mean differences (SMDs), in which the difference in mean values taken from the forest plots is divided by the pooled SD. To empirically assess the risk of bias in treatment benefits, we will perform stratified analyses of the trials from the included meta-analyses and assess the interaction between trial characteristics and treatment effect. A relevant study-level covariate is defined as one that decreases the between-study variance (τ², tau-squared) when included in the mixed-effects statistical model.
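As a worked restatement (not part of the protocol; notation assumed for illustration), the effect size and covariate criterion described above can be written as
\[ \mathrm{SMD} = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{SD_{\text{pooled}}}, \]
with a study-level covariate considered relevant if its inclusion in the mixed-effects model yields \( \tau^2_{\text{with covariate}} < \tau^2_{\text{without covariate}} \).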
Ethics and dissemination
Meta-analyses and randomised controlled trials provide the most reliable basis for treatment of patients with OA, but the actual impact of bias is unclear. This study will systematically examine the methodological quality of OA Cochrane reviews and explore the influence of possible bias on effect estimates. Since our study does not collect primary data, no formal ethical assessment and informed consent are required.
Trial registration number
PROSPERO (CRD42013006924).
PMCID: PMC4187994  PMID: 25280805
osteoarthritis; meta-analysis; meta-epidemiology; risk of bias
8.  Impact of Osteopathic Treatment on Pain in Adult Patients with Cystic Fibrosis – A Pilot Randomized Controlled Study 
PLoS ONE  2014;9(7):e102465.
Pain is a common complication in patients with cystic fibrosis (CF) and is associated with shorter survival. We evaluated the impact of osteopathic manipulative treatment (OMT) on pain in adults with CF.
A pilot multicenter randomized controlled trial was conducted with three parallel arms: OMT (group A, 16 patients), sham OMT (sham treatment, group B, 8 patients) and no treatment (group C, 8 patients). Medical investigators and patients were blinded to treatment allocation for groups A and B, who received OMT or sham OMT monthly for 6 months. Pain was rated as a composite of its intensity and duration over the previous month. The evolution of chest/back pain after 6 months was compared between group A and groups B+C combined (control group). The evolution of cervical pain, headache and quality of life (QOL) was similarly evaluated.
There was no statistically significant difference between the treatment and control groups in the decrease of chest/back pain (difference = −2.20, 95% CI [−4.81; 0.42], p = 0.098); likewise, group A did not differ from group B. However, chest/back pain decreased more in groups A (p = 0.002) and B (p = 0.006) than in group C. Cervical pain, headache and QOL scores did not differ between the treatment and control groups.
This pilot study demonstrated the feasibility of evaluating the efficacy of OMT to treat the pain of patients with CF. The lack of difference between the group treated with OMT and the control group may be due to the small number of patients included in this trial, which also precludes any definitive conclusion about the greater decrease of pain in patients receiving OMT or sham OMT than in those with no intervention.
Trial Registration NCT01293019
PMCID: PMC4100932  PMID: 25029347
9.  Reporting funding source or conflict of interest in abstracts of randomized controlled trials, no evidence of a large impact on general practitioners’ confidence in conclusions, a three-arm randomized controlled trial 
BMC Medicine  2014;12:69.
Systematic reporting of funding sources is recommended in the CONSORT Statement for abstracts. However, no specific recommendation addresses the reporting of conflicts of interest (CoI). The objective was to compare physicians' confidence in the conclusions of abstracts of randomized controlled trials of pharmaceutical treatments indexed in PubMed, according to whether funding sources and CoI were reported.
We planned a three-arm parallel-group randomized trial. French general practitioners (GPs) were invited to participate and were blinded to the study's aim. We used a representative sample of 75 abstracts of pharmaceutical industry-funded randomized controlled trials published in 2010 and indexed in PubMed. Each abstract was standardized and presented in three formats: 1) no mention of the funding source or CoI; 2) reporting the funding source only; and 3) reporting the funding source and CoI. GPs were randomized at a 1:1:1 ratio, by computerized randomization on a secure Internet system, to assess one abstract in one of the three formats. The primary outcome was GPs' confidence in the abstract conclusions (0, not at all confident, to 10, completely confident). The study was planned to detect a large difference, with an effect size of 0.5.
Between October 2012 and June 2013, among 605 GPs contacted, 354 were randomized, 118 for each type of abstract. The mean difference (95% confidence interval) in GPs' confidence in abstract findings was 0.2 (−0.6 to 1.0) (P = 0.84) for abstracts reporting the funding source only versus no funding source or CoI; −0.4 (−1.3 to 0.4) (P = 0.39) for abstracts reporting the funding source and CoI versus no funding source or CoI; and −0.6 (−1.5 to 0.2) (P = 0.15) for abstracts reporting the funding source and CoI versus the funding source only.
We found no evidence of a large impact of trial report abstracts mentioning funding sources or CoI on GPs’ confidence in the conclusions of the abstracts.
Trial registration: ClinicalTrials.gov identifier NCT01679873
PMCID: PMC4022327  PMID: 24779384
Funding; Conflict of interest; General Practitioner; Abstract; Reporting
10.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate the timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms, for all randomized patients or for those who received at least one treatment dose).
Of the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Our results highlight the need to search for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of the results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not yet published and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act. ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
PMCID: PMC3849189  PMID: 24311990
11.  Performance of Rapid Diagnostic Tests for Imported Malaria in Clinical Practice: Results of a National Multicenter Study 
PLoS ONE  2013;8(9):e75486.
We compared the performance of four rapid diagnostic tests (RDTs) for imported malaria, and particularly Plasmodium falciparum infection, using thick and thin blood smears as the gold standard. All the tests are designed to detect at least one protein specific to P. falciparum (Plasmodium histidine-rich protein 2 (PfHRP2) or Plasmodium LDH (PfLDH)) and one pan-Plasmodium protein (aldolase or Plasmodium LDH (pLDH)). In total, 1,311 consecutive patients presenting to 9 French hospitals with suspected malaria were included in this prospective study between April 2006 and September 2008. Blood smears revealed malaria parasites in 374 cases (29%). For the diagnosis of P. falciparum infection, the three tests detecting PfHRP2 showed high and similar sensitivity (96%), positive predictive value (PPV) (90%) and negative predictive value (NPV) (98%). The PfLDH test showed lower sensitivity (83%) and NPV (80%), despite a good PPV (98%). For the diagnosis of non-falciparum species, the PPV and NPV of tests targeting pLDH or aldolase were 94–99% and 52–64%, respectively. PfHRP2-based RDTs are thus an acceptable alternative to routine microscopy for diagnosing P. falciparum malaria. However, as malaria may be misdiagnosed with RDTs, all negative results must be confirmed by the reference diagnostic method when clinical, biological or other factors are highly suggestive of malaria.
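For reference (standard definitions, not specific to this study), the diagnostic measures reported above are
\[ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}, \]
where TP, FP, TN and FN are the numbers of true-positive, false-positive, true-negative and false-negative RDT results against the blood-smear gold standard.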
PMCID: PMC3787089  PMID: 24098699
13.  Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomised trials: a cross-sectional study 
BMJ Open  2013;3(8):e003342.
We examined how assessments of risk of bias of primary studies are carried out and incorporated into the statistical analysis and overall findings of a systematic review.
A cross-sectional review.
We assessed 200 systematic reviews of randomised trials published between January and March 2012; Cochrane (n=100), non-Cochrane (Database of Reviews of Effects) (n=100).
Main outcomes
Our primary outcome was a descriptive analysis of how assessments of risk of bias are carried out, the methods used, and the extent to which such assessments were incorporated into the statistical analysis and overall review findings.
While Cochrane reviews routinely reported the method of risk of bias assessment and presented their results in text or table format, 20% of non-Cochrane reviews failed to report the method used and 39% did not present the assessment results. Where it was possible to evaluate the individual results of the risk of bias assessment (n=154), 75% (n=116/154) of reviews had ≥1 trial at high risk of bias; the median proportion of trials per review at high risk of bias was 50% (IQR 31% to 89%). Despite this, only 56% (n=65/116) incorporated the risk of bias assessment into the interpretation of the results in the abstract, and 41% (n=47/116; 49% [n=40/81] of Cochrane and 20% [n=7/35] of non-Cochrane) incorporated it into the interpretation of the conclusions. Of the 83% (n=166/200) of systematic reviews that included a meta-analysis, only 11% (n=19/166) incorporated the risk of bias assessment into the statistical analysis.
Cochrane reviews were more likely than non-Cochrane reviews to report how risk of bias assessments of primary studies were carried out; however, both frequently failed to take such assessments into account in the statistical analysis and conclusions of the systematic review.
PMCID: PMC3753473  PMID: 23975265
Statistics & Research Methods; Epidemiology; General Medicine (see Internal Medicine)
14.  The Scleroderma Patient-centered Intervention Network (SPIN) Cohort: protocol for a cohort multiple randomised controlled trial (cmRCT) design to support trials of psychosocial and rehabilitation interventions in a rare disease context 
BMJ Open  2013;3(8):e003563.
Psychosocial and rehabilitation interventions are increasingly used to attenuate disability and improve health-related quality of life (HRQL) in chronic diseases, but are typically not available for patients with rare diseases. Conducting rigorous, adequately powered trials of these interventions for patients with rare diseases is difficult. The Scleroderma Patient-centered Intervention Network (SPIN) is an international collaboration of patient organisations, clinicians and researchers. The aim of SPIN is to develop a research infrastructure to test accessible, low-cost self-guided online interventions to reduce disability and improve HRQL for people living with the rare disease systemic sclerosis (SSc or scleroderma). Once tested, effective interventions will be made accessible through patient organisations partnering with SPIN.
Methods and analysis
SPIN will employ the cohort multiple randomised controlled trial (cmRCT) design, in which patients consent to participate in a cohort for ongoing data collection. The aim is to recruit 1500–2000 patients from centres across the world within a period of 5 years (2013–2018). Eligible participants are persons ≥18 years of age with a diagnosis of SSc. In addition to baseline medical data, participants will complete patient-reported outcome measures every 3 months. Upon enrolment in the cohort, patients will consent to be contacted in the future to participate in intervention research and to allow their data to be used for comparison purposes for interventions tested with other cohort participants. Once interventions are developed, patients from the cohort will be randomly selected and offered interventions as part of pragmatic RCTs. Outcomes from patients offered interventions will be compared with outcomes from trial-eligible patients who are not offered the interventions.
Ethics and dissemination
The use of the cmRCT design, the development of self-guided online interventions and partnerships with patient organisations will allow SPIN to develop, rigorously test and effectively disseminate psychosocial and rehabilitation interventions for people with SSc.
PMCID: PMC3740254  PMID: 23929922
Rheumatology; Statistics & Research Methods; Rehabilitation Medicine; Mental Health
15.  Comparison of Treatment Effect Estimates for Pharmacological Randomized Controlled Trials Enrolling Older Adults Only and Those including Adults: A Meta-Epidemiological Study 
PLoS ONE  2013;8(5):e63677.
Older adults are underrepresented in clinical research. To assess therapeutic efficacy in older patients, some randomized controlled trials (RCTs) include older adults only.
To compare treatment effects between RCTs including older adults only (elderly RCTs) and RCTs including all adults (adult RCTs) by a meta-epidemiological approach.
All systematic reviews published in the Cochrane Library (Issue 4, 2011) were screened. Eligible studies were meta-analyses of binary outcomes of pharmacologic treatment including at least one elderly RCT and at least one adult RCT. For each meta-analysis, we compared summary odds ratios for elderly RCTs and adult RCTs by calculating a ratio of odds ratios (ROR). A summary ROR was estimated across all meta-analyses.
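As a worked restatement (notation assumed for illustration, not taken from the paper), the comparison described above is
\[ \mathrm{ROR} = \frac{\mathrm{OR}_{\text{elderly RCTs}}}{\mathrm{OR}_{\text{adult RCTs}}}, \]
with ROR = 1 indicating identical summary treatment effects in the two groups of trials; the summary ROR is then pooled across all meta-analyses.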
We selected 55 meta-analyses including 524 RCTs (17% elderly RCTs). The treatment effects differed beyond that expected by chance for 7 (13%) meta-analyses, showing more favourable treatment effects in elderly RCTs in 5 cases and in adult RCTs in 2 cases. The summary ROR was 0.91 (95% CI, 0.77–1.08, p = 0.28), with substantial heterogeneity (I² = 51% and τ² = 0.14). Sensitivity and subgroup analyses by type-of-age RCT (elderly RCTs vs RCTs excluding older adults and vs RCTs of mixed-age adults), type of outcome (mortality or other) and type of comparator (placebo or active drug) yielded similar results.
The efficacy of pharmacologic treatments did not significantly differ, on average, between RCTs including older adults only and RCTs of all adults. However, clinically important discrepancies may occur and should be considered when generalizing evidence from all adults to older adults.
PMCID: PMC3665786  PMID: 23723992
16.  Reporting of analyses from randomized controlled trials with multiple arms: a systematic review 
BMC Medicine  2013;11:84.
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms.
The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form.
In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned.
Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
PMCID: PMC3621416  PMID: 23531230
Systematic review; Randomized controlled trials; Multiple arms; Reporting of analyses
17.  Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors 
Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales.
We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation.
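As a worked restatement (notation assumed for illustration), the trial-level quantity described above is
\[ \Delta_{\text{ES}} = \mathrm{SMD}_{\text{nonblinded}} - \mathrm{SMD}_{\text{blinded}}, \]
where a negative value indicates a more optimistic estimate from nonblinded assessors; the \( \Delta_{\text{ES}} \) values are then pooled with inverse-variance random-effects meta-analysis.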
We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I² = 46%, p = 0.02) and unexplained by metaregression.
We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.
PMCID: PMC3589328  PMID: 23359047
18.  Development and Validation of a Questionnaire Assessing Fears and Beliefs of Patients with Knee Osteoarthritis: The Knee Osteoarthritis Fears and Beliefs Questionnaire (KOFBeQ) 
PLoS ONE  2013;8(1):e53886.
We aimed to develop a questionnaire assessing fears and beliefs of patients with knee OA.
We sent a detailed document reporting on a qualitative analysis of interviews of patients with knee OA to experts, and a Delphi procedure was adopted for item generation. Then, 80 physicians recruited 566 patients with knee OA to test the provisional questionnaire. Items were reduced according to their metric properties and exploratory factor analysis. Reliability was tested by the Cronbach α coefficient. Construct validity was tested by divergent validity and confirmatory factor analysis. Test–retest reliability was assessed by the intra-class correlation coefficient (ICC) and the Bland and Altman technique.
In total, 137 items were extracted from analysis of the interview data. Three Delphi rounds were needed to obtain consensus on a 25-item provisional questionnaire. The item-reduction process resulted in an 11-item questionnaire. Selected items represented fears and beliefs about daily living activities (3 items), fears and beliefs about physicians (4 items), fears and beliefs about the disease (2 items), and fears and beliefs about sports and leisure activities (2 items). The Cronbach α coefficient of the global score was 0.85. We observed the expected divergent validity. Confirmatory factor analysis confirmed higher intra-factor than inter-factor correlations. Test–retest reliability was good, with an ICC of 0.81, and Bland and Altman analysis did not reveal a systematic trend.
We propose an 11-item questionnaire assessing patients' fears and beliefs concerning knee OA with good content and construct validity.
PMCID: PMC3549996  PMID: 23349757
19.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. In bivariate and multivariable analyses assessing journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6, [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as identified in the press release and article abstract conclusion. Findings of RCTs based on the news items were overestimated for ten (24%) reports.
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether the interpretation of RCT results based on press releases and associated news items could lead to the misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at
The PLOS Hub for Clinical Trials, which collects PLOS journals relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
PMCID: PMC3439420  PMID: 22984354
20.  Outcomes in Registered, Ongoing Randomized Controlled Trials of Patient Education 
PLoS ONE  2012;7(8):e42934.
With the increasing prevalence of chronic noncommunicable diseases, patient education is becoming important to strengthen disease prevention and control. We aimed to systematically determine the extent to which registered, ongoing randomized controlled trials (RCTs) evaluating an educational intervention focus on patient-important outcomes (i.e., outcomes measuring patient health status and quality of life).
On May 6, 2009, we searched for all ongoing RCTs registered in the World Health Organization International Clinical Trials Registry Platform. We used a standardized data extraction form to collect data and determined whether the outcomes assessed were 1) patient-important outcomes, such as clinical events, functional status, pain, or quality of life, or 2) surrogate outcomes, such as biological outcomes, treatment adherence, or patient knowledge.
Principal Findings
We selected 268 of the 642 potentially eligible studies and assessed a random sample of 150. Patient-important outcomes represented 54% (178 of 333) of all primary outcomes and 46% (286 of 623) of all secondary outcomes. Overall, 69% of trials (104 of 150) used at least one patient-important outcome as a primary outcome and 66% (99 of 150) as a secondary outcome. Finally, for 31% of trials (46 of 150), primary outcomes were only surrogate outcomes. The results varied by medical area. In neuropsychiatric disorders, patient-important outcomes represented 84% (51 of 61) of primary outcomes, as compared with 54% (32 of 59) in malignant neoplasm trials and 18% (4 of 22) in diabetes mellitus trials.
In addition, only 35% of trials assessed the long-term impact of interventions (i.e., >6 months).
There is a need to improve the relevance of outcomes and to assess the long-term impact of educational interventions in RCTs.
PMCID: PMC3420885  PMID: 22916183
21.  ASSIST Applicability Scoring of Surgical trials. An Investigator-reported aSsessment Tool 
PLoS ONE  2012;7(8):e42258.
We aimed to develop a new tool for assessing and depicting the applicability of the results of surgical randomized controlled trials (RCTs) from the trial investigators' perspective.
We identified all items related to applicability by a systematic methodological review, and then a sample of surgeons used these items in a web-based survey to evaluate the applicability of their own trial results. Participants rated each applicability item on a numerical scale that was simplified into three categories: 1) items essential to consider, 2) items requiring attention, and 3) items inconsequential to the applicability of the results of their own RCT to clinical practice. For the final tool, we selected only items rated as essential or requiring attention for at least 25% of the trials evaluated. We propose a specific process for constructing the tool and depicting applicability in a graph. We identified all investigators of published and registered ongoing RCTs assessing surgery and invited them to participate in the web-based survey.
In total, 148 surgeons assessed applicability for their own trial and participated in the process of item selection. The final tool contains 22 items (4 dedicated to patients, 5 to centers, 5 to surgeons and 8 to the intervention). We proposed a straightforward process for constructing the graphical tool: 1) a multidisciplinary team of investigators or other care providers participating in the trial could independently assess each item, 2) a consensus method could be used, and 3) the investigators could depict their assessment of the applicability of the trial results in 4 graphs related to patients, centers, surgeons and the intervention.
This investigator-reported assessment tool could help readers define under what conditions they could reasonably apply the results of a surgical RCT to their clinical practice.
PMCID: PMC3419723  PMID: 22916125
22.  Inadequate description of educational interventions in ongoing randomized controlled trials 
Trials  2012;13:63.
The registration of clinical trials has been promoted to prevent publication bias and increase research transparency. Despite general agreement about the minimum amount of information needed for trial registration, we lack clear guidance on descriptions of non-pharmacologic interventions in trial registries. We aimed to evaluate the quality of registry descriptions of non-pharmacologic interventions assessed in ongoing randomized controlled trials (RCTs) of patient education.
On 6 May 2009, we searched for all ongoing RCTs registered in the 10 trial registries accessible through the World Health Organization International Clinical Trials Registry Platform. We included trials evaluating an educational intervention (that is, designed to teach or train patients about their own health) and dedicated to participants, their family members or home caregivers. We used a standardized data extraction form to collect data related to the description of the experimental intervention, the centers, and the caregivers.
We selected 268 of 642 potentially eligible studies and appraised a random sample of 150 records. All selected trials were registered in 4 registries, mainly ClinicalTrials.gov (61%). The median [interquartile range] target sample size was 205 [100 to 400] patients. The comparator was mainly usual care (47%) or active treatment (47%). A minority of records (17%, 95% CI 11 to 23%) reported an overall adequate description of the intervention (that is, a description reporting the content, mode of delivery, number, frequency and duration of sessions, and overall duration of the intervention). Further, for most reports (59%), important information about the content of the intervention was missing. The description of the mode of delivery of the intervention was reported for 52% of studies, the number of sessions for 74%, the frequency of sessions for 58%, the duration of each session for 45% and the overall duration for 63%. Information about the caregivers was missing for 70% of trials. Most trials (73%) took place in the United States or United Kingdom, 64% involved only one center, and participating centers were mainly tertiary-care, academic or university hospitals (51%).
Educational interventions assessed in ongoing RCTs are poorly described in trial registries. The lack of adequate description raises doubts about the ability of trial registration to help patients and researchers learn about the interventions evaluated in trials of patient education.
PMCID: PMC3503701  PMID: 22607344
23.  Underrepresentation of Elderly People in Randomised Controlled Trials. The Example of Trials of 4 Widely Prescribed Drugs 
PLoS ONE  2012;7(3):e33559.
We aimed to determine the representation of elderly people in published reports of randomized controlled trials (RCTs). We focused on trials of 4 medications—pioglitazone, rosuvastatin, risedronate, and valsartan—frequently used by elderly patients with chronic medical conditions.
Methods and Findings
We selected all reports of RCTs indexed in PubMed from 1966 to April 2008 evaluating one of the 4 medications of interest. Estimates of the community-based “on-treatment” population were from a national health insurance database (SNIIR-AM) covering approximately 86% of the population in France. From this database, we evaluated claims data from January 2006 to December 2007 for 1,958,716 patients who received one of the medications of interest for more than 6 months. Of the 155 RCT reports selected, only 3 studies were exclusively of elderly patients (2 assessing valsartan; 1 risedronate). In only 4 of 37 reports (10.8%) for pioglitazone, 4 of 22 (18.2%) for risedronate, 3 of 29 (10.3%) for rosuvastatin and 9 of 67 (13.4%) for valsartan, the proportion of patients aged 65 or older was within or above that treated in clinical practice. In 62.2% of the reports for pioglitazone, 40.9% for risedronate, 37.9% for rosuvastatin, and 70.2% for valsartan, the proportion of patients aged 65 or older was lower than half that in the treated population. The representation of elderly people did not differ by publication date or sample size.
Elderly patients are poorly represented in RCTs of drugs they are likely to receive.
PMCID: PMC3316581  PMID: 22479411
24.  Patients' and Practitioners' Views of Knee Osteoarthritis and Its Management: A Qualitative Interview Study 
PLoS ONE  2011;6(5):e19634.
To identify the views of patients and care providers regarding the management of knee osteoarthritis (OA) and to reveal potential obstacles to improving health care strategies.
We performed a qualitative study based on semi-structured interviews of a stratified sample of 81 patients (59 women) and 29 practitioners (8 women, 11 general practitioners [GPs], 6 rheumatologists, 4 orthopedic surgeons, and 8 [4 GPs] delivering alternative medicine).
Two main domains of patient views were identified: one about the patient–physician relationship and the other about treatments. Patients feel that their complaints are not taken seriously. They also feel that practitioners act as technicians, paying more attention to the knee than to the individual, and they consider that not enough time is spent on information and counseling. They have negative perceptions of drugs and a feeling of medical uncertainty about OA, which leads to less compliance with treatment and a switch to alternative medicine. Patients believe that knee OA is an inevitable illness associated with age, that not much can be done to modify its evolution, that treatments are of little help, and that practitioners have not much to propose. They express unrealistic fears about the impact of knee OA on daily and social life. Practitioners' views differ from those of patients. Physicians emphasize the difficulty in elaborating treatment strategies and the need for a tool to help in treatment choice.
This qualitative study suggests several ways to improve the patient–practitioner relationship and the efficacy of treatment strategies by increasing their acceptability and compliance. Providing adapted and formalized information to patients, adopting more global assessment and therapeutic approaches, and dealing more accurately with patients' paradoxical representation of drug therapy are the main areas for improvement that should be addressed.
PMCID: PMC3088707  PMID: 21573185
25.  Geographical Representativeness of Published and Ongoing Randomized Controlled Trials. The Example of: Tobacco Consumption and HIV Infection 
PLoS ONE  2011;6(2):e16878.
The challenge for evidence-based healthcare is to reduce mortality and the burden of disease. This study aimed to compare where research is conducted with where research is needed for 2 public health priorities: tobacco consumption and HIV infection.
We identified randomized controlled trials (RCTs) included in Cochrane systematic reviews published between 1997 and 2007 and registered ongoing RCTs identified in January 2009 through the World Health Organization's International Clinical Trials Registry Platform (WHO-ICTRP) evaluating interventions aimed at reducing or stopping tobacco use and treating or preventing HIV infection. We used WHO and World Bank reports to classify the countries by income level, and mapped the global burden of disease and mortality attributable to tobacco use and HIV infection to the countries where the trials were performed.
We evaluated 740 RCTs included in systematic reviews and 346 ongoing RCTs. For tobacco use, 4% of RCTs included in systematic reviews and 2% of ongoing trials were performed in low- and middle-income countries, even though these countries represented 70% of the mortality related to tobacco use. For HIV infection, 31% of RCTs included in systematic reviews and 33% of ongoing trials were performed in low- and middle-income countries, even though these countries represented 99% of the mortality related to HIV infection.
Our results highlight an important underrepresentation of low- and middle-income countries in currently available evidence (RCTs included in systematic reviews) and awaiting evidence (registered ongoing RCTs) for reducing or stopping tobacco use and treating or preventing HIV infection.
PMCID: PMC3036724  PMID: 21347383
