1.  The Burden of Musculoskeletal Conditions 
PLoS ONE  2014;9(3):e90633.
Objective
Despite their burden, rheumatic and musculoskeletal diseases (RMDs) arguably receive too little attention from public health authorities in many countries, including developed ones. We assessed their contribution to disability.
Methods
Data on disabilities associated with RMDs were extracted from the national 2008–2009 Disability-Health Survey of 29,931 subjects representative of the population in France. We used the core set of disability categories for RMDs of the World Health Organization's International Classification of Functioning, Disability and Health for analysis. Diagnosis and disabilities were self-reported. We assessed the risk of disability associated with RMDs using odds ratios (ORs) and the societal impact of RMDs using the average attributable fraction (AAF).
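The methods summarize disability risk with odds ratios (ORs) and 95% confidence intervals. As a minimal sketch of that arithmetic only (the study itself used survey weights and covariate adjustment), the Python below computes an unadjusted OR and a Wald confidence interval from hypothetical 2x2 counts; all numbers are illustrative.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & disabled, b = exposed & not disabled,
    c = unexposed & disabled, d = unexposed & not disabled."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not survey data):
# people reporting osteoarthritis vs not, by walking limitation.
print(odds_ratio_ci(a=300, b=900, c=500, d=2800))
```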
Results
Overall 27.7% (about 17.3 million people) (95% CI 26.9–28.4%) of the population reported having RMDs. The most prevalent RMDs were low back pain (12.5%, 12.1–13.1) and osteoarthritis (12.3%, 11.8–12.7). People reporting osteoarthritis were more disabled in walking (adjusted OR 1.9, 1.7–2.2) than those without. People reporting inflammatory arthritis were more limited in activities of daily living (from 1.4, 1.2–1.8 for walking to 2.1, 1.5–2.9 for moving around). From a societal perspective, osteoarthritis was the main contributor to activity limitations (AAF 22% for walking difficulties). Changing jobs was mainly attributed to neck pain (AAF 13%) and low back pain (11.5%).
Conclusion
RMDs are highly prevalent and significantly affect activity limitations and participation restrictions. More effort is needed to improve care and research in this field.
doi:10.1371/journal.pone.0090633
PMCID: PMC3942474  PMID: 24595187
2.  Determinants of satisfaction 1 year after total hip arthroplasty: the role of expectations fulfilment 
Background
Between 7% and 15% of patients are dissatisfied after total hip arthroplasty (THA). We aimed to assess predictors and postoperative determinants of satisfaction and expectation fulfilment one year after THA.
Methods
Before THA surgery, 132 patients from three tertiary care centres and their surgeons were interviewed to assess their expectations with the Hospital for Special Surgery Total Hip Replacement Expectations Survey (THR survey). One year after surgery, patients (n = 123) were contacted by phone to complete a questionnaire on expectation fulfilment (THR survey), satisfaction, functional outcome (WOMAC), and health-related quality of life (SF-12). Univariate and multivariate analyses were performed.
Results
Preoperative predictors of satisfaction were good mental wellbeing (adjusted OR 1.09 [1.02; 1.16], p = 0.01) and optimistic surgeon expectations (1.07 [1.01; 1.14], p = 0.02). The main postoperative determinant of satisfaction was the fulfilment of patients’ expectations (1.08 [1.04; 1.12], p < 0.001). Expectation fulfilment could be predicted before surgery by young age (regression coefficient −0.55 [−0.88; −0.21], p = 0.002), good physical function (−0.96 [−1.82; −0.10], p = 0.03) and good mental wellbeing (0.56 [0.14; 0.99], p = 0.01). Postoperative determinants of expectation fulfilment were functional outcome (−2.10 [−2.79; −1.42], p < 0.001) and pain relief (−14.83 [−22.38; −7.29], p < 0.001).
Conclusion
To improve patient satisfaction after THA, patients’ expectations and their fulfilment need to be carefully addressed. Patients with low mental wellbeing or physical function should be identified and specifically informed on expected surgical outcome. Surgeons’ expectations are predictive of satisfaction and information should aim to lower discrepancy between surgeons’ and patients’ expectations.
doi:10.1186/1471-2474-15-53
PMCID: PMC3936828  PMID: 24564856
Total hip arthroplasty; Expectations; Expectations’ fulfilment; Satisfaction; Outcome
3.  Mechanical ventilation and clinical practice heterogeneity in intensive care units: a multicenter case-vignette study 
Background
Observational studies on mechanical ventilation (MV) show practice variations across ICUs. Using a case-vignette study, we sought to determine the heterogeneity of ICU processes of care related to mechanical ventilation and whether organizational patterns or physician characteristics influence practice variation.
Methods
We conducted a cross-sectional multicenter study using case-vignette methodology. Descriptive statistics were calculated for organizational patterns and respondent characteristics. An Index of Qualitative Variation (IQV), ranging from 0 (no heterogeneity) to a maximum of 1, was calculated.
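The IQV summarizes how evenly answers to a vignette question are spread across response options. The exact variant used in the study is not spelled out here; under that caveat, the sketch below implements the standard normalized form K/(K-1) * (1 - sum of squared proportions) for a hypothetical set of answers.

```python
from collections import Counter

def iqv(answers, k=None):
    """Index of Qualitative Variation for one categorical question.
    answers: list of chosen response options; k: number of possible options
    (defaults to the number of distinct options observed)."""
    counts = Counter(answers)
    k = k or len(counts)
    n = len(answers)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return (k / (k - 1)) * (1 - sum_p2)

# Hypothetical answers from 10 physicians to a 4-option vignette question.
print(iqv(["A", "A", "A", "B", "B", "C", "C", "C", "D", "A"], k=4))  # ~0.93, i.e. high heterogeneity
```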
Results
Forty ICUs from France (N = 33) and Switzerland (N = 7) participated; 396 physicians answered our case-vignettes. There was major heterogeneity of management processes related to MV within and across centers (mean IQV per center 0.51, SD 0.09). We observed the lowest variability (mean IQV per question < 0.4) for questions related to intubation procedure, ventilation of acute respiratory distress syndrome and the use of the semirecumbent position. We observed a high variability (mean IQV per question > 0.6) for questions related to management of endotracheal tube or suctioning, management of sedation and analgesia, and respect of autonomy. Heterogeneity was independent of respondent characteristics and of the presence of written procedures. There was a correlation between the processes associated with the highest variability (mean IQV per question > 0.6) and the annual volume of ICU admission (r = 0.32 (0.01 to 0.58)) and MV (r = 0.38 (0.07 to 0.63)). Within ICUs there was a large heterogeneity regarding knowledge of a local written procedure.
Conclusions
Large clinical practice variations were found among ICUs. High volume centers were more likely to have heterogeneous practices. The presence of a local written procedure or respondent characteristics did not influence practice variation.
doi:10.1186/2110-5820-4-2
PMCID: PMC3922080  PMID: 24484902
Mechanical ventilation; Clinical practice; Volume-outcome; Protocols
4.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 297 (50%) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
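Completeness is assessed for the same 202 trials at both sources, so the posted-versus-published comparison is paired. The abstract does not name the test behind these p-values; as one hedged sketch, the snippet below applies McNemar's test to a hypothetical paired table of "adverse events completely reported" at each source (the margins are chosen to echo the reported 73% vs 45%, but the cell counts are invented).

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired counts for 202 trials.
# Rows: complete at ClinicalTrials.gov (yes / no); columns: complete in journal (yes / no).
table = [[85, 62],   # complete at registry: also complete / not complete in journal
         [ 6, 49]]   # not complete at registry: complete / not complete in journal

result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```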
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
5.  The Influence of Anti-Infective Periodontal Treatment on C-Reactive Protein: A Systematic Review and Meta-Analysis of Randomized Controlled Trials 
PLoS ONE  2013;8(10):e77441.
Background
Periodontal infections are hypothesized to increase the risk of adverse systemic outcomes through inflammatory mechanisms. The magnitude of effect, if any, of anti-infective periodontal treatment on systemic inflammation is unknown, as are the patient populations most likely to benefit. We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to test the hypothesis that anti-infective periodontal treatment reduces systemic C-reactive protein (CRP).
Methods and Findings
MEDLINE, EMBASE, CENTRAL and CINAHL databases were searched using sensitivity-enhancing search terms. Eligible RCTs enrolled patients with periodontal infection and compared a clearly defined anti-infective periodontal intervention (experimental group) with either an “inactive control” (no periodontal intervention) or an “active control” (lower treatment intensity than the experimental group). Mean differences in final CRP values at the earliest post-treatment time point (typically 1-3 months) between experimental and control groups were analyzed using random-effects regression. Among 2,753 possible studies, 20 were selected, including 2,561 randomized patients (median = 57 per study). Baseline CRP values were >3.0 mg/L in 40% of trials. Among studies with a control group receiving no treatment, the mean difference in final CRP values between experimental and control groups was -0.37 mg/L [95% CI -0.64, -0.11] (P=0.005), favoring experimental treatment. Trials in which the experimental group received antibiotics showed stronger effects (P for interaction=0.03), with a mean difference in final CRP values between experimental and control groups of -0.75 mg/L [95% CI -1.17, -0.33]. No treatment effect was observed among studies using an active treatment comparator. Treatment effects were stronger for studies that included patients with co-morbidities vs. studies that included “systemically healthy” patients, although the interaction was not significant (P=0.48).
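The pooling is done on mean differences in final CRP under a random-effects model; the abstract says "random-effects regression" without further detail. As a hedged sketch of the basic idea only, the code below pools hypothetical per-trial mean differences and standard errors with the DerSimonian-Laird estimator.

```python
import math

def dersimonian_laird(effects, ses):
    """Random-effects (DerSimonian-Laird) pooled estimate with a 95% CI.
    effects: per-trial mean differences in final CRP (mg/L); ses: their standard errors."""
    w = [1 / se ** 2 for se in ses]                      # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-trial variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), tau2

# Hypothetical trial-level data for illustration only.
effects = [-0.6, -0.2, -0.9, 0.1, -0.4]
ses = [0.25, 0.30, 0.40, 0.20, 0.35]
print(dersimonian_laird(effects, ses))
```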
Conclusions
Anti-infective periodontal treatment results in modest short-term reductions in systemic CRP.
doi:10.1371/journal.pone.0077441
PMCID: PMC3796504  PMID: 24155956
6.  Translating Cochrane Reviews to Ensure that Healthcare Decision-Making is Informed by High-Quality Research Evidence 
PLoS Medicine  2013;10(9):e1001516.
Erik von Elm and colleagues discuss plans to increase access and global reach of Cochrane Reviews through translations into other languages.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001516
PMCID: PMC3775718  PMID: 24068897
7.  Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomised trials: a cross-sectional study 
BMJ Open  2013;3(8):e003342.
Objective
We examined how assessments of risk of bias of primary studies are carried out and incorporated into the statistical analysis and overall findings of a systematic review.
Design
A cross-sectional review.
Sample
We assessed 200 systematic reviews of randomised trials published between January and March 2012; Cochrane (n=100), non-Cochrane (Database of Reviews of Effects) (n=100).
Main outcomes
Our primary outcome was a descriptive analysis of how assessments of risk of bias are carried out, the methods used, and the extent to which such assessments were incorporated into the statistical analysis and overall review findings.
Results
While Cochrane reviews routinely reported the method of risk of bias assessment and presented their results either in text or table format, 20% of non-Cochrane reviews failed to report the method used and 39% did not present the assessment results. Where it was possible to evaluate the individual results of the risk of bias assessment (n=154), 75% (n=116/154) of reviews had ≥1 trial at high risk of bias; the median proportion of trials per review at high risk of bias was 50% (IQR 31% to 89%). Despite this, only 56% (n=65/116) incorporated the risk of bias assessment into the interpretation of the results in the abstract, and 41% (n=47/116) incorporated it into the interpretation of the conclusions (49% [n=40/81] of Cochrane and 20% [n=7/35] of non-Cochrane reviews). Of the 166 (83%; n=166/200) systematic reviews that included a meta-analysis, only 11% (n=19/166) incorporated the risk of bias assessment into the statistical analysis.
Conclusions
Cochrane reviews were more likely than non-Cochrane reviews to report how risk of bias assessments of primary studies were carried out; however, both frequently failed to take such assessments into account in the statistical analysis and conclusions of the systematic review.
doi:10.1136/bmjopen-2013-003342
PMCID: PMC3753473  PMID: 23975265
Statistics & Research Methods; Epidemiology; General Medicine (see Internal Medicine)
8.  Agreement among Healthcare Professionals in Ten European Countries in Diagnosing Case-Vignettes of Surgical-Site Infections 
PLoS ONE  2013;8(7):e68618.
Objective
Although surgical-site infection (SSI) rates are advocated as a major evaluation criterion, the reproducibility of SSI diagnosis is unknown. We assessed agreement in diagnosing SSI among specialists involved in SSI surveillance in Europe.
Methods
Twelve case-vignettes based on suspected SSI were submitted to 100 infection-control physicians (ICPs) and 86 surgeons in 10 European countries. Each participant scored eight randomly-assigned case-vignettes on a secure online relational database. The intra-class correlation coefficient (ICC) was used to assess agreement for SSI diagnosis on a 7-point Likert scale and the kappa coefficient to assess agreement for SSI depth on a three-point scale.
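Agreement on SSI depth is summarized with a kappa coefficient on a three-point scale. The study pools many raters across countries and specialties; as a minimal illustration of the underlying statistic only, the sketch below computes unweighted Cohen's kappa for two hypothetical raters scoring the same vignettes.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    categories = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in categories)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical SSI-depth ratings (superficial / deep / organ-space) for 8 vignettes.
rater_icp     = ["superficial", "deep", "deep",  "organ", "superficial", "deep", "organ", "superficial"]
rater_surgeon = ["superficial", "deep", "organ", "organ", "deep",        "deep", "organ", "superficial"]
print(round(cohens_kappa(rater_icp, rater_surgeon), 2))
```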
Results
Intra-specialty agreement for SSI diagnosis ranged across countries and specialties from 0.00 (95% CI, 0.00–0.35) to 0.65 (0.45–0.82). Inter-specialty agreement varied across countries from 0.04 (0.00–0.62) to 0.55 (0.37–0.74), the latter in Germany. For all countries pooled, intra-specialty agreement was poor for surgeons (0.24, 0.14–0.42) and good for ICPs (0.41, 0.28–0.61). Reading SSI definitions improved agreement among ICPs (0.57) but not surgeons (0.09). Intra-specialty agreement for SSI depth ranged across countries and specialties from 0.05 (0.00–0.10) to 0.50 (0.45–0.55) and was not improved by reading the SSI definitions.
Conclusion
Among ICPs and surgeons evaluating case-vignettes of suspected SSI, considerable disagreement occurred regarding the diagnosis, with variations across specialties and countries.
doi:10.1371/journal.pone.0068618
PMCID: PMC3706413  PMID: 23874690
9.  Comparison of Treatment Effect Estimates for Pharmacological Randomized Controlled Trials Enrolling Older Adults Only and Those including Adults: A Meta-Epidemiological Study 
PLoS ONE  2013;8(5):e63677.
Context
Older adults are underrepresented in clinical research. To assess therapeutic efficacy in older patients, some randomized controlled trials (RCTs) include older adults only.
Objective
To compare treatment effects between RCTs including older adults only (elderly RCTs) and RCTs including all adults (adult RCTs) by a meta-epidemiological approach.
Methods
All systematic reviews published in the Cochrane Library (Issue 4, 2011) were screened. Eligible studies were meta-analyses of binary outcomes of pharmacologic treatment including at least one elderly RCT and at least one adult RCT. For each meta-analysis, we compared summary odds ratios for elderly RCTs and adult RCTs by calculating a ratio of odds ratios (ROR). A summary ROR was estimated across all meta-analyses.
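The comparison statistic is the ratio of odds ratios (ROR): the summary OR from elderly RCTs divided by the summary OR from adult RCTs within one meta-analysis, later pooled across meta-analyses. The sketch below shows only that core arithmetic for a single meta-analysis, using invented subgroup summaries; the paper's full meta-epidemiological model is more involved.

```python
import math

def ratio_of_odds_ratios(or_elderly, se_log_elderly, or_adult, se_log_adult, z=1.96):
    """ROR comparing the summary OR in elderly RCTs with that in adult RCTs,
    with a Wald 95% CI on the log scale (independent subgroups assumed)."""
    log_ror = math.log(or_elderly) - math.log(or_adult)
    se = math.sqrt(se_log_elderly ** 2 + se_log_adult ** 2)
    return math.exp(log_ror), (math.exp(log_ror - z * se), math.exp(log_ror + z * se))

# Hypothetical subgroup summaries for one meta-analysis.
print(ratio_of_odds_ratios(or_elderly=0.75, se_log_elderly=0.15,
                           or_adult=0.85, se_log_adult=0.10))
```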
Results
We selected 55 meta-analyses including 524 RCTs (17% elderly RCTs). The treatment effects differed beyond that expected by chance for 7 (13%) meta-analyses, showing more favourable treatment effects in elderly RCTs in 5 cases and in adult RCTs in 2 cases. The summary ROR was 0.91 (95% CI, 0.77–1.08, p = 0.28), with substantial heterogeneity (I2 = 51% and τ2 = 0.14). Sensitivity and subgroup analyses by RCT age profile (elderly RCTs vs RCTs excluding older adults and vs mixed-age adult RCTs), type of outcome (mortality or other) and type of comparator (placebo or active drug) yielded similar results.
Conclusions
The efficacy of pharmacologic treatments did not significantly differ, on average, between RCTs including older adults only and RCTs of all adults. However, clinically important discrepancies may occur and should be considered when generalizing evidence from all adults to older adults.
doi:10.1371/journal.pone.0063677
PMCID: PMC3665786  PMID: 23723992
10.  Serum Levels of Beta2-Microglobulin and Free Light Chains of Immunoglobulins Are Associated with Systemic Disease Activity in Primary Sjögren’s Syndrome. Data at Enrollment in the Prospective ASSESS Cohort 
PLoS ONE  2013;8(5):e59868.
Objectives
To analyze the clinical and immunological characteristics at enrollment in a large prospective cohort of patients with primary Sjögren's syndrome (pSS) and to investigate the association between serum BAFF, beta2-microglobulin and free light chains of immunoglobulins and systemic disease activity at enrollment.
Methods
Three hundred and ninety-five patients with pSS according to the American-European Consensus Criteria were included from fifteen centers of Rheumatology and Internal Medicine in the “Assessment of Systemic Signs and Evolution of Sjögren's Syndrome” (ASSESS) 5-year prospective cohort. At enrollment, serum markers were assessed, as well as disease activity, measured with the EULAR Sjögren's Syndrome Disease Activity Index (ESSDAI).
Results
Median patient age was 58 years (25th–75th percentile: 51–67) and median disease duration was 5 (2–9) years. Median ESSDAI at enrollment was 2 (0–7), with 30.9% of patients having features of systemic involvement. Patients with elevated BAFF, beta2-microglobulin, kappa FLCs and lambda FLCs had higher ESSDAI scores at enrollment (4 [2–11] vs 2 [0–7], P = 0.03; 4 [1–11] vs 2 [0–7], P < 0.0001; 4 [2–10] vs 2 [0–6.6], P < 0.0001; and 4 [2–8.2] vs 2 [0–7.0], P = 0.02, respectively). In multivariate analysis, increased beta2-microglobulin, kappa and lambda FLCs were associated with a higher ESSDAI score. Median BAFF and beta2-microglobulin were higher in the 16 patients with a history of lymphoma (1173.3 (873.1–3665.5) vs 898.9 (715.9–1187.2) pg/ml, P = 0.01, and 2.6 (2.2–2.9) vs 2.1 (1.8–2.6) mg/l, P = 0.04, respectively).
Conclusion
In pSS, higher levels of beta2-microglobulin and free light chains of immunoglobulins are associated with increased systemic disease activity.
doi:10.1371/journal.pone.0059868
PMCID: PMC3663789  PMID: 23717383
11.  Use of Trial Register Information during the Peer Review Process 
PLoS ONE  2013;8(4):e59910.
Introduction
Evidence in the medical literature suggests that trial registration may not be preventing selective reporting of results. We wondered about the place of such information in the peer-review process.
Method
We asked 1,503 corresponding authors of clinical trials and 1,733 reviewers to complete an online survey soliciting their views on the use of trial registry information during the peer-review process.
Results
In total, 1,136 people responded (37.5%): 713 authors and 423 reviewers. Of these, 676 (59.5%) had reviewed an article reporting a clinical trial in the past 2 years. Among these, 232 (34.3%) examined information registered on a trial registry. If one or more items (primary outcome, eligibility criteria, etc.) differed between the registry record and the manuscript, 206 (88.8%) mentioned the discrepancy in their review comments, 46 (19.8%) advised editors not to accept the manuscript, and 8 did nothing. The reviewers' reasons for not using the trial registry information included a lack of registration number in the manuscript (n = 132; 34.2%), lack of time (n = 128; 33.2%), lack of usefulness of registered information for peer review (n = 100; 25.9%), lack of awareness about registries (n = 54; 14%), and excessive complexity of the process (n = 39; 10.1%).
Conclusion
This survey revealed that only one-third of the peer reviewers surveyed examined registered trial information and reported any discrepancies to journal editors.
doi:10.1371/journal.pone.0059910
PMCID: PMC3622662  PMID: 23593154
12.  Reporting of analyses from randomized controlled trials with multiple arms: a systematic review 
BMC Medicine  2013;11:84.
Background
Multiple-arm randomized trials can be more complex in their design, data analysis, and result reporting than two-arm trials. We conducted a systematic review to assess the reporting of analyses in reports of randomized controlled trials (RCTs) with multiple arms.
Methods
The literature in the MEDLINE database was searched for reports of RCTs with multiple arms published in 2009 in the core clinical journals. Two reviewers extracted data using a standardized extraction form.
Results
In total, 298 reports were identified. Descriptions of the baseline characteristics and outcomes per group were missing in 45 reports (15.1%) and 48 reports (16.1%), respectively. More than half of the articles (n = 171, 57.4%) reported that a planned global test comparison was used (that is, assessment of the global differences between all groups), but 67 (39.2%) of these 171 articles did not report details of the planned analysis. Of the 116 articles reporting a global comparison test, 12 (10.3%) did not report the analysis as planned. In all, 60% of publications (n = 180) described planned pairwise test comparisons (that is, assessment of the difference between two groups), but 20 of these 180 articles (11.1%) did not report the pairwise test comparisons. Of the 204 articles reporting pairwise test comparisons, the comparisons were not planned for 44 (21.6%) of them. Less than half the reports (n = 137; 46%) provided baseline and outcome data per arm and reported the analysis as planned.
Conclusions
Our findings highlight discrepancies between the planning and reporting of analyses in reports of multiple-arm trials.
doi:10.1186/1741-7015-11-84
PMCID: PMC3621416  PMID: 23531230
Systematic review; Randomized controlled trials; Multiple arms; Reporting of analyses
13.  Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors 
Background:
Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales.
Methods:
We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation.
Results:
We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I2 = 46%, p = 0.02) and unexplained by metaregression.
Interpretation:
We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.
doi:10.1503/cmaj.120744
PMCID: PMC3589328  PMID: 23359047
14.  Development and Validation of a Questionnaire Assessing Fears and Beliefs of Patients with Knee Osteoarthritis: The Knee Osteoarthritis Fears and Beliefs Questionnaire (KOFBeQ) 
PLoS ONE  2013;8(1):e53886.
Objective
We aimed to develop a questionnaire assessing fears and beliefs of patients with knee OA.
Design
We sent a detailed document reporting on a qualitative analysis of interviews of patients with knee OA to experts, and a Delphi procedure was adopted for item generation. Then, 80 physicians recruited 566 patients with knee OA to test the provisional questionnaire. Items were reduced according to their metric properties and exploratory factor analysis. Reliability was tested by the Cronbach α coefficient. Construct validity was tested by divergent validity and confirmatory factor analysis. Test–retest reliability was assessed by the intra-class correlation coefficient (ICC) and the Bland and Altman technique.
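Reliability is tested with the Cronbach α coefficient, which compares the sum of item variances with the variance of the total score. The sketch below applies the standard formula to a small invented response matrix; it is meant only to make the formula concrete, not to reproduce the study analysis.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response lists
    (same respondents, in the same order, for every item)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical answers of 5 patients to 3 questionnaire items (0-10 scale).
items = [[8, 6, 7, 3, 9],
         [7, 5, 8, 2, 9],
         [9, 6, 6, 4, 8]]
print(round(cronbach_alpha(items), 2))
```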
Results
137 items were extracted from analysis of the interview data. Three Delphi rounds were needed to obtain consensus on a 25-item provisional questionnaire. The item-reduction process resulted in an 11-item questionnaire. Selected items represented fears and beliefs about daily living activities (3 items), fears and beliefs about physicians (4 items), fears and beliefs about the disease (2 items), and fears and beliefs about sports and leisure activities (2 items). The Cronbach α coefficient of the global score was 0.85. We observed the expected divergent validity. Confirmatory factor analysis confirmed higher intra-factor than inter-factor correlations. Test–retest reliability was good, with an ICC of 0.81, and Bland and Altman analysis did not reveal a systematic trend.
Conclusions
We propose an 11-item questionnaire assessing patients' fears and beliefs concerning knee OA with good content and construct validity.
doi:10.1371/journal.pone.0053886
PMCID: PMC3549996  PMID: 23349757
15.  Impact of evergreening on patients and health insurance: a meta analysis and reimbursement cost analysis of citalopram/escitalopram antidepressants 
BMC Medicine  2012;10:142.
Background
"Evergreening" refers to the numerous strategies whereby owners of pharmaceutical products use patent laws and minor drug modifications to extend their monopoly privileges on the drug. We aimed to evaluate the impact of evergreening through the case study of the antidepressant citalopram and its chiral switch form escitalopram by evaluating treatment efficacy and acceptability for patients, as well as health insurance costs for society.
Methods
We performed meta-analyses to assess efficacy and acceptability. We compared direct evidence (meta-analysis of results of head-to-head trials) and indirect evidence (adjusted indirect comparison of results of placebo-controlled trials). To assess health insurance costs, we analyzed individual reimbursement data from a representative sample of the French National Health Insurance Inter-regime Information System (SNIIR-AM) from 2003 to 2010, which allowed for projecting these results to the whole SNIIR-AM population (53 million people).
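An adjusted indirect comparison (Bucher-style) derives the escitalopram-versus-citalopram effect from the two sets of placebo-controlled trials: on the log scale, the indirect OR is the difference of the two drug-versus-placebo log ORs, and their variances add. The sketch below shows that arithmetic with invented summary numbers; the study's actual estimates appear in the Results.

```python
import math

def adjusted_indirect_or(or_a_placebo, se_log_a, or_b_placebo, se_log_b, z=1.96):
    """Indirect OR of A vs B through the common placebo comparator
    (Bucher method): log OR_AB = log OR_A,placebo - log OR_B,placebo."""
    log_or = math.log(or_a_placebo) - math.log(or_b_placebo)
    se = math.sqrt(se_log_a ** 2 + se_log_b ** 2)
    return math.exp(log_or), (math.exp(log_or - z * se), math.exp(log_or + z * se))

# Hypothetical drug-vs-placebo summaries (A = escitalopram, B = citalopram).
print(adjusted_indirect_or(or_a_placebo=1.55, se_log_a=0.10,
                           or_b_placebo=1.50, se_log_b=0.12))
```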
Results
In the meta-analysis of seven head-to-head trials (2,174 patients), efficacy was significantly better for escitalopram than citalopram (combined odds ratio (OR) 1.60 (95% confidence interval 1.05 to 2.46)). However, for the adjusted indirect comparison of 10 citalopram and 12 escitalopram placebo-controlled trials, 2,984 and 3,777 patients respectively, efficacy was similar for the two drug forms (combined indirect OR 1.03 (0.82 to 1.30)). Because of the discrepancy, we could not combine direct and indirect data (test of inconsistency, P = 0.07). A similar discrepancy was found for treatment acceptability. The overall reimbursement cost burden for the citalopram, escitalopram and its generic forms was 120.6 million Euros in 2010, with 96.8 million Euros for escitalopram.
Conclusions
The clinical benefit of escitalopram versus citalopram remains uncertain. In our case study of evergreening, escitalopram accounted for a substantially higher proportion of the overall reimbursement cost burden than citalopram and the generic forms.
doi:10.1186/1741-7015-10-142
PMCID: PMC3520785  PMID: 23167972
Evergreening; Meta-analysis; Health Insurance Reimbursement; Escitalopram; Citalopram; Chiral switch; French health information system; generic drugs
16.  Case-Only Designs in Pharmacoepidemiology: A Systematic Review 
PLoS ONE  2012;7(11):e49444.
Background
Case-only designs have been used since the late 1980s. In these designs, as opposed to case-control or cohort studies, only cases are required and each case serves as its own control, which eliminates selection bias and confounding related to control subjects as well as confounding by time-invariant characteristics. The objectives of this systematic review were to analyze how the two main case-only designs – case-crossover (CC) and self-controlled case series (SCCS) – have been applied and reported in the pharmacoepidemiology literature, in terms of the designs' validity assumptions and specificities.
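For context on the CC design: in a 1:1 matched case-crossover analysis, each case's exposure in the hazard window is compared with its own exposure in an earlier control window, and only discordant windows are informative; the conditional odds ratio is the number of cases exposed in the hazard window only divided by the number exposed in the control window only. The sketch below illustrates that estimate with invented counts; real analyses typically use conditional logistic regression and often several control windows.

```python
import math

def case_crossover_or(exposed_hazard_only, exposed_control_only, z=1.96):
    """Matched-pair (1:1) case-crossover odds ratio from discordant windows."""
    or_ = exposed_hazard_only / exposed_control_only
    se_log = math.sqrt(1 / exposed_hazard_only + 1 / exposed_control_only)
    return or_, (math.exp(math.log(or_) - z * se_log),
                 math.exp(math.log(or_) + z * se_log))

# Hypothetical discordant counts: 40 cases exposed only in the hazard window,
# 25 exposed only in the control window (concordant cases are uninformative).
print(case_crossover_or(40, 25))
```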
Methodology/Principal Findings
We systematically selected all reports in this field involving case-only designs from MEDLINE and EMBASE up to September 15, 2010. Data were extracted using a standardized form. The analysis included 93 reports: 50 (54%) used the CC design and 45 (48%) the SCCS design; 2 reports combined both designs. All applicable validity assumptions of the designs were fulfilled in 12 (24%) CC and 18 (40%) SCCS articles. Fifty articles (54%), comprising 15 (30%) CC and 35 (78%) SCCS reports, adequately addressed the specificities of the case-only analyses in the way they reported results.
Conclusions/Significance
Our systematic review underlines that CC and SCCS designs need to be implemented more rigorously with regard to their validity assumptions and that the reporting of their results needs to improve.
doi:10.1371/journal.pone.0049444
PMCID: PMC3500300  PMID: 23166668
17.  Adjustment for reporting bias in network meta-analysis of antidepressant trials 
Background
Network meta-analysis (NMA), a generalization of conventional meta-analysis (MA), allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias. We aimed to extend such methods to NMA.
Methods
We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published; trials with a lower propensity of publication are given greater weight in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pair-wise comparisons between drugs, the probabilities of being the best drug, and the ranking of drugs. We compared the results of the 2 adjustment models applied to published data with those of NMA of published data and of NMA of FDA data, the latter considered to represent the totality of the data.
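For the first model, the intuition is that the bias-adjusted effect is the regression's prediction at a standard error of zero (an ideally large trial). The paper fits this within a Bayesian NMA; the sketch below conveys the same idea in a much simpler frequentist form for a single pairwise comparison, regressing invented trial effects on their standard errors with inverse-variance weights.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-trial effect sizes (drug vs placebo) and their standard errors.
effects = np.array([0.45, 0.38, 0.30, 0.55, 0.20, 0.15])
ses     = np.array([0.20, 0.18, 0.12, 0.25, 0.08, 0.07])

# Regress effect size on its standard error, weighting by inverse variance.
X = sm.add_constant(ses)                       # columns: intercept, SE
model = sm.WLS(effects, X, weights=1 / ses ** 2).fit()

# The intercept is the "adjusted" effect extrapolated to SE = 0;
# the slope on SE captures the small-study (possible reporting-bias) trend.
print(model.params)                            # [intercept, slope on SE]
```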
Results
Both adjustment models showed reduced estimated effects for the 12 drugs relative to the placebo as compared with NMA of published data. Pair-wise effect sizes between drugs, probabilities of being the best drug and ranking of drugs were modified. Estimated drug effects relative to the placebo from both adjustment models were corrected (i.e., similar to those from NMA of FDA data) for some drugs but not others, which resulted in differences in pair-wise effect sizes between drugs and ranking.
Conclusions
In this case study, adjustment models showed that NMA of published data was not robust to reporting bias and provided estimates closer to those of NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and on the assumption that conventional MAs in the network share a common mean bias mechanism.
doi:10.1186/1471-2288-12-150
PMCID: PMC3537713  PMID: 23016799
Network meta-analysis; Publication bias; Small-study effect
18.  Respective Contribution of Chronic Conditions to Disability in France: Results from the National Disability-Health Survey 
PLoS ONE  2012;7(9):e44994.
Background
Representative national data on disability are becoming increasingly important in helping policymakers decide on public health strategies. We assessed the respective contribution of chronic health conditions to disability for three age groups (18–40, 40–65, and >65 years old) using data from the 2008–2009 Disability-Health Survey in France.
Methods
Data on 12 chronic conditions and on disability for 24,682 adults living in households were extracted from the Disability-Health Survey results. A weighting factor was applied to obtain representative estimates for the French population. Disability was defined as at least one restriction in activities of daily living (ADL), severe disability as the inability to perform at least one ADL alone, and self-reported disability as a general feeling of being disabled. To account for co-morbidities, we assessed the contribution of each chronic disorder to disability by using the average attributable fraction (AAF).
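The average attributable fraction handles co-morbidity by averaging, over all orderings of the conditions, the extra disability attributable to a condition when it is added after those preceding it in the ordering. The sketch below assumes a helper joint_af(subset) returning the attributable fraction for a set of conditions (hypothetical here; in practice derived from the fitted disability model) and averages the sequential contributions over permutations.

```python
from itertools import permutations

def average_attributable_fractions(conditions, joint_af):
    """Average attributable fraction (AAF) of each condition: the mean, over all
    orderings, of its sequential contribution joint_af(prior + {c}) - joint_af(prior)."""
    contributions = {c: [] for c in conditions}
    for order in permutations(conditions):
        covered = set()
        for c in order:
            before = joint_af(frozenset(covered))
            covered.add(c)
            contributions[c].append(joint_af(frozenset(covered)) - before)
    return {c: sum(v) / len(v) for c, v in contributions.items()}

# Hypothetical joint attributable fractions for subsets of two conditions
# (in practice these come from the survey's disability model).
AF = {frozenset(): 0.0,
      frozenset({"musculoskeletal"}): 0.20,
      frozenset({"neurological"}): 0.18,
      frozenset({"musculoskeletal", "neurological"}): 0.30}

print(average_attributable_fractions(["musculoskeletal", "neurological"], AF.get))
```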
Findings
We estimated that 38.8 million people in France (81.7% [95% CI 80.9;82.6]) had a chronic condition: 14.3% (14.0;14.6) considered themselves disabled, 4.6% (4.4;4.9) were restricted in ADL and 1.7% (1.5;1.8) were severely disabled. Musculoskeletal and sensorial impairments contributed the most to self-reported disability (AAF 15.4% and 12.3%). Neurological and musculoskeletal diseases had the largest impact on disability (AAF 17.4% and 16.4%, respectively). Neurological disorders contributed the most to severe disability (AAF 31.0%). Psychiatric diseases contributed the most to disability categories for patients 18–40 years old (AAFs 23.8%–40.3%). Cardiovascular conditions were also among the top four contributors to disability categories (AAFs 8.5%–11.1%).
Conclusions
Neurological, musculoskeletal, and cardiovascular chronic disorders are the main contributors to disability in France. Psychiatric impairments impose a heavy burden on people 18–40 years old. These findings should help policymakers define priorities for health-service delivery in France and perhaps other developed countries.
doi:10.1371/journal.pone.0044994
PMCID: PMC3443206  PMID: 23024781
19.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analysis assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6 [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion. Findings of RCTs based on the news item were overestimated for ten (24%) reports.
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether the interpretation of RCT results based on press releases and associated news items could lead to the misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journals relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert is an online free database for science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
20.  Outcomes in Registered, Ongoing Randomized Controlled Trials of Patient Education 
PLoS ONE  2012;7(8):e42934.
Background
With the increasing prevalence of chronic noncommunicable diseases, patient education is becoming important to strengthen disease prevention and control. We aimed to systematically determine the extent to which registered, ongoing randomized controlled trials (RCTs) evaluating an educational intervention focus on patient-important outcomes (i.e., outcomes measuring patient health status and quality of life).
Methods
On May 6, 2009, we searched for all ongoing RCTs registered in the World Health Organization International Clinical Trials Registry platform. We used a standardized data extraction form to collect data and determined whether the outcomes assessed were 1) patient-important outcomes such as clinical events, functional status, pain, or quality of life or 2) surrogate outcomes, such as biological outcome, treatment adherence, or patient knowledge.
Principal Findings
We selected 268 of the 642 potentially eligible studies and assessed a random sample of 150. Patient-important outcomes represented 54% (178 of 333) of all primary outcomes and 46% (286 of 623) of all secondary outcomes. Overall, 69% of trials (104 of 150) used at least one patient-important outcome as a primary outcome and 66% (99 of 150) as a secondary outcome. Finally, for 31% of trials (46 of 150), primary outcomes were only surrogate outcomes. The results varied by medical area. In neuropsychiatric disorders, patient-important outcomes represented 84% (51 of 61) of primary outcomes, as compared with 54% (32 of 59) in malignant neoplasm trials and 18% (4 of 22) in diabetes mellitus trials.
In addition, only 35% assessed the long-term impact of interventions (i.e., >6 months).
Conclusions
There is a need to improve the relevance of outcomes and to assess the long-term impact of educational interventions in RCTs.
doi:10.1371/journal.pone.0042934
PMCID: PMC3420885  PMID: 22916183
21.  ASSIST Applicability Scoring of Surgical trials. An Investigator-reported aSsessment Tool 
PLoS ONE  2012;7(8):e42258.
Context
We aimed to develop a new tool for assessing and depicting the applicability of the results of surgical randomized controlled trials (RCTs) from the trial investigators' perspective.
Methods
We identified all items related to applicability through a systematic methodological review, and a sample of surgeons then used these items in a web-based survey to evaluate the applicability of their own trial results. For each applicability item, participants rated applicability on a numerical scale, which was simplified into three categories: 1) items essential to consider, 2) items requiring attention, and 3) items inconsequential to the applicability of the results of their own RCT to clinical practice. For the final tool, we selected only items that were rated as essential or requiring attention for at least 25% of the trials evaluated. We propose a specific process to construct the tool and to depict applicability in a graph. We identified all investigators of published and registered ongoing RCTs assessing surgery and invited them to participate in the web-based survey.
Results
148 surgeons assessed applicability for their own trial and participated in the process of item selection. The final tool contains 22 items (4 dedicated to patients, 5 to centers, 5 to surgeons and 8 to the intervention). We proposed a straightforward process of constructing the graphical tool: 1) a multidisciplinary team of investigators or other care providers participating in the trial could independently assess each item, 2) a consensus method could be used, and 3) the investigators could depict their assessment of the applicability of the trial results in 4 graphs related to patients, centers, surgeons and the intervention.
Conclusions
This investigator-reported assessment tool could help readers define under what conditions they could reasonably apply the results of a surgical RCT to their clinical practice.
doi:10.1371/journal.pone.0042258
PMCID: PMC3419723  PMID: 22916125
22.  A home-visiting intervention targeting determinants of infant mental health: the study protocol for the CAPEDP randomized controlled trial in France 
BMC Public Health  2012;12:648.
Background
Several studies suggest that the number of risk factors rather than their nature is key to mental health disorders in childhood.
Method and design
The objective of this multicentre randomized controlled parallel trial (PROBE methodology) is to assess the impact, in a multi-risk French urban sample, of a home-visiting program targeting child mental health and its major determinants. This paper describes the protocol of this study. Pregnant women were eligible if they were living in the intervention area, able to speak French, less than 26 years old, having their first child, and at less than 27 weeks of amenorrhea, and if at least one of the following criteria was true: 1) less than twelve years of education, 2) intending to bring up their child without the presence of the child’s father, or 3) low income. Participants were randomized into either the intervention or the control group. All had access to usual care in mother-child centres and community mental health services free of charge in every neighbourhood. Psychologists conducted all home visits, which were planned on a weekly basis from the 7th month of pregnancy, progressively decreasing in frequency until the child’s second birthday. Principal outcome measures included child mental health at 24 months and two major mediating variables for infant mental health: postnatal maternal depression and the quality of the caring environment. A total of 440 families were recruited, of which a subsample of 120 families received specific attachment and caregiver behaviour assessments. Assessment was conducted by an independent assessment team during home visits and, for the attachment study, in a specifically created Attachment Assessment laboratory.
Discussion
The CAPEDP study is the first large-scale randomised controlled infant mental health promotion programme to take place in France. A distinctive feature of the programme was that all home visits were conducted by specifically trained, supervised psychologists rather than nurses. Significant challenges included designing a mental health promotion programme targeting vulnerable families within one of the most generous but little-assessed health and social care systems in the Western world.
Trial registration
ClinicalTrials.gov identifier: NCT00392847.
doi:10.1186/1471-2458-12-648
PMCID: PMC3490937  PMID: 22888979
Prevention; Mental health promotion; Home visiting; Infant mental health; Postnatal depression; Security of attachment and attachment disorganisation in infants; Randomized controlled trial
23.  Participant Informed Consent in Cluster Randomized Trials: Review 
PLoS ONE  2012;7(7):e40436.
Background
The Nuremberg code defines the general ethical framework of medical research, with participant consent as its cornerstone. In cluster randomized trials (CRTs), obtaining participant informed consent raises logistic and methodological concerns. First, with randomization of large clusters such as geographical areas, obtaining individual informed consent may be impossible. Second, participants in randomized clusters cannot avoid certain interventions, which implies that participant informed consent refers only to data collection, not administration of an intervention. Third, complete participant information may itself be a source of selection bias, a further methodological concern. We assessed whether participant informed consent was required in such trials, which type of consent was required, and whether the trial was at risk of selection bias because of the very nature of participant information.
Methods and Findings
We systematically reviewed all reports of CRTs published in MEDLINE in 2008 and surveyed the corresponding authors regarding the nature of the informed consent and the process of participant inclusion. We identified 173 reports and obtained a response from 113 authors (65.3%). In total, 23.7% of the reports lacked information on ethics committee approval or participant consent, 53.1% of authors declared that participant consent was for data collection only, and 58.5% that the allocation group was not specified to participants. The process of recruitment (the chronology of participant recruitment with regard to cluster randomization) was rarely reported, and we estimated that only 56.6% of the trials were free of potential selection bias.
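The abstract does not spell out the exact criteria used to judge freedom from selection bias; as an assumption-labeled sketch, one plausible reading (recruitment completed before cluster randomization, or recruiters blinded to allocation) could be coded as follows. The function and its logic are inferred for illustration, not taken from the paper.

    # Inferred, illustrative classification of selection-bias risk in a CRT
    # from its recruitment chronology (not the authors' published criteria).
    def recruitment_bias_risk(chronology_reported: bool,
                              recruited_before_randomization: bool,
                              recruiter_blinded_to_allocation: bool) -> str:
        if not chronology_reported:
            return "unclear"
        if recruited_before_randomization or recruiter_blinded_to_allocation:
            return "low risk"
        return "potential selection bias"

    print(recruitment_bias_risk(True, True, False))   # low risk
    print(recruitment_bias_risk(True, False, False))  # potential selection bias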
Conclusions
For CRTs, the reporting of ethics committee approval and participant informed consent is less than optimal. Reports should describe whether participants consented for administration of an intervention and/or data collection. Finally, the process of participant recruitment should be fully described (namely, whether participants were informed of the allocation group before being recruited) for a better appraisal of the risk of selection bias.
doi:10.1371/journal.pone.0040436
PMCID: PMC3391275  PMID: 22792319
24.  Development and description of measurement properties of an instrument to assess treatment burden among patients with multiple chronic conditions 
BMC Medicine  2012;10:68.
Background
Patients experience an increasing treatment burden related to everything they do to take care of their health: visits to the doctor, medical tests, treatment management and lifestyle changes. This treatment burden could affect treatment adherence, quality of life and outcomes. We aimed to develop and validate an instrument for measuring treatment burden for patients with multiple chronic conditions.
Methods
Items were derived from a literature review and qualitative semistructured interviews with patients. The instrument was then validated in a sample of patients with chronic conditions recruited in hospitals and general practitioner clinics in France. Factor analysis was used to examine the questionnaire structure. Construct validity was studied by the relationships between the instrument's global score, the Treatment Satisfaction Questionnaire for Medication (TSQM) scores and the complexity of treatment as assessed by patients and physicians. Agreement between patients and physicians was appraised. Reliability was determined by a test-retest method.
Results
A sample of 502 patients completed the Treatment Burden Questionnaire (TBQ), which consisted of 7 items (2 of which had 4 subitems) defined after 22 interviews with patients. The questionnaire showed a unidimensional structure, and Cronbach's α was 0.89. The instrument's global score was negatively correlated with TSQM scores (rs = -0.41 to -0.53) and positively correlated with the complexity of treatment (rs = 0.16 to 0.40). Agreement between patients and physicians (n = 396) was weak (intraclass correlation coefficient 0.38, 95% confidence interval 0.29 to 0.47). Test-retest reliability (n = 211 patients) was 0.76 (0.67 to 0.83).
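The analysis code is not part of the abstract; purely as a sketch with simulated data, two of the statistics reported above (Cronbach's α for internal consistency and a Spearman correlation for construct validity) could be computed as shown below. Sample sizes, scores and names are hypothetical.

    # Illustrative computation of Cronbach's alpha and a Spearman correlation
    # on simulated data (not the study data).
    import numpy as np
    from scipy.stats import spearmanr

    def cronbach_alpha(items):
        # items: 2D array, rows = respondents, columns = questionnaire items
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 11, size=(50, 7))    # 50 simulated patients, 7 items
    print(cronbach_alpha(responses))

    # Construct validity: correlation of the global score with another score.
    tbq_global = responses.sum(axis=1)
    other_score = rng.integers(0, 101, size=50)      # e.g. a simulated satisfaction score
    rho, p_value = spearmanr(tbq_global, other_score)
    print(rho, p_value)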
Conclusions
This study provides the first valid and reliable instrument assessing the treatment burden for patients across any disease or treatment context. This instrument could help in the development of treatment strategies that are both efficient and acceptable for patients.
doi:10.1186/1741-7015-10-68
PMCID: PMC3402984  PMID: 22762722
chronic disease/therapy; patient participation; physician-patient relations; quality of life; questionnaires; workload
25.  Inadequate description of educational interventions in ongoing randomized controlled trials 
Trials  2012;13:63.
Background
The registration of clinical trials has been promoted to prevent publication bias and increase research transparency. Despite general agreement about the minimum amount of information needed for trial registration, we lack clear guidance on descriptions of non-pharmacologic interventions in trial registries. We aimed to evaluate the quality of registry descriptions of non-pharmacologic interventions assessed in ongoing randomized controlled trials (RCTs) of patient education.
Methods
On 6 May 2009, we searched for all ongoing RCTs registered in the 10 trial registries accessible through the World Health Organization International Clinical Trials Registry Platform. We included trials evaluating an educational intervention (that is, designed to teach or train patients about their own health) and dedicated to participants, their family members or home caregivers. We used a standardized data extraction form to collect data related to the description of the experimental intervention, the centers, and the caregivers.
Results
We selected 268 of 642 potentially eligible studies and appraised a random sample of 150 records. All selected trials were registered in 4 registries, mainly ClinicalTrials.gov (61%). The median [interquartile range] target sample size was 205 [100 to 400] patients. The comparator was mainly usual care (47%) or active treatment (47%). A minority of records (17%, 95% CI 11 to 23%) gave an overall adequate description of the intervention (that is, a description reporting the content, mode of delivery, number, frequency and duration of sessions, and overall duration of the intervention). Further, for most records (59%), important information about the content of the intervention was missing. The mode of delivery of the intervention was reported for 52% of studies, the number of sessions for 74%, the frequency of sessions for 58%, the duration of each session for 45% and the overall duration for 63%. Information about the caregivers was missing for 70% of trials. Most trials (73%) took place in the United States or United Kingdom, 64% involved only one centre, and participating centres were mainly tertiary-care, academic or university hospitals (51%).
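The "overall adequate description" criterion quoted above is effectively a six-element checklist (content, mode of delivery, number of sessions, frequency, session duration and overall duration). A minimal, hypothetical sketch of such a completeness check follows; the record fields are invented for illustration.

    # Hypothetical completeness check mirroring the six-element definition above.
    REQUIRED_ELEMENTS = ("content", "mode_of_delivery", "number_of_sessions",
                         "frequency", "session_duration", "overall_duration")

    def adequately_described(record: dict) -> bool:
        # Adequate only if every element is reported (truthy) in the registry record.
        return all(record.get(element) for element in REQUIRED_ELEMENTS)

    record = {"content": True, "mode_of_delivery": True, "number_of_sessions": True,
              "frequency": False, "session_duration": True, "overall_duration": True}
    print(adequately_described(record))  # False: frequency of sessions is missing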
Conclusions
Educational interventions assessed in ongoing RCTs are poorly described in trial registries. This lack of adequate description raises doubts about the ability of trial registration to help patients and researchers learn about the treatments evaluated in trials of patient education.
doi:10.1186/1745-6215-13-63
PMCID: PMC3503701  PMID: 22607344
