1.  The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials 
PLoS Medicine  2012;9(11):e1001346.
The Ottawa Ethics of Cluster Trials Consensus Group sets out 15 recommendations for the ethical design and conduct of cluster randomized trials.
doi:10.1371/journal.pmed.1001346
PMCID: PMC3502500  PMID: 23185138
2.  Growing Literature, Stagnant Science? Systematic Review, Meta-Regression and Cumulative Analysis of Audit and Feedback Interventions in Health Care 
Journal of General Internal Medicine  2014;29(11):1534-1541.
ABSTRACT
BACKGROUND
This paper extends the findings of the Cochrane systematic review of audit and feedback on professional practice to explore the estimate of effect over time and to examine whether new trials have added to knowledge regarding how to optimize the effectiveness of audit and feedback.
METHODS
We searched the Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE for randomized trials of audit and feedback compared to usual care, with objectively measured outcomes assessing compliance with intended professional practice. Two reviewers independently screened articles and abstracted variables related to the intervention, the context, and trial methodology. The median absolute risk difference in compliance with intended professional practice was determined for each study, and adjusted for baseline performance. The effect size across studies was recalculated as studies were added to the cumulative analysis. Meta-regressions were conducted for studies published up to 2002, 2006, and 2010 in which characteristics of the intervention, the recipients, and trial risk of bias were tested as predictors of effect size.
RESULTS
Of the 140 randomized clinical trials (RCTs) included in the Cochrane review, 98 comparisons from 62 studies met the criteria for inclusion. The cumulative analysis indicated that the effect size became stable in 2003, after 51 comparisons from 30 trials (a schematic sketch of this cumulative pooling follows this entry). Cumulative meta-regressions suggested that new trials are contributing little further information regarding the impact of common effect modifiers. Feedback appears most effective when it is delivered by a supervisor or respected colleague, presented frequently, and features both specific goals and action plans; when it aims to decrease the targeted behavior; when baseline performance is lower; and when recipients are non-physicians.
DISCUSSION
There is substantial evidence that audit and feedback can effectively improve quality of care, but little evidence of progress in the field. There are opportunity costs for patients, providers, and health care systems when investigators test quality improvement interventions that do not build upon, or contribute toward, extant knowledge.
doi:10.1007/s11606-014-2913-y
PMCID: PMC4238192  PMID: 24965281
audit and feedback; scientific progress; quality improvement; systematic review; cumulative analysis
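
The cumulative analysis described in the entry above can be pictured as re-estimating the pooled effect each time another trial, ordered by publication year, is added. The Python sketch below is only illustrative: the trial years and risk differences are invented, and a simple unweighted running median stands in for the review's median absolute risk difference adjusted for baseline performance.

# Illustrative cumulative analysis: the pooled effect is re-estimated each
# time another trial (sorted by publication year) is added. The data below
# are made up and do not come from the review.
from statistics import median

trials = [  # (publication_year, absolute risk difference in compliance)
    (1991, 0.07), (1995, 0.02), (1999, 0.10),
    (2003, 0.04), (2007, 0.05), (2010, 0.03),
]

trials.sort(key=lambda t: t[0])
effects = []
for year, rd in trials:
    effects.append(rd)
    pooled = median(effects)  # simple unweighted running estimate
    print(f"up to {year}: {len(effects)} trials, pooled RD = {pooled:.3f}")
# If the running estimate stops moving as trials accumulate (as the review
# found after 2003), later trials are adding little new information.
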
3.  Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments 
PLoS Medicine  2013;10(7):e1001489.
Background
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address.
Methods and Findings
We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria—most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation.
Conclusions
By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The development process for new drugs is lengthy and complex. It begins in the laboratory, where scientists investigate the causes of diseases and identify potential new treatments. Next, promising interventions undergo preclinical research in cells and in animals (in vivo animal experiments) to test whether the intervention has the expected effect and to support the generalization (extension) of this treatment–effect relationship to patients. Drugs that pass these tests then enter clinical trials, where their safety and efficacy are tested in selected groups of patients under strictly controlled conditions. Finally, the government bodies responsible for drug approval review the results of the clinical trials, and successful drugs receive a marketing license, usually a decade or more after the initial laboratory work. Notably, only 11% of agents that enter clinical testing (investigational drugs) are ultimately licensed.
Why Was This Study Done?
The frequent failure of investigational drugs during clinical translation is potentially harmful to trial participants. Moreover, the costs of these failures are passed on to healthcare systems in the form of higher drug prices. It would be good, therefore, to reduce the attrition rate of investigational drugs. One possible explanation for the dismal success rate of clinical translation is that preclinical research, the key resource for justifying clinical development, is flawed. To address this possibility, several groups of preclinical researchers have issued guidelines intended to improve the design and execution of in vivo animal studies. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the authors identify the experimental practices that are commonly recommended in these guidelines and organize these recommendations according to the type of threat to validity (internal, construct, or external) that they address. Internal threats to validity are factors that confound reliable inferences about treatment–effect relationships in preclinical research. For example, experimenter expectation may bias outcome assessment. Construct threats to validity arise when researchers mischaracterize the relationship between an experimental system and the clinical disease it is intended to represent. For example, researchers may use an animal model that captures only one characteristic of a complex, multifaceted clinical disease. External threats to validity are unseen factors that frustrate the transfer of treatment–effect relationships from animal models to patients.
What Did the Researchers Do and Find?
The researchers identified 26 preclinical guidelines that met their predefined eligibility criteria. Twelve guidelines addressed preclinical research for neurological and cerebrovascular drug development; other disorders covered by guidelines included cardiac and circulatory disorders, sepsis, pain, and arthritis. Together, the guidelines offered 55 different recommendations for the design and execution of preclinical in vivo animal studies. Nineteen recommendations addressed threats to internal validity. The most commonly included recommendations of this type called for the use of power calculations to ensure that sample sizes are large enough to yield statistically meaningful results, random allocation of animals to treatment groups, and “blinding” of researchers who assess outcomes to treatment allocation. Among the 25 recommendations that addressed threats to construct validity, the most commonly included recommendations called for characterization of the properties of the animal model before experimentation and matching of the animal model to the human manifestation of the disease. Finally, six recommendations addressed threats to external validity. The most commonly included of these recommendations suggested that preclinical research should be replicated in different models of the same disease and in different species, and should also be replicated independently.
What Do These Findings Mean?
This systematic review identifies a range of investigational recommendations that preclinical researchers believe address threats to the validity of preclinical efficacy studies. Many of these recommendations are not widely implemented in preclinical research at present. Whether the failure to implement them explains the frequent discordance between the results on drug safety and efficacy obtained in preclinical research and in clinical trials is currently unclear. These findings provide a starting point, however, for the improvement of existing preclinical research guidelines for specific diseases, and for the development of similar guidelines for other diseases. They also provide an evidence-based platform for the analysis of preclinical evidence and for the study and evaluation of preclinical research practice. These findings should, therefore, be considered by investigators, institutional review bodies, journals, and funding agents when designing, evaluating, and sponsoring translational research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001489.
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals; its Patient Network provides a step-by-step description of the drug development process that includes information on preclinical research
The UK Medicines and Healthcare Products Regulatory Agency (MHRA) provides information about all aspects of the scientific evaluation and approval of new medicines in the UK; its My Medicine: From Laboratory to Pharmacy Shelf web pages describe the drug development process from scientific discovery, through preclinical and clinical research, to licensing and ongoing monitoring
The STREAM website provides ongoing information about policy, ethics, and practices used in clinical translation of new drugs
The CAMARADES collaboration offers a “supporting framework for groups involved in the systematic review of animal studies” in stroke and other neurological diseases
doi:10.1371/journal.pmed.1001489
PMCID: PMC3720257  PMID: 23935460
4.  Evaluation of a Theory-Informed Implementation Intervention for the Management of Acute Low Back Pain in General Medical Practice: The IMPLEMENT Cluster Randomised Trial 
PLoS ONE  2013;8(6):e65471.
Introduction
This cluster randomised trial evaluated an intervention to decrease x-ray referrals and increase giving advice to stay active for people with acute low back pain (LBP) in general practice.
Methods
General practices were randomised to either access to a guideline for acute LBP (control) or facilitated interactive workshops (intervention). We measured behavioural predictors (e.g. knowledge, attitudes and intentions) and fear-avoidance beliefs. We were unable to recruit sufficient patients to measure our original primary outcomes, so we introduced other outcomes measured at the general practitioner (GP) level: behavioural simulation (clinical decisions about vignettes) and rates of x-ray and CT-scan referral (medical administrative data). All those not involved in the delivery of the intervention were blinded to allocation.
Results
47 practices (53 GPs) were randomised to the control group and 45 practices (59 GPs) to the intervention group. The number of GPs available for analysis at 12 months varied by outcome due to missing confounder information; a minimum of 38 GPs were available from the intervention group and a minimum of 40 GPs from the control group. For the behavioural constructs, although effect estimates were small, intervention-group GPs had greater intention of practising consistently with the guideline for the clinical behaviour of x-ray referral. For behavioural simulation, intervention-group GPs were more likely to adhere to guideline recommendations about x-ray referral (OR 1.76, 95% CI 1.01 to 3.05) and more likely to give advice to stay active (OR 4.49, 95% CI 1.90 to 10.60); an illustrative odds-ratio calculation follows this entry. Imaging referral was not statistically significantly different between groups and the potential importance of the effect was unclear: rate ratio 0.87 (95% CI 0.68 to 1.10) for x-ray or CT-scan.
Conclusions
The intervention led to small changes in GP intention to practice in a manner that is consistent with an evidence-based guideline, but it did not result in statistically significant changes in actual behaviour.
Trial Registration
Australian New Zealand Clinical Trials Registry ACTRN012606000098538
doi:10.1371/journal.pone.0065471
PMCID: PMC3681882  PMID: 23785427
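
As a rough companion to the odds ratios reported in the entry above, the Python sketch below shows how an odds ratio and a Wald-type 95% confidence interval can be computed from a 2×2 table. The counts are hypothetical (the abstract reports only the estimates), and the trial's actual analysis adjusted for clustering and confounders, so this shows only the basic form of the calculation.

# Odds ratio with a Wald 95% CI from a hypothetical 2x2 table.
# Counts are invented for illustration; the published analysis additionally
# accounted for clustering of GPs within practices and for confounders.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = adherent/non-adherent in intervention; c/d = same in control."""
    or_est = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(or_est) - z * se_log_or)
    hi = exp(log(or_est) + z * se_log_or)
    return or_est, lo, hi

or_est, lo, hi = odds_ratio_ci(a=40, b=19, c=30, d=23)  # hypothetical counts
print(f"OR = {or_est:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
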
5.  Correction: Diabetes Care Provision in UK Primary Care Practices 
PLoS ONE  2012;7(8):10.1371/annotation/1957ad3b-e192-4faa-bf4c-5dce22c5560e.
doi:10.1371/annotation/1957ad3b-e192-4faa-bf4c-5dce22c5560e
PMCID: PMC3414526
6.  Diabetes Care Provision in UK Primary Care Practices 
PLoS ONE  2012;7(7):e41562.
Background
Although most people with Type 2 diabetes receive their diabetes care in primary care, relatively little is known about the quality of diabetes care in this setting. We investigated the provision and receipt of diabetes care delivered in UK primary care.
Methods
Postal surveys with all healthcare professionals and a random sample of 100 patients with Type 2 diabetes from 99 UK primary care practices.
Results
326/361 (90.3%) doctors, 163/186 (87.6%) nurses and 3591 patients (41.8%) returned a questionnaire. Clinicians reported giving advice about lifestyle behaviours (e.g. 88% would routinely advise about calorie restriction; 99.6% about increasing exercise) more often than patients reported having received it (43% and 42%, respectively), and correlations between clinician and patient reports were low. Patients’ reported levels of confidence about managing their diabetes were moderately high, with a median (range) of 21% (3% to 39%) of patients reporting that they were not confident about various areas of diabetes self-management.
Conclusions
Primary care practices have organisational structures in place and are, as judged by routine quality indicators, delivering high-quality care. However, evidence-practice gaps remain in the care provided and in the self-confidence that patients have for key aspects of self-management, and further research is needed to address these issues. Future research should use robust, appropriately designed studies to investigate how best to improve this situation.
doi:10.1371/journal.pone.0041562
PMCID: PMC3408463  PMID: 22859997
7.  Strengthening the Reporting of Genetic Risk Prediction Studies (GRIPS): Explanation and Elaboration 
European Journal of Epidemiology  2011;26(4):313-337.
The rapid and continuing progress in gene discovery for complex diseases is fuelling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but they vary widely in completeness of reporting and apparent quality. Transparent reporting of the strengths and weaknesses of these studies is important to facilitate the accumulation of evidence on genetic risk prediction. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency, quality and completeness of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct or analysis.
doi:10.1007/s10654-011-9551-z
PMCID: PMC3088812  PMID: 21424820
8.  Strengthening the reporting of genetic risk prediction studies (GRIPS): explanation and elaboration 
European Journal of Epidemiology  2011;26(4):313-337.
The rapid and continuing progress in gene discovery for complex diseases is fuelling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but they vary widely in completeness of reporting and apparent quality. Transparent reporting of the strengths and weaknesses of these studies is important to facilitate the accumulation of evidence on genetic risk prediction. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency, quality and completeness of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct or analysis.
doi:10.1007/s10654-011-9551-z
PMCID: PMC3088812  PMID: 21424820
Genetic; Risk prediction; Methodology; Guidelines; Reporting
9.  A prospective cluster-randomized trial to implement the Canadian CT Head Rule in emergency departments 
Background
The Canadian CT Head Rule was developed to allow physicians to be more selective when ordering computed tomography (CT) imaging for patients with minor head injury. We sought to evaluate the effectiveness of implementing this validated decision rule at multiple emergency departments.
Methods
We conducted a matched-pair cluster-randomized trial that compared the outcomes of 4531 patients with minor head injury during two 12-month periods (before and after) at hospital emergency departments in Canada, six of which were randomly allocated as intervention sites and six as control sites. At the intervention sites, active strategies, including education, changes to policy and real-time reminders on radiologic requisitions, were used to implement the Canadian CT Head Rule. The main outcome measure was referral for CT scan of the head.
Results
Baseline characteristics of patients were similar at the control and intervention sites. At the intervention sites, the proportion of patients referred for CT imaging increased from the “before” period (62.8%) to the “after” period (76.2%) (difference +13.3%, 95% CI 9.7%–17.0%). At the control sites, the proportion referred for CT imaging also increased, from 67.5% to 74.1% (difference +6.7%, 95% CI 2.6%–10.8%); an illustrative calculation of such a difference in proportions follows this entry. The change in mean imaging rates from the “before” period to the “after” period did not differ significantly between intervention and control hospitals (p = 0.16). There were no missed brain injuries or adverse outcomes.
Interpretation
Our knowledge-translation-based trial of the Canadian CT Head Rule did not reduce rates of CT imaging in Canadian emergency departments. Future studies should identify strategies to deal with barriers to implementation of this decision rule and explore more effective approaches to knowledge translation. (ClinicalTrials.gov trial register no. NCT00993252)
doi:10.1503/cmaj.091974
PMCID: PMC2950184  PMID: 20732978
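
The before/after comparisons in the entry above are differences in proportions with confidence intervals. The Python sketch below shows one common way such a difference and a Wald 95% CI can be computed; the patient counts are hypothetical (the abstract gives only percentages), and this naive interval ignores the matched-pair cluster design that the published analysis accounted for.

# Difference in proportions (risk difference) with a naive Wald 95% CI.
# Counts are hypothetical; the actual trial analysis accounted for the
# matched-pair cluster-randomized design, which changes the interval.
from math import sqrt

def risk_difference_ci(x_before, n_before, x_after, n_after, z=1.96):
    """Return the (after - before) proportion difference with a Wald CI."""
    p1, p2 = x_before / n_before, x_after / n_after
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n_before + p2 * (1 - p2) / n_after)
    return diff, diff - z * se, diff + z * se

# e.g. CT referrals at intervention sites, before vs. after (invented counts)
diff, lo, hi = risk_difference_ci(x_before=700, n_before=1115,
                                  x_after=855, n_after=1122)
print(f"difference = {diff:+.1%}, 95% CI {lo:.1%} to {hi:.1%}")
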
10.  External Validation of a Measurement Tool to Assess Systematic Reviews (AMSTAR) 
PLoS ONE  2007;2(12):e1350.
Background
Thousands of systematic reviews have been conducted in all areas of health care. However, the methodological quality of these reviews is variable and should routinely be appraised. AMSTAR is a measurement tool to assess systematic reviews.
Methodology
AMSTAR was used to appraise 42 reviews focusing on therapies to treat gastro-esophageal reflux disease, peptic ulcer disease, and other acid-related diseases. Two assessors applied AMSTAR to each review. Two other assessors, plus a clinician and/or methodologist, independently applied a global assessment to each review.
Conclusions
The sample of 42 reviews covered a wide range of methodological quality. The overall scores on AMSTAR ranged from 0 to 10 (out of a maximum of 11), with a mean of 4.6 (95% CI: 3.7 to 5.6) and a median of 4.0 (range 2.0 to 6.0). Inter-observer agreement on the individual items ranged from moderate to almost perfect. Nine items scored a kappa of >0.75 (95% CI: 0.55 to 0.96). The reliability of the total AMSTAR score was excellent: kappa 0.84 (95% CI: 0.67 to 1.00) and Pearson's R 0.96 (95% CI: 0.92 to 0.98). The overall scores for the global assessment ranged from 2 to 7 (out of a maximum of 7), with a mean of 4.43 (95% CI: 3.6 to 5.3) and a median of 4.0 (range 2.25 to 5.75). Agreement on the global assessment was lower, with a kappa of 0.63 (95% CI: 0.40 to 0.88). Construct validity was shown by the convergence of AMSTAR with the results of the global assessment: Pearson's R 0.72 (95% CI: 0.53 to 0.84). For the AMSTAR total score, the limits of agreement were −0.19±1.38, which translates to a minimum detectable difference between reviews of 0.64 ‘AMSTAR points’ (a sketch of this agreement calculation follows this entry). Further validation of AMSTAR is needed to assess its validity, reliability and perceived utility by appraisers and end users of reviews across a broader range of systematic reviews.
doi:10.1371/journal.pone.0001350
PMCID: PMC2131785  PMID: 18159233
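
Below is a minimal Python sketch of how limits of agreement, and a minimum detectable difference derived from them, could be computed from two assessors' total scores. The paired scores are invented, not the study data, and the exact multiplier behind the paper's figure of 0.64 is not stated in the abstract, so the final step is only one common convention.

# Bland-Altman-style limits of agreement for two assessors' total scores.
# The paired scores below are invented for illustration only.
from math import sqrt
from statistics import mean, stdev

assessor_a = [4, 6, 3, 8, 5, 2, 7, 4, 5, 6]
assessor_b = [4, 5, 3, 9, 5, 3, 7, 5, 4, 6]

diffs = [a - b for a, b in zip(assessor_a, assessor_b)]
bias = mean(diffs)            # mean difference between assessors
sd_diff = stdev(diffs)        # SD of the paired differences
half_width = 1.96 * sd_diff   # half-width of the 95% limits of agreement

# One convention for a minimum detectable difference scales the standard
# error of measurement (SD of differences / sqrt(2)) by a chosen z value;
# the multiplier used in the AMSTAR paper is not given in the abstract.
sem = sd_diff / sqrt(2)
mdd = 1.28 * sem              # e.g. a one-sided 90% criterion (illustrative)

print(f"limits of agreement: {bias:.2f} ± {half_width:.2f}")
print(f"illustrative minimum detectable difference ≈ {mdd:.2f} points")
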
