Randomized controlled trials (RCTs) that are inappropriately designed or executed may provide biased findings and mislead clinical practice. In view of recent interest in the treatment and prevention of thrombotic complications in cancer patients, we evaluated the characteristics, risk of bias, and their time trends in RCTs of anticoagulation in patients with cancer.
We conducted a comprehensive search, including a search of four electronic databases (MEDLINE, EMBASE, ISI Web of Science, and CENTRAL), up to February 2010. We included RCTs in which the intervention and/or comparison consisted of vitamin K antagonists, unfractionated heparin (UFH), low molecular weight heparin (LMWH), direct thrombin inhibitors, or fondaparinux. We performed descriptive analyses and assessed the association between the variables of interest and the year of publication.
We included 67 RCTs with 24,071 participants. In 21 trials (31%), DVT diagnosis was triggered by clinical suspicion; the remaining trials either screened for DVT or were unclear about their approach. Major bleeding, minor bleeding, and thrombocytopenia were reported by 41 (61%), 22 (33%), and 11 (16%) trials, respectively. The percentages of trials satisfying risk of bias criteria were: adequate sequence generation (85%), adequate allocation concealment (61%), participants' blinding (39%), data collectors' blinding (44%), providers' blinding (41%), outcome assessors' blinding (75%), data analysts' blinding (15%), intention-to-treat analysis (57%), no selective outcome reporting (12%), and no stopping early for benefit (97%). The mean follow-up rate was 96%. Adequate allocation concealment and the reporting of intention-to-treat analysis were the only two quality criteria that improved over time.
Many RCTs of anticoagulation in patients with cancer appear to use insufficiently rigorous outcome assessment methods and to have deficiencies in key methodological features. It is not clear whether this reflects a problem in the design, the conduct, or the reporting of these trials, or some combination of these. Future trials should avoid the shortcomings described in this article.
China is experiencing increased health care use and expenditures, without sufficient controls to ensure quality and value. Transparent, cost-conscious, and patient-centered guidelines based on the best available evidence could help establish these quality and practice measures.
We examined how guidelines could support the Chinese health reform. Specifically, we summarized the current state of the art and related challenges in guideline development and explored possible solutions in the context of the Chinese health reform.
China currently lacks the capacity for evidence-based guideline development and coordination by a central agency. Most Chinese guideline users rely on recommendations developed by professional groups that do not demonstrate transparency (including conflict of interest management and evidence synthesis) or quality. These deficiencies appear larger than in other regions of the world. In addition, there are misperceptions about the role of guidelines in assisting practitioners, as opposed to providing rules requiring adherence, as well as a perception that traditional Chinese medicine (TCM) cannot be appropriately incorporated into guidelines.
China’s capacity could be strengthened by a central guideline agency to provide or coordinate evidence synthesis for guideline development and to oversee the work of guideline developers. China can build on what is known and work with the international community to develop methods to meet the challenges of evidence-based guideline development.
Venous thromboembolism (VTE) is a common preventable cause of mortality in hospitalized medical patients. Despite rigorous randomized trials generating strong recommendations for anticoagulant use to prevent VTE, nearly 40% of medical patients receive inappropriate thromboprophylaxis. Knowledge-translation strategies are needed to bridge this gap.
We conducted a 16-week pilot cluster randomized controlled trial (RCT) to determine, through the use of a multicomponent knowledge-translation intervention, the proportion of medical patients who were appropriately managed for thromboprophylaxis (according to the American College of Chest Physicians guidelines) within 24 hours of admission. Our primary goal was to determine the feasibility of conducting this study on a larger scale. The intervention comprised clinician education, a paper-based VTE risk assessment algorithm, preprinted physicians' orders, and audit and feedback sessions. Medical wards at six hospitals (representing clusters) in Ontario, Canada were included; three were randomized to the multicomponent intervention and three to usual care (i.e., no active strategies for thromboprophylaxis in place). Blinding was not used.
A total of 2,611 patients (1,154 in the intervention and 1,457 in the control group) were eligible and included in the analysis. This multicomponent intervention did not lead to a significant difference in appropriate VTE prophylaxis rates between intervention and control hospitals (appropriate management rate odds ratio = 0.80; 95% confidence interval: 0.50, 1.28; p = 0.36; intra-class correlation coefficient: 0.022), and thus was not considered feasible. Major barriers to effective knowledge translation were poor attendance by clinical staff at education and feedback sessions, difficulty locating preprinted orders, and lack of involvement by clinical and administrative leaders. We identified several factors that may increase uptake of a VTE prophylaxis strategy, including local champions, support from clinical and administrative leaders, mandatory use, and a simple, clinically relevant risk assessment tool.
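As context for the reported intra-class correlation coefficient, clustering of patients within hospitals reduces the effective sample size. A rough illustration (our own calculation, assuming approximately equal cluster sizes; not a figure reported by the trial):

\[
\mathrm{DE} = 1 + (\bar{m} - 1)\,\rho \approx 1 + \left(\frac{2611}{6} - 1\right)\times 0.022 \approx 10.6
\]

Under these assumptions, the 2,611 enrolled patients carry roughly the statistical information of 2611/10.6, or about 250, independently randomized patients, which is one reason cluster trials of this size can be underpowered.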
Hospitals allocated to our multicomponent intervention did not have a higher rate of medical inpatients appropriately managed for thromboprophylaxis than did hospitals that were not allocated to this strategy.
Thromboprophylaxis; Medical patients; Anticoagulants; Venous thromboembolism; Cluster randomization; Standard orders
Health care professionals worldwide attend courses and workshops to learn evidence-based medicine (EBM), but evidence regarding the impact of these educational interventions is conflicting, of low methodologic quality, and lacking in generalizability. Furthermore, little is known about the determinants of success. We sought to measure the effect of EBM short courses and workshops on knowledge and to identify course and learner characteristics associated with knowledge acquisition.
Health care professionals with varying expertise in EBM participated in an international, multicentre before–after study. The intervention consisted of short courses and workshops on EBM offered in diverse settings, formats and intensities. The primary outcome measure was the score on the Berlin Questionnaire, a validated instrument measuring EBM knowledge that the participants completed before and after the course.
A total of 15 centres participated in the study, and 420 learners from North America and Europe completed it. The mean baseline score across courses was 7.49 points (range 3.97–10.42 points) out of a possible 15 points. The average increase in score was 1.40 points (95% confidence interval 0.48–2.31 points), corresponding to an effect size of 0.44 standard deviation units. Greater improvement in scores was associated (in order of greatest to least magnitude) with active participation required of the learners, a separate statistics session, fewer topics, less teaching time, fewer learners per tutor, larger overall course size and smaller group size. Clinicians and learners involved in medical publishing improved their score more than other types of learners; administrators and public health professionals improved their score less. Learners who perceived themselves to have an advanced knowledge of EBM and who had prior experience as an EBM tutor also showed greater improvement than those who did not.
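For orientation, if the reported effect size was computed as the mean change divided by the standard deviation of Berlin Questionnaire scores (our assumption; the authors may have used a different denominator), the implied SD is about 3.2 points:

\[
d = \frac{\bar{\Delta}}{\mathrm{SD}} \;\Rightarrow\; \mathrm{SD} \approx \frac{1.40}{0.44} \approx 3.2 \text{ points on the 15-point scale.}
\]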
EBM course organizers who wish to optimize knowledge gain should require learners to actively participate in the course and should consider focusing on a small number of topics, giving particular attention to statistical concepts.
Many academic medical centres have introduced strategies to assess the productivity of faculty as part of compensation schemes. We conducted a systematic review of the effects of such strategies on faculty productivity.
We searched the MEDLINE, Healthstar, Embase and PsycInfo databases from their date of inception up to October 2011. We included studies that assessed academic productivity in clinical, research, teaching and administrative activities, as well as compensation, promotion processes and satisfaction.
Of 531 full-text articles assessed for eligibility, we included nine articles reporting on eight studies. The introduction of strategies for assessing academic productivity as part of compensation schemes resulted in increases in clinical productivity (in six of six studies) in terms of clinical revenue, the work component of relative-value units (nonmonetary standard units of measure used to indicate the value of services provided), patient satisfaction, and other departmentally used standards. Increases in research productivity were noted (in five of six studies) in terms of funding and publications. There was no change in teaching productivity (in two of five studies) in terms of educational output. Such strategies also resulted in increases in compensation at both the individual and group levels (in three studies), with two studies reporting a change in the distribution of compensation in favour of junior faculty. None of the studies assessed effects on administrative productivity or promotion processes. The overall quality of evidence was low.
Strategies introduced to assess productivity as part of a compensation scheme appeared to improve productivity in research activities and possibly improved clinical productivity, but they had no effect in the area of teaching. Compensation increased at both group and individual levels, particularly among junior faculty. Higher quality evidence about the benefits and harms of such assessment strategies is needed.
Clinical practice guidelines are one of the foundations of efforts to improve healthcare. In 1999, we authored a paper about methods to develop guidelines. Since it was published, the methods of guideline development have progressed both in terms of methods and necessary procedures and the context for guideline development has changed with the emergence of guideline clearinghouses and large scale guideline production organisations (such as the UK National Institute for Health and Clinical Excellence). It therefore seems timely to, in a series of three articles, update and extend our earlier paper. In this second paper, we discuss issues of identifying and synthesizing evidence: deciding what type of evidence and outcomes to include in guidelines; integrating values into a guideline; incorporating economic considerations; synthesis, grading, and presentation of evidence; and moving from evidence to recommendations.
Clinical practice guidelines are one of the foundations of efforts to improve health care. In 1999, we authored a paper about methods to develop guidelines. Since it was published, the methods of guideline development have progressed both in terms of methods and necessary procedures and the context for guideline development has changed with the emergence of guideline clearing houses and large scale guideline production organisations (such as the UK National Institute for Health and Clinical Excellence). It therefore seems timely to, in a series of three articles, update and extend our earlier paper. In this third paper we discuss the issues of reviewing, reporting, and publishing guidelines; updating guidelines; and two emerging issues: enhancing guideline implementability and how guideline developers should deal with co-morbid conditions among the patients who will be the subject of guidelines.
Clinical practice guidelines are one of the foundations of efforts to improve health care. In 1999, we authored a paper about methods to develop guidelines. Since it was published, the methods of guideline development have progressed both in terms of methods and necessary procedures and the context for guideline development has changed with the emergence of guideline clearing houses and large scale guideline production organisations (such as the UK National Institute for Health and Clinical Excellence). It therefore seems timely to, in a series of three articles, update and extend our earlier paper. In this first paper we discuss: the target audience(s) for guidelines and their use of guidelines; identifying topics for guidelines; guideline group composition (including consumer involvement); the processes by which guideline groups function; and the important procedural issue of managing conflicts of interest in guideline development.
Accurate diagnosis is a fundamental aspect of appropriate healthcare. However, clinicians need guidance when implementing diagnostic tests, given the number of tests available and the resource constraints in healthcare. Health practitioners often feel compelled to implement recommendations in guidelines, including recommendations about the use of diagnostic tests. However, guideline panels' understanding of diagnostic tests and the methodology for developing recommendations about them remain far from completely explored. Therefore, we evaluated the factors that guideline developers and users need to consider for the development of implementable recommendations about diagnostic tests.
Using a critical analysis of the process, we present the results of a case study using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to develop a clinical practice guideline for the diagnosis of cow's milk allergy with the World Allergy Organization.
To ensure that guideline panels can develop informed recommendations about diagnostic tests, it appears that more emphasis needs to be placed on group processes, including question formulation, defining patient-important outcomes for diagnostic tests, and summarizing evidence. Explicit consideration of concepts of diagnosis from evidence-based medicine, such as pre-test probability and treatment threshold, is required to facilitate the work of a guideline panel and to formulate implementable recommendations.
This case study provides useful guidance for guideline developers and clinicians about what they ought to demand from clinical practice guidelines to facilitate implementation and strengthen confidence in recommendations about diagnostic tests. Applying a structured framework like the GRADE approach, with its requirement for transparency in the description of the evidence and the factors that influence recommendations, facilitates laying out the process and decision factors required for the development, interpretation, and implementation of recommendations about diagnostic tests.
Guideline panellists have differing opinions on whether resource use should influence decisions on individual patients. As medical care costs rise, resource use considerations become more compelling, but panellists may find dealing with such considerations challenging.
The GRADE system can be used to grade the quality of evidence and strength of recommendations for diagnostic tests or strategies. This article explains how patient-important outcomes are taken into account in this process.
The GRADE system classifies recommendations made in guidelines as either strong or weak. This article explores the meaning of these descriptions and their implications for patients, clinicians, and policy makers.
Guideline developers use a bewildering variety of systems to rate the quality of the evidence underlying their recommendations. Some are facile, some confused, and others sophisticated but complex.
Guidelines are inconsistent in how they rate the quality of evidence and the strength of recommendations. This article explores the advantages of the GRADE system, which is increasingly being adopted by organisations worldwide.
Lower urinary melatonin levels are associated with a higher risk of breast cancer in postmenopausal women. Literature for premenopausal women is scant and inconsistent.
In a prospective case–control study, we measured the concentration of 6-sulphatoxymelatonin (aMT6s) in the 12-hour overnight urine of 180 premenopausal women with incident breast cancer and 683 matched controls.
In logistic regression models, the multivariate odds ratio (OR) of invasive breast cancer for women in the highest quartile of total overnight aMT6s output compared with the lowest was 1.43 [95% confidence interval (CI) = 0.83–2.45; Ptrend = 0.03]. Among current non-smokers, no association was evident (OR = 1.00; 95% CI = 0.52–1.94; Ptrend = 0.29). The OR was 0.68 among women with invasive breast cancer diagnosed more than 2 years after urine collection, and there was a significant inverse association among women diagnosed more than 8 years after urine collection (OR = 0.17; 95% CI = 0.04–0.71; Ptrend = 0.01). There were no important variations in ORs by tumor stage or hormone receptor status of breast tumors.
Overall, we observed a positive association between aMT6s and risk of breast cancer. However, there was some evidence to suggest that this might be driven by the influence of subclinical disease on melatonin levels, with a possible inverse association among women diagnosed further from recruitment. Thus, the influence of lag time on the association between melatonin and breast cancer risk needs to be evaluated in further studies.
melatonin; aMT6s; premenopausal; night work; breast cancer
Overactive bladder (OAB) affects the lives of millions of people worldwide, and antimuscarinics are the pharmacological treatment of choice. Meta-analyses of all currently used antimuscarinics for treating OAB found similar efficacy, making the choice dependent on their adverse event profiles. However, conventional meta-analyses often fail to quantify and compare adverse events across different drugs, dosages, formulations, and routes of administration, and their assessment of the broad variety of adverse events is unsatisfactory. Our aim was to compare the adverse events of antimuscarinics using a network meta-analytic approach that overcomes the shortcomings of conventional analyses.
The Cochrane Incontinence Group Specialized Trials Register, previous systematic reviews, conference abstracts, book chapters, and reference lists of relevant articles were searched. Eligible studies were randomized controlled trials comparing at least one antimuscarinic for treating OAB with placebo or with another antimuscarinic and reporting adverse events as outcome measures. Two authors independently extracted data. A network meta-analytic approach was applied, allowing for joint assessment of all adverse events of all currently used antimuscarinics while fully maintaining randomization.
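For readers unfamiliar with the method, the building block that lets a network meta-analysis compare two drugs A and B that were never trialled head-to-head, while preserving within-trial randomization, is the adjusted indirect comparison through a common comparator P (e.g., placebo); the actual model used in this review may be a more elaborate multivariate version of this identity:

\[
\log \mathrm{OR}_{AB} = \log \mathrm{OR}_{AP} - \log \mathrm{OR}_{BP}, \qquad
\operatorname{Var}\!\left(\log \mathrm{OR}_{AB}\right) = \operatorname{Var}\!\left(\log \mathrm{OR}_{AP}\right) + \operatorname{Var}\!\left(\log \mathrm{OR}_{BP}\right).
\]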
69 trials enrolling 26,229 patients were included. Similar overall adverse event profiles were found for darifenacin, fesoterodine, transdermal oxybutynin, propiverine, solifenacin, tolterodine, and trospium chloride, but not for orally administered oxybutynin, when currently used starting dosages were compared.
The proposed, generally applicable, transparent network meta-analytic approach summarizes adverse events in an easy-to-grasp way, allowing straightforward benchmarking of antimuscarinics for treating OAB in clinical practice. Most currently used antimuscarinics seem to be equivalent first-choice drugs for starting treatment of OAB, except for oral oxybutynin at dosages of ≥10 mg/d, which may have a more unfavorable adverse event profile.
Systematic reviews of randomized trials that include measurements of health-related quality of life potentially provide critical information for patients and clinicians facing challenging health care decisions. When, as is most often the case, individual randomized trials use different measurement instruments for the same construct (such as physical or emotional function), authors typically report differences between intervention and control in standard deviation units (the so-called "standardized mean difference" or "effect size"). This approach has statistical limitations (it is influenced by the heterogeneity of the population) and is non-intuitive for decision makers. We suggest an alternative approach: reporting results in minimal important difference units (the smallest difference patients experience as important). This approach provides a potential solution to both the statistical and interpretational problems of existing methods.
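To make the contrast concrete, the two standardizations can be written side by side (generic notation, not drawn from any particular review):

\[
\mathrm{SMD} = \frac{\bar{x}_{T} - \bar{x}_{C}}{\mathrm{SD}_{\mathrm{pooled}}}
\qquad \text{versus} \qquad
\Delta_{\mathrm{MID}} = \frac{\bar{x}_{T} - \bar{x}_{C}}{\mathrm{MID}}.
\]

Because the pooled SD grows with the heterogeneity of the enrolled population, the same absolute treatment effect yields a smaller SMD in a more heterogeneous sample; dividing by the MID instead anchors the result to a quantity patients experience as important.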
In the last few years, a new non-pharmacological treatment, termed apheresis, has been developed to lessen the burden of ulcerative colitis (UC). Several methods can be used to establish treatment recommendations, but over the last decade an informal collaboration of guideline developers, methodologists, and clinicians has developed a more sensible and transparent approach known as the Grading of Recommendations Assessment, Development and Evaluation (GRADE). GRADE has mainly been used in clinical practice guidelines and systematic reviews. The aim of the present study is to describe the use of this approach in the development of recommendations for a new health technology, and to analyse the strengths, weaknesses, opportunities, and threats found when doing so.
A systematic review of the use of apheresis for UC treatment was performed in June 2004 and updated in May 2008. Two related clinical questions were selected, the outcomes of interest were defined, and the quality of the evidence was assessed. Finally, the overall quality of the evidence for each question was taken into account to formulate recommendations following the GRADE approach. To evaluate this experience, a SWOT (strengths, weaknesses, opportunities, and threats) analysis was performed to enable a comparison with our previous experience with the SIGN (Scottish Intercollegiate Guidelines Network) method.
Application of the GRADE approach allowed recommendations to be formulated and made the method clearer, more explicit, and more transparent. Two weak recommendations were proposed to answer the formulated questions. Some challenges were identified, such as the limited number of studies found for the new technology and the difficulties encountered when searching for results for the selected outcomes; none of these are specific to GRADE. GRADE was considered a more time-consuming method, although it has the advantage of taking patient values into account when defining and grading the relevant outcomes, thereby avoiding undue influence from literature precedents; this could be considered a strength of the method.
The GRADE approach could be appropriate for making the recommendation development process for Health Technology Assessment (HTA) reports more explicit, especially with regard to new technologies.
To systematically review the medical literature to assess the effect of geriatric educational games on the satisfaction, knowledge, beliefs, attitudes and behaviors of health care professionals.
We conducted a systematic review following the Cochrane Collaboration methodology, including a search of 10 electronic databases. We included randomized controlled trials (RCTs) and controlled clinical trials (CCTs) and excluded single-arm studies. The population of interest consisted of members (practitioners or students) of the health care professions. The outcomes of interest were participants' satisfaction, knowledge, beliefs, attitudes, and behaviors.
We included 8 studies evaluating 5 geriatric role-playing games, all conducted in the United States. All studies suffered from one or more methodological limitations, but the overall quality of evidence was acceptable. None of the studies assessed the effects of the games on beliefs or behaviors. None of the 8 studies reported a statistically significant difference between the 2 groups in terms of change in attitude. One study assessed the impact on knowledge and found no statistically significant difference between the 2 groups. Two studies found high levels of satisfaction among participants. We did not conduct the planned meta-analysis because the included studies either reported no statistical data or reported different summary statistics.
The available evidence does not support the use of role-playing interventions in geriatric medical education with the aim of improving attitudes towards the elderly.
The evidence supporting the effectiveness of educational games in graduate medical education is limited. Anecdotal reports suggest their popularity in that setting. The objective of this study was to explore the support for and the different aspects of use of educational games in family medicine and internal medicine residency programs in the United States.
We conducted a survey of family medicine and internal medicine residency program directors in the United States. The questionnaire asked the program directors whether they supported the use of educational games, whether they actually used games in their programs, and the types of games being used and the purposes of that use.
Of 434 responding program directors (52% response rate), 92% were in support of the use of games as an educational strategy, and 80% reported already using them in their programs. Jeopardy-like games were the most frequently used (78%). The use of games was equally popular in family medicine and internal medicine residency programs, and popularity was inversely associated with having more than 75% of residents in the program be International Medical Graduates. The percentages of program directors who reported using educational games as teaching tools, review tools, and evaluation tools were 62%, 47%, and 4%, respectively.
Given the widespread use of educational games in the training of medical residents, in spite of limited evidence for their efficacy, the best approaches to using educational games should be evaluated further.
It is arguable that modification of diet, given its potential for positive health outcomes, should be widely advocated and adopted. However, food intake is a basic human need, and its modification may be accompanied by sensations of both pleasure and despondency and may consequently affect quality of life (QoL). Thus, the feasibility and success of dietary changes will depend, at least partly, on whether potential negative influences on QoL can be avoided. This is of particular importance in the context of dietary intervention studies and in the development of new food products to improve health and well-being. Instruments to measure the impact of nutrition on quality of life in the general population, however, are few and far between. Therefore, the aim of this project was to develop an instrument for measuring QoL related to nutrition in the general population.
Methods and Results
We recruited participants from the general population and followed standard methodology for quality of life instrument development (identification of population; item selection, n = 24; item reduction, n = 81; item presentation, n = 12; pretesting of questionnaire and initial validation, n = 2576; construct validation, n = 128; and test-retest reliability, n = 20). Of 187 initial items, 29 were selected for final presentation. Factor analysis revealed an instrument with 5 domains. The instrument demonstrated good cross-sectional divergent and convergent construct validity when correlated with the scores of the 8 domains of the SF-36 (correlations ranged from -0.078 to 0.562; 19 of the 40 tested correlations were statistically significant, and 24 correlations were predicted correctly) and good test-retest reliability (intra-class correlation coefficients ranging from 0.71 for symptoms to 0.90).
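As a reminder of what the test-retest statistic measures, the intra-class correlation can be read as the proportion of total score variance attributable to true differences between respondents rather than to measurement error (a generic definition; the paper's specific estimator is not stated in the abstract):

\[
\mathrm{ICC} = \frac{\sigma^{2}_{\mathrm{between\text{-}subject}}}{\sigma^{2}_{\mathrm{between\text{-}subject}} + \sigma^{2}_{\mathrm{error}}}.
\]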
We developed and validated an instrument with 29 items across 5 domains to assess quality of life related to nutrition and other aspects of food intake. The instrument demonstrated good face and construct validity as well as good reliability. Future work will focus on the evaluation of longitudinal construct validity and responsiveness.
We conducted an Internet-based randomized trial comparing three valence framing presentations of the benefits of antihypertensive medication in preventing cardiovascular disease (CVD) for people with newly diagnosed hypertension to determine which framing presentation resulted in choices most consistent with participants' values.
Methods and Findings
In this second in a series of televised trials conducted in cooperation with the Norwegian Broadcasting Company, adult volunteers rated the relative importance of the consequences of taking antihypertensive medication using visual analogue scales (VAS). Participants viewed the information (or no information) to which they were randomized and decided whether or not to take medication. We compared positive framing over 10 years (the number escaping CVD per 1000), negative framing over 10 years (the number that will have CVD), and negative framing per year over 10 years of the effects of antihypertensive medication on the 10-year risk of CVD for a 40-year-old man with newly diagnosed hypertension and no other risk factors. Finally, all participants were shown all presentations and detailed patient information about hypertension and were asked to decide again. We calculated a relative importance score (RIS) by subtracting the VAS scores for the undesirable consequences of antihypertensive medication from the VAS score for the benefit of CVD risk reduction. We used logistic regression to determine the association between participants' RIS and their choice. In total, 1,528 participants completed the study. The statistically significant differences between the groups in the likelihood of choosing to take antihypertensive medication in relation to participants' values increased as the RIS increased. Positively framed information led to decisions most consistent with those made by everyone for the second, more fully informed decision. There was a statistically significant decrease in deciding to take antihypertensives at the second decision, both within groups and overall.
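In symbols, the relative importance score described above is (our notation; the abstract does not specify whether multiple undesirable-consequence ratings were summed or averaged):

\[
\mathrm{RIS} = \mathrm{VAS}_{\mathrm{benefit}} - \sum_{i} \mathrm{VAS}_{\mathrm{harm},\,i},
\]

so a positive RIS indicates that a participant values the CVD risk reduction more than the downsides of taking medication, and the logistic regression tests whether a higher RIS predicts choosing medication.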
For decisions about taking antihypertensive medication among people with a relatively low baseline risk of CVD (70 per 1000 over 10 years), both positive and negative framing resulted in significantly more people deciding to take medication than decided to do so after being shown all three presentations.
International Standard Randomised Controlled Trial Number Register ISRCTN 33771631
To investigate prostate cancer (Pca) risk in relation to estrogen metabolism, expressed as urinary 2-hydroxyestrone (2-OHE1), 16α-hydroxyestrone (16α-OHE1) and 2-OHE1 to 16α-OHE1 ratio.
We conducted a case-control study within the Western New York Health Cohort Study (WNYHCS) from 1996 to 2001. From January 2003 through September 2004, we completed the recall and follow-up of 1092 cohort participants. Cases (n = 26) and controls (n = 110) were matched on age, race, and recruitment period according to a 1:4 ratio. We used unconditional logistic regression to compute crude and adjusted odds ratios (OR) and 95% confidence intervals (CI) of Pca in relation to 2-OHE1, 16α-OHE1, and the 2-OHE1 to 16α-OHE1 ratio by tertiles of urine concentrations (stored in a biorepository for an average of 4 years). We identified age, race, education, and body mass index as covariates. We also conducted a systematic review of the literature, which revealed no additional studies, but we pooled the results from this study with those from a previously conducted case-control study using the DerSimonian-Laird random effects method.
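For illustration, a minimal sketch of the DerSimonian-Laird pooling named above, written in Python; the input values are hypothetical placeholders, not the studies' actual data:

import math

def dersimonian_laird(log_ors, variances):
    """Pool per-study log odds ratios with the DerSimonian-Laird
    random-effects method; returns the pooled OR and its 95% CI."""
    k = len(log_ors)
    w = [1.0 / v for v in variances]  # inverse-variance (fixed-effect) weights
    y_bar = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - y_bar) ** 2 for wi, yi in zip(w, log_ors))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Hypothetical example: two studies' log ORs with their variances.
pooled_or, ci = dersimonian_laird([math.log(1.8), math.log(1.9)], [0.25, 0.20])
print(f"pooled OR = {pooled_or:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")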
We observed a non-significant risk reduction in the highest tertile of 2-OHE1 (OR 0.72, 95% CI 0.25-2.10). Conversely, the odds in the highest tertile of 16α-OHE1 showed a non-significant risk increase (OR 1.76, 95% CI 0.62-4.98). There was a suggestion of reduced Pca risk for men in the highest tertile of the 2-OHE1 to 16α-OHE1 ratio (OR 0.56, 95% CI 0.19-1.68). The pooled estimates confirmed the association between an increased Pca risk and higher urinary levels of 16α-OHE1 (third vs. first tertile: OR 1.82, 95% CI 1.09-3.05) and the protective effect of a higher 2-OHE1 to 16α-OHE1 ratio (third vs. first tertile: OR 0.53, 95% CI 0.31-0.90).
Our study and the pooled results provide evidence for a differential role of the estrogen hydroxylation pathway in Pca development and encourage further study.
Teaching the content of clinical practice guidelines (CPGs) is important to both clinical care and graduate medical education. The objective of this study was to determine the characteristics of curricula for teaching the content of CPGs in family medicine and internal medicine residency programs in the United States.
We surveyed the directors of family medicine and internal medicine residency programs in the United States. The questionnaire included questions about the characteristics of the teaching of CPGs: goals and objectives, educational activities, evaluation, aspects of CPGs that the program teaches, the methods of making texts of CPGs available to residents, and the major barriers to teaching CPGs.
Of 434 programs responding (out of 839; 52% response rate), 14% reported having written goals and objectives related to teaching CPGs. The most frequently taught aspect was the content of specific CPGs (76%). The top two educational strategies used were didactic sessions (76%) and journal clubs (64%). Auditing residents for adherence was the primary evaluation strategy (44%), although 36% of program directors conducted no evaluation. Programs most commonly made the texts of CPGs available to residents as paper copies (54%), and the most important barrier to teaching was time constraints on faculty (56%).
Residency programs teach different aspects of CPGs to varying degrees, and most use educational strategies that are not supported by research evidence.