Burnout is prevalent in residency training and practice and is linked to medical error and suboptimal patient care. However, little is known about how burnout affects clinical reasoning, which is essential to safe and effective care. The aim of this study was to examine how burnout modulates brain activity during clinical reasoning in physicians. Using functional magnetic resonance imaging (fMRI), brain activity was assessed in internal medicine residents (n = 10) and board-certified internists (faculty, n = 17) from the Uniformed Services University (USUHS) while they answered and reflected upon United States Medical Licensing Examination and American Board of Internal Medicine multiple-choice questions. Participants also completed a validated two-item burnout scale comprising one item assessing emotional exhaustion and one assessing depersonalization. Whole-brain covariate analysis was used to examine blood-oxygen-level-dependent (BOLD) signal with respect to burnout scores while participants answered and reflected upon clinical problems. In residents, higher depersonalization scores were associated with less BOLD signal in the right dorsolateral prefrontal cortex and middle frontal gyrus during reflection on clinical problems, and less BOLD signal in the bilateral precuneus while answering clinical problems. Higher emotional exhaustion scores were associated with more BOLD signal in the right posterior cingulate cortex and middle frontal gyrus in residents. Examination of faculty revealed no significant influence of burnout on brain activity. Residents appear to be more susceptible to the effects of burnout on clinical reasoning, suggesting that they may need both cognitive and emotional support to improve quality of life and to optimize performance and learning. These results inform our understanding of mental stress, cognitive control, and cognitive load theory.
expertise; burnout; clinical reasoning; cognitive load; fMRI
To describe parents’ and adolescents’ perceptions about vaccination.
Qualitative interviews of 22 mothers/grandmothers and 25 10- to 14-year-olds.
Themes emerged in 3 focus areas. (a) Understanding: Both adults and adolescents had difficulty understanding concepts of risks, benefits, prevention, and vaccination. (b) Decision making: Adults saw vaccination as an opportunity to help their adolescent develop skills for transition to adulthood. Adolescents worried about being lied to (reinforced by being told “it won’t hurt”), physical pain, and cleanliness. (c) Preventing sexually transmitted infections: Adults were divided between those who felt their child would not need such a vaccine and those who wanted to “be safe” to protect their child in the future.
Basic concepts about vaccination should be explained to both adults and adolescents. At the same time, adolescence represents an opportunity to learn about responsible decision making; discussion of the risks and benefits of vaccines can be part of the transition to adult decision making.
Maintenance of certification examination performance is associated with quality of care. We aimed to examine relationships between electronic medical knowledge resource use, practice characteristics, and examination scores among physicians recertifying in internal medicine.
We conducted a cross-sectional study of 3,958 United States physicians who took the Internal Medicine Maintenance of Certification Examination (IM–MOCE) between January 1, 2006 and December 31, 2008, and who held individual licenses to one or both of two large electronic knowledge resource programs. We examined associations between physicians’ IM–MOCE scores and their days of electronic resource use, practice type (private practice, residency teaching clinic, inpatient, nursing home), practice model (single or multi-specialty), sex, age, and medical school location.
In the 365 days prior to the IM–MOCE, physicians used electronic resources on a mean (SD, range) of 20.3 (36.5, 0–265) days. In multivariate analyses, the number of days of resource use was independently associated with increased IM–MOCE scores (0.07-point increase per day of use, p = 0.02). Increased age was associated with decreased IM–MOCE scores (1.8-point decrease per year of age, p < 0.001). Relative to physicians working in private practice settings, physicians working in residency teaching clinics and hospital inpatient practices had higher IM–MOCE scores by 29.1 and 20.0 points, respectively (both p < 0.001).
Frequent use of electronic resources was associated with modestly enhanced IM–MOCE performance. Physicians involved in residency education clinics and hospital inpatient practices had higher IM–MOCE scores than physicians working in private practice settings.
Internal Medicine Maintenance of Certification Examination; IM–MOCE; scores; education
Quality improvement (QI) activities are an important part of residency training. National studies are needed to inform best practices in QI training and experience for residents. The impact of the Institutional Review Board (IRB) process on such studies is not well described.
This observational study examined time, length, comfort level, and overall quality of experience for 42 residency training programs in obtaining approval or exemption for a nationally based educational QI study.
For the 42 programs in the study, the time to IRB approval/exemption was highly variable, ranging from less than 1 week to 56.5 weeks; mean and median times were both approximately 18 weeks (SD, 10.8). Greater reported comfort with the IRB process was associated with less time to obtain approval (r = −.50; P < .01; 95% CI, −0.70 to −0.23). A more positive overall quality of experience with the IRB process was also associated with less time to obtain IRB approval (r = −.60; P < .01; 95% CI, −0.74 to −0.36).
The IRB process for residency programs initiating QI studies shows considerable variance that is not explained by attributes of the projects. New strategies are needed to assist and expedite IRB review of QI research in educational settings, reduce interinstitutional variability, and increase educators' comfort with the IRB process.
To investigate the feasibility, reliability, and validity of comprehensively assessing physician-level performance in ambulatory practice.
Data Sources/Study Setting
Ambulatory-based general internists in 13 states participated in the assessment.
We assessed physician-level performance, adjusted for patient factors, on 46 individual measures, an overall composite measure, and composite measures for chronic, acute, and preventive care. Between- versus within-physician variation was quantified by intraclass correlation coefficients (ICCs). External validity was assessed by correlating physician performance with scores on a certification examination.
Data Collection/Extraction Methods
Medical records for 236 physicians were audited for seven chronic and four acute care conditions, and six age- and gender-appropriate preventive services.
Performance on the individual and composite measures varied substantially within (range 5–86 percent compliance on 46 measures) and between physicians (ICC range 0.12–0.88). Reliabilities for the composite measures were robust: 0.88 for chronic care and 0.87 for preventive services. Higher certification exam scores were associated with better performance on the overall (r = 0.19; p <.01), chronic care (r = 0.14, p = .04), and preventive services composites (r = 0.17, p = .01).
Our results suggest that reliable and valid comprehensive assessment of the quality of chronic and preventive care can be achieved by creating composite measures and by sampling feasible numbers of patients for each condition.
Comprehensive assessment; quality of care; primary care; composite measures
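The between- versus within-physician variation reported above is summarized by the intraclass correlation coefficient. As a minimal sketch (using toy numbers, not the study's chart-audit data), a one-way random-effects ICC can be computed from the ANOVA mean squares:

```python
from statistics import mean

def icc_oneway(groups):
    """One-way random-effects ICC(1): the share of total variance that lies
    between groups (here, between physicians) rather than within them."""
    k = len(groups[0])  # assumes equal group sizes for simplicity
    grand = mean(x for g in groups for x in g)
    # mean squares from a one-way ANOVA
    ms_between = k * sum((mean(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    ms_within = sum((x - mean(g)) ** 2 for g in groups for x in g) / (len(groups) * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# toy data: 3 "physicians", 4 sampled compliance scores each
scores = [[80, 82, 79, 81], [60, 62, 58, 61], [70, 71, 69, 72]]
print(round(icc_oneway(scores), 2))  # → 0.98
```

A value near 1 means performance differences are driven mostly by the physician rather than by sampling noise, which is why the ICC range of 0.12–0.88 across measures matters for composite reliability.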
The Internal Medicine In-Training Exam (IM-ITE) assesses the content knowledge of internal medicine trainees. Many programs use the IM-ITE to counsel residents, to create individual remediation plans, and to make fundamental programmatic and curricular modifications.
To assess the association between a multiple-choice testing program administered during 12 consecutive months of ambulatory and inpatient elective experience and IM-ITE percentile scores in third post-graduate year (PGY-3) categorical residents.
Retrospective cohort study.
One hundred and four categorical internal medicine residents. Forty-five residents in the 2008 and 2009 classes participated in the study group, and the 59 residents in the three classes that preceded the use of the testing program, 2005–2007, served as controls.
A comprehensive, elective rotation specific, multiple-choice testing program and a separate board review program, both administered during a continuous long-block elective experience during the twelve months between the second post-graduate year (PGY-2) and PGY-3 in-training examinations.
We analyzed the change in median individual percent correct and percentile scores between the PGY-1 and PGY-2 IM-ITE and between the PGY-2 and PGY-3 IM-ITE in both control and study cohorts. For our main outcome measure, we compared the change in median individual percentile rank between the control and study cohorts between the PGY-2 and the PGY-3 IM-ITE testing opportunities.
After experiencing the educational intervention, the study group demonstrated a significant increase in median individual IM-ITE percentile score of 8.5 percentile points between the PGY-2 and PGY-3 examinations (p < 0.01). This increase was significantly larger than the 1.0-percentile-point increase seen in the control group between its PGY-2 and PGY-3 examinations (p < 0.01).
A comprehensive multiple-choice testing program aimed at PGY-2 residents during a 12-month continuous long-block elective experience is associated with improved PGY-3 IM-ITE performance.
Internal Medicine In-Training Exam; multiple-choice testing; medical knowledge
To examine attitudes and knowledge about vaccinations in postpartum mothers.
This cross-sectional study collected data via a written survey of postpartum mothers in a large teaching hospital in Connecticut. We used multivariable analysis to identify mothers who were less trusting with regard to vaccines.
Of 228 mothers who participated in the study, 29% worried about vaccinating their infants: 23% were worried the vaccines would not work, 11% were worried the doctor would give the wrong vaccine, and 8% worried that “they” are experimenting when they give vaccines. Mothers reported that the most important reasons to vaccinate were to prevent disease in the baby (74%) and in society (11%). Knowledge about vaccination was poor; e.g., only 33% correctly matched chicken pox with varicella vaccine. Mothers who were planning to breastfeed (P = .01), were primiparous (P = .01), or had an income <$40,000 but did not receive support from the Women, Infants, and Children (WIC) program (P = .03) were less trusting with regard to vaccines. Although 70% wanted information about vaccines during pregnancy, only 18% reported receiving such information during prenatal care.
Although the majority of infants receive vaccines, their mothers have concerns and would like to receive immunization information earlier. Mothers who are primiparous, have low family incomes but do not qualify for the WIC program, or are breastfeeding may need special attention to develop a trusting relationship around vaccination. Mothers would benefit from additional knowledge regarding the risks and benefits of vaccines particularly during prenatal care.
Vaccinations; Mothers; Attitudes; Trust; Postpartum period
Assessing physicians’ clinical performance using statistically sound, evidence-based measures is challenging. Little research has focused on methodological approaches to setting performance standards to which physicians are being held accountable.
To determine whether a rigorous approach for setting an objective, credible standard of minimally acceptable performance could be used for practicing physicians caring for diabetic patients.
Retrospective cohort study.
Nine hundred and fifty-seven physicians from the United States with time-limited certification in internal medicine or a subspecialty.
The ABIM Diabetes Practice Improvement Module was used to collect data on ten clinical and two patient experience measures. A panel of eight internists/subspecialists representing essential perspectives of clinical practice applied an adaptation of the Angoff method to judge how physicians who provide minimally acceptable care would perform on individual measures, thereby establishing performance thresholds. Panelists then rated each measure’s relative importance, and the Dunn–Rankin method was applied to establish scoring weights for the composite measure. Physician characteristics were used to support the standard-setting outcome.
Physicians abstracted 20,131 patient charts and 18,974 patient surveys were completed. The panel established reasonable performance thresholds and importance weights, yielding a standard of 48.51 (out of 100 possible points) on the composite measure with high classification accuracy (0.98). The 38 (4%) outlier physicians who did not meet the standard had lower ratings of overall clinical competence and professional behavior/attitude from former residency program directors (p = 0.01 and p = 0.006, respectively), lower Internal Medicine certification and maintenance of certification examination scores (p = 0.005 and p < 0.001, respectively), and primarily worked as solo practitioners (p = 0.02).
The standard-setting method yielded a credible, defensible performance standard for diabetes care based on informed judgment that resulted in a reasonable, reproducible outcome. Our method represents one approach to identifying outlier physicians for intervention to protect patients.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1572-x) contains supplementary material, which is available to authorized users.
clinical performance assessment; standard setting; composite measures; diabetes care
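The Angoff and Dunn–Rankin steps above can be sketched as a weighted composite: panelists' per-measure thresholds are combined using importance weights. The measure names, numbers, and the simple weighted-mean formula below are illustrative assumptions, not the actual ABIM module content or scoring rules:

```python
def composite_standard(angoff, weights):
    """Weighted composite passing standard on a 0-100 scale.

    angoff  : per-measure panel estimates of the proportion a minimally
              acceptable physician would attain (0-1)
    weights : per-measure importance weights (e.g., Dunn-Rankin derived),
              normalized here so the composite is out of 100 points
    """
    total = sum(weights.values())
    return 100 * sum(angoff[m] * w / total for m, w in weights.items())

# hypothetical measures only -- not the module's real measure set
angoff = {"HbA1c tested": 0.70, "BP controlled": 0.45, "foot exam": 0.50}
weights = {"HbA1c tested": 3, "BP controlled": 2, "foot exam": 1}
print(round(composite_standard(angoff, weights), 1))  # → 58.3
```

A physician's observed weighted composite is then compared against this threshold, analogous to the 48.51-point standard reported in the abstract.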
Many have called for ambulatory training redesign in internal medicine (IM) residencies to improve primary care career outcomes. Many believe dysfunctional clinic environments are a key barrier to meaningful ambulatory education, but little is actually known about the educational milieu of continuity clinics nationwide.
We wished to describe the infrastructure and educational milieu at resident continuity clinics and assess clinic readiness to meet new IM-RRC requirements.
National survey of ACGME accredited IM training programs.
Directors of academic and community-based continuity clinics.
Two hundred twenty-one of 365 (62%) clinic directors, representing 49% of training programs, responded. Wide variation exists among continuity clinics in size, structure, and educational organization. Clinics below the 25th percentile of total clinic sessions would not meet RRC-IM requirements for total number of clinic sessions. Only two-thirds of clinics provided a longitudinal mentor. Forty-three percent of directors reported that their trainees felt stressed in the clinic environment, and 25% of clinic directors felt overwhelmed.
The survey used self-reported data and was not anonymous. A slight predominance of larger clinics and university-based clinics responded. Data may not reflect changes made to programs since 2008.
This national survey demonstrates that the continuity clinic experience varies widely across IM programs, with many sites not yet meeting new ACGME requirements. The combination of disadvantaged, ill patients, inadequately resourced clinics, stressed residents, and overwhelmed clinic directors suggests that many sites need substantial reorganization and institutional commitment. New paradigms encouraged by ACGME requirement changes, such as increased separation of inpatient and outpatient duties, are needed to improve the continuity clinic experience.
clinic; resident education; ACGME; primary care
Self-appraisal has repeatedly been shown to be inadequate as a mechanism for performance improvement. This has placed greater emphasis on understanding the processes through which self-perception and external feedback interact to influence professional development. As feedback is inevitably interpreted through the lens of one’s self-perceptions, it is important to understand how learners interpret, accept, and use feedback (or not) and the factors that influence those interpretations. One hundred and thirty-four participants from 8 health professional training/continuing competence programs were recruited to participate in focus groups. Analyses were designed to (a) elicit understandings of the processes used by learners and physicians to interpret, accept, and use (or not) data to inform their perceptions of their clinical performance, and (b) further understand the factors (internal and external) believed to influence interpretation of feedback. Multiple influences appear to affect the interpretation and uptake of feedback, including confidence, experience, and fear of not appearing knowledgeable. Importantly, however, each could have a paradoxical effect, both increasing and decreasing receptivity. Less prevalent but nonetheless important themes suggested mechanisms through which cognitive reasoning processes might impede growth from formative feedback. Many studies have examined the effectiveness of feedback through variable interventions focused on feedback delivery. This study suggests that it is equally important to consider feedback from the perspective of how it is received. The interplay observed between fear, confidence, and reasoning processes reinforces the notion that there is no simple recipe for the delivery of effective feedback.
These factors should be taken into account when trying to understand (a) why self-appraisal can be flawed, (b) why appropriate external feedback is vital (yet can be ineffective), and (c) why we may need to disentangle the goals of performance improvement from the goals of improving self-assessment.
Self-appraisal; Feedback; Confidence; Self-assessment; Performance improvement
The Accreditation Council for Graduate Medical Education (ACGME) Outcome Project requires that residency program directors objectively document that their residents achieve competence in 6 general dimensions of practice.
In November 2007, the American Board of Internal Medicine (ABIM) and the ACGME initiated the development of milestones for internal medicine residency training. ABIM and ACGME convened a 33-member milestones task force made up of program directors, experts in evaluation and quality, and representatives of internal medicine stakeholder organizations. This article reports on the development process and the resulting list of proposed milestones for each ACGME competency.
The task force adopted the Dreyfus model of skill acquisition as a framework for the internal medicine milestones, and calibrated the milestones with the expectation that residents achieve, at a minimum, the “competency” level in the 5-step progression by the completion of residency. The task force also developed general recommendations for strategies to evaluate the milestones.
The milestones resulting from this effort will promote competency-based resident education in internal medicine, and will allow program directors to track the progress of residents and inform decisions regarding promotion and readiness for independent practice. In addition, the milestones may guide curriculum development, suggest specific assessment strategies, provide benchmarks for resident self-directed assessment-seeking, and assist remediation by facilitating identification of specific deficits. Finally, by making explicit the profession's expectations for graduates and providing a degree of national standardization in evaluation, the milestones may improve public accountability for residency training.
The Outcome Project requires high-quality assessment approaches to provide reliable and valid judgments of the attainment of competencies deemed important for physician practice.
The Accreditation Council for Graduate Medical Education (ACGME) convened the Advisory Committee on Educational Outcome Assessment in 2007–2008 to identify high-quality assessment methods. The assessments selected by this body would form a core set that could be used by all programs in a specialty to assess resident performance and enable initial steps toward establishing national specialty databases of program performance. The committee identified a small set of methods for provisional use and further evaluation. It also developed frameworks and processes to support the ongoing evaluation of methods and the longer-term enhancement of assessment in graduate medical education.
The committee constructed a set of standards, a methodology for applying the standards, and grading rules for their review of assessment method quality. It developed a simple report card for displaying grades on each standard and an overall grade for each method reviewed. It also described an assessment system of factors that influence assessment quality. The committee proposed a coordinated, national-level infrastructure to support enhancements to assessment, including method development and assessor training. It recommended the establishment of a new assessment review group to continue its work of evaluating assessment methods. The committee delivered a report summarizing its activities and 5 related recommendations for implementation to the ACGME Board in September 2008.
There are no nationwide data on the methods residency programs are using to assess trainee competence. The Accreditation Council for Graduate Medical Education (ACGME) has recommended tools that programs can use to evaluate their trainees. It is unknown if programs are adhering to these recommendations.
To describe evaluation methods used by our nation’s internal medicine residency programs and assess adherence to ACGME methodological recommendations for evaluation.
All internal medicine programs registered with the Association of Program Directors in Internal Medicine (APDIM).
Descriptive statistics of programs and tools used to evaluate competence; compliance with ACGME recommended evaluative methods.
The response rate was 70%. Programs were using an average of 4.2 to 6.0 tools to evaluate their trainees, with heavy reliance on rating forms. Direct observation and practice- and data-based tools were used much less frequently. Most programs were using at least 1 of the ACGME’s “most desirable” methods of evaluation for all 6 measures of trainee competence. These programs had higher support-staff-to-resident ratios than programs using less desirable evaluative methods.
Residency programs are using a large number and variety of tools for evaluating the competence of their trainees. Most are complying with ACGME-recommended methods of evaluation, especially if the support-staff-to-resident ratio is high.
graduate medical education; residency; ACGME; competency
Few studies have systematically and rigorously examined the quality of care provided in educational practice sites.
The objectives of this study were to (1) describe the patient population cared for by trainees in internal medicine residency clinics; (2) assess the quality of preventive cardiology care provided to these patients; (3) characterize the practice-based systems that currently exist in internal medicine residency clinics; and (4) examine the relationships between quality, practice-based systems, and features of the program: size, type of program, and presence of an electronic medical record.
This is a cross-sectional observational study.
This study was conducted in 15 Internal Medicine residency programs (23 sites) throughout the USA.
The participants included site champions at residency programs and 709 residents.
Abstracted charts provided data about patient demographics, coronary heart disease risk factors, processes of care, and clinical outcomes. Patients completed surveys regarding satisfaction. Site teams completed a practice systems survey.
Chart abstraction of 4,783 patients showed substantial variability across sites. On average, patients had between 3 and 4 of the 9 potential risk factors for coronary heart disease, and approximately 21% had at least 1 important barrier to care. Patients received an average of 57% (range, 30–77%) of the appropriate interventions. Reported satisfaction with care was high. Sites with an electronic medical record showed better overall information management (81% vs 27%) and better modes of communication (79% vs 43%).
This study has provided insight into the current state of practice in residency sites including aspects of the practice environment and quality of preventive cardiology care delivered. Substantial heterogeneity among the training sites exists. Continuous measurement of the quality of care provided and a better understanding of the training environment in which this care is delivered are important goals for delivering high quality patient care.
practice-based learning; systems-based practice; quality of care; preventive cardiology; Internal Medicine residency
Although the inpatient setting has served as the predominant educational site of internal medicine training programs, many changes and factors are currently affecting education in this setting. As a result, many educational organizations are calling for reforms in inpatient training. This report reviews the available literature on specific internal medicine inpatient educational interventions and proposes recommendations for improving internal medicine training in this setting.
We searched Medline for articles published between 1966 and August 2004 which focused on internal medicine training interventions in the inpatient setting; bibliographies of Medline-identified articles, as well as articles suggested by experts in the field provided additional citations. We then reviewed, classified, and abstracted only articles where an assessment of learner outcomes was included.
Thirteen studies of inpatient internal medicine educational interventions were found that included an outcome assessment. All were single-institution studies. The majority of these studies were of poor methodological quality and focused on specific content areas of internal medicine. None assessed the effectiveness or impact of internal medicine core inpatient experiences or curriculum.
This review identifies significant gaps in our understanding of what constitutes effective inpatient education. The paucity of high quality research in the internal medicine inpatient setting highlights the urgent need to formally define and study what constitutes an effective “core” inpatient curriculum.
residency education; inpatient training; residency reform
To determine the independent effect of hospitalist status upon inpatient length of stay after controlling for case mix, as well as patient-level and provider-level variables such as age, years since physician medical school graduation, and volume status of provider.
Observational retrospective cohort study employing a hierarchical random intercept logistic regression model.
Tertiary-care teaching hospital.
All admissions during 2001 to the department of medicine not sent initially to the medical intensive care unit or coronary care unit.
Observed length of stay (LOS) compared to principal diagnosis-related group (DRG)-specific mean LOS for hospitalist and nonhospitalist patients, adjusting for patient age, gender, years since physician graduation from medical school, and physician volume status.
The 9 hospitalists discharged 2,027 patients, while the nonhospitalists discharged 9,361 patients. On average, hospitalist patients were younger, 63.3 versus 73.3 years (P < .0001). Hospitalists were more recently graduated from medical school, 13.8 versus 22.5 years (P = .02). Each year of patient age increased the likelihood of an above-average LOS (odds ratio [OR], 1.01; 95% confidence interval [CI], 1.01 to 1.02; P < .001). In unadjusted analysis, hospitalists were less likely to have an above-average LOS (OR, 0.51; 95% CI, 0.28 to 0.93; P = .03). Adjustment for the effects of patient age and gender, physician gender, years since medical school graduation, and quintile of physician admission volume did not appreciably change the point estimate; hospitalist patients remained less likely to have an above-average LOS (OR, 0.60; 95% CI, 0.32 to 1.11; P = .11).
For a given principal DRG, hospitalist patients were less likely to exceed the average LOS than were nonhospitalist patients. This effect was rather large: hospitalist status reduced the likelihood of an above-average LOS by about 49%. Adjustment for patient age, years since physician graduation, and admission volume did not significantly alter this finding. Further research should focus on identifying specific practices that account for hospitalists’ effects.
hospitalist; length of stay; patient-level variables; provider-level variables; provider volume
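The odds ratios above come from a hierarchical logistic regression; as a simpler hedged illustration of the statistic itself, an unadjusted OR with a Wald 95% confidence interval can be computed from a 2 × 2 table of counts (toy numbers, not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald confidence interval.

    Table layout (counts):
        exposed:   a events, b non-events
        unexposed: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# toy counts: above-average-LOS discharges, hospitalist vs. nonhospitalist
or_, lo, hi = odds_ratio_ci(40, 160, 90, 210)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

An OR below 1 with a CI excluding 1 would indicate hospitalist patients were significantly less likely to exceed the DRG-specific average LOS; the study's adjusted CI (0.32 to 1.11) crossed 1.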
The 80-hour workweek limit for residents provides an opportunity for residency directors to innovate creatively within their programs. Our novel day-float rotation augmented both the educational structure within the inpatient team setting and the ability of house staff to complete their work within the mandated limits. Descriptive evaluation of the rotation was performed through an end-of-rotation questionnaire. The average length of the ward residents’ work week was quantified before and after the rotation’s implementation. Educational portfolios and mentored peer-teaching opportunities enriched the rotation. As measured by our evaluation, this new rotation enhanced learning and patient care while reducing work hours for inpatient ward residents.
internship and residency; workload; medical education; program evaluation
We studied the nature of feedback given after a miniCEX. We investigated whether the feedback was interactive: specifically, whether faculty allowed the trainee to react to the feedback, enabled self-assessment, and helped trainees develop an action plan for improvement. Finally, we investigated the number and types of recommendations given by faculty. One hundred and seven miniCEX feedback sessions were audiotaped. Faculty provided at least 1 recommendation for improvement in 80% of the feedback sessions. The majority of sessions (61%) involved learner reaction, but faculty asked the intern for a self-assessment in only 34% of sessions, and only 8% of sessions involved an action plan from the faculty member. Faculty are using the miniCEX to provide recommendations and often encourage learner reaction, but are underutilizing the other interactive feedback methods of self-assessment and action plans. Programs should consider both specific training in feedback and changes to the miniCEX form to facilitate interactive feedback.
feedback; direct observation; evaluation
From February to April 2003, we performed an e-mail-based survey to assess responses of physicians at Yale University to being offered smallpox vaccine. Of 58 respondents, 3 (5%) had been or intended to be vaccinated. Reasons cited for declining vaccination included: belief that benefits did not outweigh risks (55%), belief that the vaccination program was unnecessary (18%), desire to wait and see what side effects occurred in vaccinees (11%), and worries about compensation or liability (7%). Most (94%) considered risks to themselves, family, or patients in their decision. Only 3% thought a smallpox attack in the next 5 years was likely or very likely. Physicians did not accept the smallpox vaccine because they did not believe the potential benefits were sufficient.
attitudes; smallpox vaccine; survey; vaccination
Incorporating clinical content into medical education faculty development programs has been proposed as a strategy to consolidate faculty continuing medical education time and enhance learning. We developed a faculty development program for ambulatory internal medicine preceptors that integrated primary care genetics with ambulatory precepting. The instructional strategies addressed both areas simultaneously and included facilitated discussions, mini-lectures, trigger tapes, and role plays. To evaluate the program, we conducted a pre-post trial. Skills were measured by retrospective pre-post self-reported ratings, and behaviors by self-reported implementation of commitment-to-change (CTC) statements. Participants’ (N = 26) ambulatory precepting and primary care genetics skill ratings improved after the intervention. They listed an average of 2.4 clinical teaching CTC statements and 2.0 clinical practice CTC statements. By 3 months after the workshop, preceptors as a group had fully implemented 32 (38%), partially implemented 35 (41%), and failed to implement 18 (21%) of the CTC statements. The most common barrier to changing clinical teaching was insufficient skill (8 of 25; 32%); the most common barrier to changing clinical practice was lack of a suitable patient (15 of 25; 60%). Integrating clinical content with clinical teaching in a faculty development workshop is feasible, can improve clinical and teaching skills, and can facilitate behavior change.
faculty development; curriculum; evaluation; integration
To improve the quality and specificity of written evaluations by faculty attendings of internal medicine residents during inpatient rotations.
Prospective randomized controlled trial.
Four hospitals: a tertiary-care university hospital, a Veterans Administration hospital, and two community hospitals.
Eighty-eight faculty and 157 residents from categorical and primary-care internal medicine residency training programs rotating on inpatient general medicine teams.
Focused 20-minute educational session on evaluation and feedback, accompanied by a 3 × 5 reminder card and diary, given to faculty at the start of their attending month.
MEASUREMENTS AND MAIN RESULTS
Primary outcomes: 1) number of written comments from faculty specific to unique, preselected dimensions of competence; 2) number of written comments from faculty describing a specific resident behavior or providing a recommendation; and 3) resident Likert-scale ratings of the quantity and effect of feedback received from faculty. Faculty in the intervention group provided more written comments specific to defined dimensions of competence, a median of three comments per evaluation form versus two in the control group, but when adjusted for clustering by faculty, the difference was not statistically significant (P = .09). Regarding feedback, residents in the intervention group rated the quantity significantly higher (P = .04) and were significantly more likely to make changes in clinical management of patients than residents in the control group (P = .04).
A brief, focused educational intervention delivered to faculty prior to the start of a ward rotation appears to have a modest effect on faculty behavior for written evaluations and promoted higher quality feedback given to house staff.
written evaluation; residents; faculty; educational intervention; controlled trial