1.  Impact of subspecialty elective exposures on outcomes on the American board of internal medicine certification examination 
BMC Medical Education  2012;12:94.
Background
The American Board of Internal Medicine Certification Examination (ABIM-CE) is one of several methods used to assess medical knowledge, an Accreditation Council for Graduate Medical Education (ACGME) core competency for graduating internal medicine residents. With recent changes in graduate medical education, program directors and internal medicine residents are seeking evidence to guide decisions regarding residency elective choices. Prior studies have shown that formalized elective curricula improve subspecialty ABIM-CE scores. The primary aim of this study was to evaluate whether the number of subspecialty elective exposures, or the specific subspecialties in which residents complete electives, impacts ABIM-CE scores.
Methods
ABIM-CE scores, elective exposures and demographic characteristics were collected for MedStar Georgetown University Hospital internal medicine residents who were first-time takers of the ABIM-CE in 2006–2010 (n=152). Elective exposures were defined as a two-week period assigned to the respective subspecialty. ABIM-CE score was analyzed using the difference between the ABIM-CE score and the standardized passing score (delta-SPS). Subspecialty scores were analyzed using percentage of correct responses. Data were analyzed using GraphPad Prism version 5.00 for Windows.
Results
Paired elective exposure and ABIM-CE scores were available in 131 residents. There was no linear correlation between ABIM-CE mean delta-SPS and the total number of electives or the number of unique elective exposures. Residents with ≤14 elective exposures had higher ABIM-CE mean delta-SPS than those with ≥15 elective exposures (143.4 compared to 129.7, p=0.051). Repeated electives in individual subspecialties were not associated with significant difference in mean ABIM-CE delta-SPS.
Conclusions
This study did not demonstrate significant positive associations between individual subspecialty elective exposures and ABIM-CE mean delta-SPS score. Residents with ≤14 elective exposures had higher ABIM-CE mean delta-SPS than those with ≥15 elective exposures, suggesting there may be an “ideal” number of elective exposures that supports improved ABIM-CE performance. Repeated elective exposures in an individual specialty did not correlate with overall or subspecialty ABIM-CE performance.
doi:10.1186/1472-6920-12-94
PMCID: PMC3480921  PMID: 23057635
Resident education; Gender; Elective; Subspecialty; Graduate medical education
2.  A nomogram to predict the probability of passing the American Board of Internal Medicine examination 
Medical Education Online  2012;17:10.3402/meo.v17i0.18810.
Background
Although the American Board of Internal Medicine (ABIM) certification is valued as a reflection of physicians’ experience, education, and expertise, limited methods exist to predict performance in the examination.
Purpose
The objective of this study was to develop and validate a predictive tool based on variables common to all residency programs, regarding the probability of an internal medicine graduate passing the ABIM certification examination.
Methods
The development cohort was obtained from the files of the Cleveland Clinic internal medicine residents who began training between 2004 and 2008. A multivariable logistic regression model was built to predict the ABIM passing rate. The model was represented as a nomogram, which was internally validated with bootstrap resamples. The external validation was done retrospectively on a cohort of residents who graduated from two other independent internal medicine residency programs between 2007 and 2011.
Results
Of the 194 Cleveland Clinic graduates used for the nomogram development, 175 (90.2%) successfully passed the ABIM certification examination. The final nomogram included four predictors: In-Training Examination (ITE) scores in postgraduate year (PGY) 1, 2, and 3, and the number of months of overnight calls in the last 6 months of residency. The nomogram achieved a concordance index (CI) of 0.98 after correcting for over-fitting bias and allowed for the determination of an estimated probability of passing the ABIM exam. Of the 126 graduates from two other residency programs used for external validation, 116 (92.1%) passed the ABIM examination. The nomogram CI in the external validation cohort was 0.94, suggesting outstanding discrimination.
Conclusions
A simple user-friendly predictive tool, based on readily available data, was developed to predict the probability of passing the ABIM exam for internal medicine residents. This may guide program directors’ decision-making related to program curriculum and advice given to individual residents regarding board preparation.
doi:10.3402/meo.v17i0.18810
PMCID: PMC3475012  PMID: 23078794
board examination; in-training examination; internal medicine; residents; program directors
3.  Are Commonly Used Resident Measurements Associated with Procedural Skills in Internal Medicine Residency Training? 
Background
Acquisition of competence in performing a variety of procedures is essential during Internal Medicine (IM) residency training.
Purposes
To determine the rate of procedural complications by IM residents; to determine whether having 1 or more complications correlated with institutional procedural certification status or with attending ratings of resident procedural skill competence on the American Board of Internal Medicine (ABIM) monthly evaluation form (ABIM-MEF); and to assess whether an association exists between procedural complications and in-training examination and ABIM board certification scores.
Methods
We retrospectively reviewed all procedure log sheets, procedural certification status, ABIM-MEF procedural skills ratings, and in-training exam and certifying examination (ABIM-CE) scores from the period 1990–1999 for graduates of a single IM residency training program.
Results
Among 69 graduates, 2,212 monthly procedure log sheets and 2,475 ABIM-MEFs were reviewed. The overall complication rate was 2.3/1,000 procedures (95% CI: 1.4–3.1/1,000 procedures). With the exception of procedural certification status as judged by institutional faculty, there was no association between our resident measurements and procedural complications.
Conclusions
Our findings support the need for a resident procedural competence certification system based on direct observation. Our data support the ABIM’s action to remove resident procedural competence from the monthly ABIM-MEF ratings.
doi:10.1007/s11606-006-0068-1
PMCID: PMC1824756  PMID: 17356968
procedural skills; Internal Medicine residency training program; ABIM evaluation
5.  Associations between quality indicators of internal medicine residency training programs 
BMC Medical Education  2011;11:30.
Background
Several residency program characteristics have been suggested as measures of program quality, but associations between these measures are unknown. We set out to determine associations between these potential measures of program quality.
Methods
Survey of internal medicine residency programs that shared an online ambulatory curriculum, collecting data on hospital type, faculty size, number of trainees, proportion of international medical graduate (IMG) trainees, Internal Medicine In-Training Examination (IM-ITE) scores, three-year American Board of Internal Medicine Certifying Examination (ABIM-CE) first-try pass rates, Residency Review Committee-Internal Medicine (RRC-IM) certification length, program director clinical duties, and use of pharmaceutical funding to support education. Associations were assessed using Chi-square, Spearman rank correlation, and univariate and multivariable linear regression.
Results
Fifty-one of 67 programs responded (response rate 76.1%), including 29 (56.9%) community teaching and 17 (33.3%) university hospitals, with a mean of 68 trainees and 101 faculty. Forty-four percent of trainees were IMGs. The average post-graduate year (PGY)-2 IM-ITE raw score was 63.1; for PGY3s it was 66.8. The average 3-year ABIM-CE pass rate was 95.8%; average RRC-IM certification was 4.3 years. ABIM-CE results, IM-ITE results, and length of RRC-IM certification were strongly associated with each other (p < 0.05). PGY3 IM-ITE scores were higher in programs with more IMGs and in programs that accepted pharmaceutical support (p < 0.05). RRC-IM certification was shorter in programs with higher numbers of IMGs. In multivariable analysis, a higher proportion of IMGs was associated with 1.17 years shorter RRC accreditation.
Conclusions
Associations between quality indicators are complex, but suggest that the presence of IMGs is associated with better performance on standardized tests but decreased duration of RRC-IM certification.
doi:10.1186/1472-6920-11-30
PMCID: PMC3126786  PMID: 21651768
program quality; Residency Review Committee; American Board of Internal Medicine Certifying Examination
6.  Relationship between internal medicine program board examination pass rates, accreditation standards, and program size 
Objectives: To determine Internal Medicine residency program compliance with the Accreditation Council for Graduate Medical Education 80% pass-rate standard and the correlation between residency program size and performance on the American Board of Internal Medicine Certifying Examination.
Methods
Using a cross-sectional study design and 2010-2012 American Board of Internal Medicine Certifying Examination data from all Internal Medicine residency programs, program pass rates were compared to the Accreditation Council for Graduate Medical Education pass-rate standard. To assess the correlation between program size and performance, a Spearman’s rho was calculated. To evaluate program size and its relationship to the pass-rate standard, receiver operating characteristic curves were calculated.
Results
Of 372 Internal Medicine residency programs, 276 programs (74%) achieved a pass rate of ≥80%, surpassing the Accreditation Council for Graduate Medical Education minimum standard. A weak correlation was found between residency program size and pass rate for the three-year period (ρ=0.19, p<0.001). The area under the receiver operating characteristic curve was 0.69 (95% Confidence Interval [0.63-0.75]), suggesting programs with fewer than 12 examinees/year are less likely to meet the minimum Accreditation Council for Graduate Medical Education pass-rate standard (sensitivity 63.8%, specificity 60.4%, positive predictive value 82.2%, p<0.001).
Conclusions
Although a majority of Internal Medicine residency programs complied with Accreditation Council for Graduate Medical Education pass-rate standards, a quarter of the programs failed to meet this requirement. Program size is positively but weakly associated with American Board of Internal Medicine Certifying Examination performance, suggesting other unidentified variables significantly contribute to program performance.
doi:10.5116/ijme.52c5.6602
PMCID: PMC4207188  PMID: 25341205
Certification; educational measurement; ROC curve; sensitivity and specificity; specialty boards
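The Spearman’s rho reported in this entry measures monotonic association between program size and pass rate: each variable is converted to ranks, then the Pearson correlation of the rank vectors is taken. As an illustrative sketch only (the data below are synthetic, not the study’s), the computation can be written in pure Python:

```python
from typing import List

def rank(values: List[float]) -> List[float]:
    """Assign 1-based ranks, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x: List[float], y: List[float]) -> float:
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Synthetic example: perfectly monotonic data gives rho = 1.0
sizes = [8, 12, 20, 35]
pass_rates = [0.70, 0.80, 0.85, 0.95]
print(round(spearman_rho(sizes, pass_rates), 3))
```

In practice one would use `scipy.stats.spearmanr`, which also returns the p-value; the hand-rolled version above is just to make the rank-then-correlate logic explicit.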
7.  Protocol-directed care in the ICU: making a future generation of intensivists less knowledgeable? 
Critical Care  2012;16(2):307.
Expanded abstract
Citation
Prasad M, Holmboe ES, Lipner RS, Hess BJ, Christie JD, Bellamy SL, Rubenfeld GD, Kahn JM. Clinical Protocols and Trainee Knowledge About Mechanical Ventilation. JAMA. 2011;306(9):935-941. PubMed PMID: 21900133. Available from http://www.pubmed.gov
Background
Clinical protocols are associated with improved patient outcomes; however, they may negatively affect medical education by removing trainees from clinical decision making.
Methods
Objective: To study the relationship between critical care training with mechanical ventilation protocols and subsequent knowledge about ventilator management.
Design: A retrospective cohort equivalence study linking a national survey of mechanical ventilation protocol availability with knowledge about mechanical ventilation. Exposure to protocols was defined as high intensity if an intensive care unit had 2 or more protocols for at least 3 years and as low intensity if 0 or 1 protocol.
Setting: Accredited US pulmonary and critical care fellowship programs.
Subjects: First-time examinees of the American Board of Internal Medicine (ABIM) Critical Care Medicine Certification Examination in 2008 and 2009.
Intervention: N/A
Outcomes: Knowledge, measured by performance on examination questions specific to mechanical ventilation management, calculated as a mechanical ventilation score using item response theory. The score is standardized to a mean (SD) of 500 (100), and a clinically important difference is defined as 25. Variables included in adjusted analyses were birth country, residency training country, and overall first-attempt score on the ABIM Internal Medicine Certification Examination.
Results
Ninety of 129 programs (70%) responded to the survey. Seventy-seven programs (86%) had protocols for ventilation liberation, 66 (73%) for sedation management, and 54 (60%) for lung-protective ventilation at the time of the survey. Eighty-eight (98%) of these programs had trainees who completed the ABIM Critical Care Medicine Certification Examination, totaling 553 examinees. Of these 88 programs, 27 (31%) had 0 protocols, 19 (22%) had 1 protocol, 24 (27%) had 2 protocols, and 18 (20%) had 3 protocols for at least 3 years. Forty-two programs (48%) were classified as high intensity and 46 (52%) as low intensity, with 304 trainees (55%) and 249 trainees (45%), respectively. In bivariable analysis, no difference in mean scores was observed between high-intensity (497; 95% CI, 486-507) and low-intensity programs (497; 95% CI, 485-509). The mean difference was 0 (95% CI, -16 to 16), with a positive value indicating a higher score in the high-intensity group. In multivariable analyses, no association was observed between training in a high-intensity program and mechanical ventilation score (adjusted mean difference, -5.36; 95% CI, -20.7 to 10.0).
Conclusions
Among first-time ABIM Critical Care Medicine Certification Examination examinees, training in a high-intensity ventilator protocol environment compared with a low-intensity environment was not associated with worse performance on examination questions about mechanical ventilation management.
doi:10.1186/cc11257
PMCID: PMC3681378  PMID: 22494787
8.  Burnout and Distress Among Internal Medicine Program Directors: Results of A National Survey 
Journal of General Internal Medicine  2013;28(8):1056-1063.
BACKGROUND
Physician burnout and distress have been described in national studies of practicing physicians, internal medicine (IM) residents, IM clerkship directors, and medical school deans. However, no comparable national data exist for IM residency program directors.
OBJECTIVE
To assess burnout and distress among IM residency program directors, and to evaluate relationships of distress with personal and program characteristics and perceptions regarding implementation and consequences of Accreditation Council for Graduate Medical Education (ACGME) regulations.
DESIGN AND PARTICIPANTS
The 2010 Association of Program Directors in Internal Medicine (APDIM) Annual Survey, developed by the APDIM Survey Committee, was sent in August 2010 to the 377 program directors with APDIM membership, representing 99.0 % of the 381 United States categorical IM residency programs.
MAIN MEASURES
The 2010 APDIM Annual Survey included validated items on well-being and distress, including questions addressing quality of life, satisfaction with work-life balance, and burnout. Questions addressing personal and program characteristics and perceptions regarding implementation and consequences of ACGME regulations were also included.
KEY RESULTS
Of 377 eligible program directors, 282 (74.8 %) completed surveys. Among respondents, 12.4 % and 28.8 % rated their quality of life and satisfaction with work-life balance negatively, respectively. Also, 27.0 % reported emotional exhaustion, 10.4 % reported depersonalization, and 28.7 % reported overall burnout. These rates were lower than those reported previously in national studies of medical students, IM residents, practicing physicians, IM clerkship directors, and medical school deans. Aspects of distress were more common among younger program directors, women, and those reporting greater weekly work hours. Work–home conflicts were common and associated with all domains of distress, especially if not resolved in a manner effectively balancing work and home responsibilities. Associations with program characteristics such as program size and American Board of Internal Medicine (ABIM) pass rates were not found apart from higher rates of depersonalization among directors of community-based programs (23.5 % vs. 8.6 %, p = 0.01). We did not observe any consistent associations between distress and perceptions of implementation and consequences of program regulations.
CONCLUSIONS
The well-being of IM program directors across domains, including quality of life, satisfaction with work-life balance, and burnout, appears generally superior to that of medical trainees, practicing physicians, and other medical educators nationally. Additionally, it is reassuring that program directors' perceptions of their ability to respond to current regulatory requirements are not adversely associated with distress. However, the increased distress levels among younger program directors, women, and those at community-based training programs reported in this study are important concerns worthy of further study.
doi:10.1007/s11606-013-2349-9
PMCID: PMC3710382  PMID: 23595924
graduate medical education; residency; burnout; well-being
9.  Correlation of the Emergency Medicine Resident In-Service Examination with the American Osteopathic Board of Emergency Medicine Part I 
Introduction: Eligible residents during their fourth postgraduate year (PGY-4) of emergency medicine (EM) residency training who seek specialty board certification in emergency medicine may take the American Osteopathic Board of Emergency Medicine (AOBEM) Part 1 Board Certifying Examination (AOBEM Part 1). All residents enrolled in an osteopathic EM residency training program are required to take the EM Resident In-service Examination (RISE) annually. Our aim was to correlate resident performance on the RISE with performance on the AOBEM Part 1. The study group consisted of osteopathic EM residents in their PGY-4 year of training who took both examinations during that same year.
Methods: We examined data from 2009 to 2012 from the National Board of Osteopathic Medical Examiners (NBOME). The NBOME grades and performs statistical analyses on both the RISE and the AOBEM Part 1. We used the RISE exam scores, as reported by percentile rank, and compared them to both the score on the AOBEM Part 1 and the dichotomous outcome of passing or failing. A receiver operating characteristic (ROC) curve was generated to depict the relationship.
Results: We studied a total of 409 residents over the 4-year period. The RISE percentile score correlated strongly with the AOBEM Part 1 score for residents who took both exams in the same year (r=0.61, 95% confidence interval [CI] 0.54 to 0.66). The pass percentage on the AOBEM Part 1 increased with RISE decile, from 0% in the bottom decile to 100% in the top decile. ROC analysis also showed that the best cutoff for predicting a pass or fail on the AOBEM Part 1 was a 65th percentile score on the RISE.
Conclusion: We have shown there is a strong correlation between a resident's percentile score on the RISE during their PGY-4 year of residency training and first-time success on the AOBEM Part 1 taken during the same year. This information may be useful to osteopathic EM residents as an indicator of how well prepared they are for the AOBEM Part 1 Board Certifying Examination.
doi:10.5811/westjem.2013.7.17904
PMCID: PMC3952889  PMID: 24696749
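The ROC analysis in this entry selects the in-training score cutoff that best separates passing from failing examinees. A common criterion for the “best” cutoff is Youden’s J (sensitivity + specificity − 1); the abstract does not state which criterion was used, so the sketch below, with synthetic data, is one plausible reading rather than the study’s method:

```python
from typing import List, Tuple

def best_cutoff(scores: List[float], passed: List[bool]) -> Tuple[float, float]:
    """Return the score cutoff maximizing Youden's J = sensitivity + specificity - 1.

    Examinees scoring at or above the cutoff are predicted to pass.
    """
    best_c, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, p in zip(scores, passed) if p and s >= c)
        fn = sum(1 for s, p in zip(scores, passed) if p and s < c)
        tn = sum(1 for s, p in zip(scores, passed) if not p and s < c)
        fp = sum(1 for s, p in zip(scores, passed) if not p and s >= c)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Synthetic percentile scores and pass/fail outcomes
percentiles = [10, 20, 30, 40, 50, 60]
outcomes = [False, False, False, True, True, True]
cutoff, j = best_cutoff(percentiles, outcomes)
print(cutoff, round(j, 3))
```

With real data the candidate cutoffs would be the observed RISE percentiles, and `scikit-learn`’s `roc_curve` would supply the sensitivity/specificity pairs directly.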
10.  Procedural Experience and Comfort Level in Internal Medicine Trainees 
BACKGROUND
The American Board of Internal Medicine (ABIM) has recommended a specific number of procedures be done as a minimum standard for ensuring competence in various medical procedures. These minimum standards were determined by consensus of an expert panel and may not reflect actual procedural comfort or competence.
OBJECTIVE
To estimate the minimum number of selected procedures at which a majority of internal medicine trainees become comfortable performing that procedure.
DESIGN
Cross-sectional, self-administered survey.
SETTING
A military-based, a community-based, and 2 university-based programs.
PARTICIPANTS
Two hundred thirty-two internal medicine residents.
MEASUREMENTS
Survey questions included number of specific procedures performed, comfort level with performing specific procedures, and whether respondents desired further training in specific procedures. The comfort threshold for a given procedure was defined as the number of procedures at which two thirds or more of the respondents reported being comfortable or very comfortable performing that procedure.
RESULTS
For three of seven procedures selected, residents were comfortable performing the procedure at or below the number recommended by the ABIM as a minimum requirement. However, residents needed more procedures than recommended by the ABIM to feel comfortable with central venous line placement, knee joint aspiration, lumbar puncture, and thoracentesis. Using multivariate logistic regression analysis, variables independently associated with greater comfort performing selected procedures included increased number performed, more years of training, male gender, career goals, and, for skin biopsy, training in the community-based program. Except for skin biopsy, comfort level was independent of training site. A significant number of advanced-year house officers in some programs had little experience in performing selected common ambulatory procedures.
CONCLUSION
Minimum standards for certifying internal medicine residents may need to be reexamined in light of house officer comfort level performing selected procedures.
doi:10.1046/j.1525-1497.2000.91104.x
PMCID: PMC1495602  PMID: 11089715
ABIM; procedure comfort level; residents
11.  Description of a Developmental Criterion-Referenced Assessment for Promoting Competence in Internal Medicine Residents 
Rationale
End-of-rotation global evaluations can be subjective, produce inflated grades, lack interrater reliability, and offer information that lacks value. This article outlines the generation of a unique developmental criterion-referenced assessment that applies adult learning theory and the learner, manager, teacher model, and represents an innovative application to the American Board of Internal Medicine (ABIM) 9-point scale.
Intervention
We describe the process used by Southern Illinois University School of Medicine to develop rotation-specific, criterion-based evaluation anchors that evolved into an effective faculty development exercise.
Results
The intervention gave faculty a clearer understanding of the 6 Accreditation Council for Graduate Medical Education competencies, each rotation's educational goals, and how rotation design affects meaningful work-based assessment. We also describe easily attainable successes in evaluation design and pitfalls that other institutions may be able to avoid. Shifting the evaluation emphasis to the residents' development of competence has made the expectations of rotation faculty more transparent, has facilitated conversations between program directors and residents, and has improved the specificity of the tool for feedback. Our findings showed the new approach reduced grade inflation compared with the ABIM end-of-rotation global evaluation form.
Discussion
We offer the new developmental criterion-referenced assessment, a unique application of the competencies to the ABIM 9-point scale, as a transferable model for improving the validity and reliability of resident evaluations across graduate medical education programs.
doi:10.4300/01.01.0012
PMCID: PMC2931180  PMID: 21975710
12.  Development of an Ambulatory Geriatrics Knowledge Examination for Internal Medicine Residents 
Background
The number of older adults needing primary care exceeds the capacity of trained geriatricians to accommodate them. All physicians should have basic knowledge of optimal outpatient care of older adults to enhance the capacity of the system to serve this patient group. To date, there is no knowledge-assessment tool that focuses specifically on geriatric ambulatory care.
Objective
We developed an examination to assess internal medicine residents' knowledge of ambulatory geriatrics.
Methods
A consensus panel developed a 30-question examination based on topics in the American Board of Internal Medicine (ABIM) Certification Examination Blueprint, the ABIM in-training examinations, and the American Geriatrics Society Goals and Objectives. Questions were reviewed, edited, and then administered to medical students, internal medicine residents, primary care providers, and geriatricians.
Results
Ninety-eight individuals (20 fourth-year medical students, 57 internal medicine residents, 11 primary care faculty members, and 10 geriatrics fellowship-trained physicians) took the examination. Based on psychometric analysis of the results, 5 questions were deleted because of poor discriminatory power. The Cronbach α coefficient of the remaining 25 questions was 0.48; however, assessment of interitem consistency may not be an appropriate measure, given the variety of clinical topics on which questions were based. Scores increased with higher levels of training in geriatrics (P < .001).
Conclusion
Our preliminary study suggests that the examination we developed is a reasonably valid method to assess knowledge of ambulatory geriatric care and may be useful in assessing residents.
doi:10.4300/JGME-D-13-00123.1
PMCID: PMC3886473  PMID: 24455023
13.  Charting the Road to Competence: Developmental Milestones for Internal Medicine Residency Training 
Background
The Accreditation Council for Graduate Medical Education (ACGME) Outcome Project requires that residency program directors objectively document that their residents achieve competence in 6 general dimensions of practice.
Intervention
In November 2007, the American Board of Internal Medicine (ABIM) and the ACGME initiated the development of milestones for internal medicine residency training. ABIM and ACGME convened a 33-member milestones task force made up of program directors, experts in evaluation and quality, and representatives of internal medicine stakeholder organizations. This article reports on the development process and the resulting list of proposed milestones for each ACGME competency.
Outcomes
The task force adopted the Dreyfus model of skill acquisition as a framework for the internal medicine milestones, and calibrated the milestones with the expectation that residents achieve, at a minimum, the “competency” level in the 5-step progression by the completion of residency. The task force also developed general recommendations for strategies to evaluate the milestones.
Discussion
The milestones resulting from this effort will promote competency-based resident education in internal medicine, and will allow program directors to track the progress of residents and inform decisions regarding promotion and readiness for independent practice. In addition, the milestones may guide curriculum development, suggest specific assessment strategies, provide benchmarks for resident self-directed assessment-seeking, and assist remediation by facilitating identification of specific deficits. Finally, by making explicit the profession's expectations for graduates and providing a degree of national standardization in evaluation, the milestones may improve public accountability for residency training.
doi:10.4300/01.01.0003
PMCID: PMC2931179  PMID: 21975701
14.  The historic predictive value of Canadian orthopedic surgery residents’ orthopedic in-training examination scores on their success on the RCPSC certification examination 
Canadian Journal of Surgery  2014;57(4):260-262.
Background
Positive correlation between the orthopedic in-training examination (OITE) and success in the American Board of Orthopaedic Surgery examination has been reported. Canadian training programs in internal medicine, anesthesiology and urology have found a positive correlation between in-training examination scores and performance on the Royal College of Physicians and Surgeons of Canada (RCPSC) certification examination. We sought to determine the potential predictive value of the OITE scores of Canadian orthopedic surgery residents on their success on their RCPSC examinations.
Methods
A total of 118 Canadian orthopedic surgery residents had their annual OITE scores during their 5 years of training matched to the RCPSC examination oral and multiple-choice questions and to overall examination pass/fail scores. We calculated Pearson correlations between the in-training examination for each postgraduate year and the certification oral and multiple-choice questions and pass/fail marks.
Results
There was a predictive association between the OITE and success on the RCPSC examination. The association was strongest between the OITE and the written multiple-choice examination and weakest between the OITE and the overall examination pass/fail marks.
Conclusion
Overall, the OITE was able to provide useful feedback to Canadian orthopedic surgery residents and their training programs in preparing them for their RCPSC examinations. However, when these data were collected, truly normative data based on a Canadian sample were not available. Further study is warranted based on a more refined analysis of the OITE, which is now being produced and includes normative percentile data based on Canadian residents.
doi:10.1503/cjs.014913
PMCID: PMC4119118  PMID: 25078931
15.  Patients’ assessment of professionalism and communication skills of medical graduates 
BMC Medical Education  2014;14:28.
Background
Professionalism and communication skills constitute important components of the integral formation of physicians, with repercussions for the quality of health care and medical education. The objective of this study was to assess medical graduates' professionalism and communication skills from the patients' perspective and to examine their association with patients' socio-demographic variables.
Methods
This is a hospital-based cross-sectional study. It involved 315 patients and 105 medical graduates selected by a convenience sampling method. A modified and validated version of the American Board of Internal Medicine's (ABIM) Patient Assessment survey questionnaire was used for data collection through face-to-face interviews. Data processing and analysis were performed using the Statistical Package for Social Science (SPSS) 16.0. Mean, frequency distribution, and percentage of the variables were calculated. A non-parametric Kruskal-Wallis test was applied to verify whether the patients' assessment was influenced by variables such as age, gender, and education, at a significance level of p ≤ 0.05.
Results
Female patients constituted 46% of the sample, whereas males constituted 54%. The mean age was 36 ± 16 years. Patients' scoring of the graduates' skills ranged from 3.29 to 3.83, with a mean of 3.64, on a five-point Likert scale. Items assessing "patient involvement in decision-making" were assigned the minimum mean values, while items dealing with "establishing adequate communication with patient" were assigned the maximum mean values. Patients who were older than 45 years gave higher scores than younger ones (p < 0.001). Patients with higher education reported much lower scores than those with lower education (p = 0.003). Patients' gender did not show any statistically significant influence on the rating level.
Conclusion
Generally, patients rated the medical graduates' professionalism and communication skills at a good level. Patients' age and educational level were significantly associated with the rating level.
doi:10.1186/1472-6920-14-28
PMCID: PMC3923249  PMID: 24517316
16.  A tool for self-assessment of communication skills and professionalism in residents 
Background
Effective communication skills and professionalism are critical for physicians in order to provide optimum care and achieve better health outcomes. The aims of this study were to evaluate residents' self-assessment of their communication skills and professionalism in dealing with patients, and to evaluate the psychometric properties of a self-assessment questionnaire.
Methods
A modified version of the American Board of Internal Medicine's (ABIM) Patient Assessment survey was completed by 130 residents in 23 surgical and non-surgical training programs affiliated with a single medical school. Descriptive, regression and factor analyses were performed. Internal consistency, inter-item gamma scores, and discriminative validity of the questionnaire were determined.
Results
Factor analysis suggested two groups of items: one group relating to developing interpersonal relationships with patients and one group relating to conveying medical information to patients. Cronbach's alpha (0.86) indicated internal consistency. Males rated themselves higher than females in items related to explaining things to patients. When compared to graduates of U.S. medical schools, graduates of medical schools outside the U.S. rated themselves higher in items related to listening to the patient, yet lower in using understandable language. Surgical residents rated themselves higher than non-surgical residents in explaining options to patients.
Conclusion
This appears to be an internally consistent and reliable tool for residents' self-assessment of communication skills and professionalism. Some demographic differences in self-perceived communication skills were noted.
doi:10.1186/1472-6920-9-1
PMCID: PMC2631014  PMID: 19133146
17.  Teaching Internal Medicine Residents Quality Improvement Techniques using the ABIM’s Practice Improvement Modules 
Summary
Introduction/aim
Standard curricula to teach Internal Medicine residents about quality assessment and improvement, important components of the Accreditation Council for Graduate Medical Education core competencies of practice-based learning and improvement (PBLI) and systems-based practice (SBP), have not been easily accessible.
Program description
Using the American Board of Internal Medicine's (ABIM) Clinical Preventative Services Practice Improvement Module (CPS PIM), we have incorporated a longitudinal quality assessment and improvement curriculum (QAIC) into the 2 required 1-month ambulatory rotations during the postgraduate year 2. During the first block, residents complete the PIM chart reviews and the patient and system surveys. In the second block, residents reflect on their PIM data and the group performs a small test of change using the Plan–Do–Study–Act (PDSA) cycle in the resident continuity clinic.
Program Evaluation
To date, 3 resident quality improvement (QI) projects have been undertaken as a result of QAIC, each making significant improvements in the residents’ continuity clinic. Resident confidence levels in QI skills (e.g., writing an aim statement [71% to 96%, P < .01] and using a PDSA cycle [9% to 89%, P < .001]) improved significantly.
Discussion
The ABIM CPS PIM can be used by Internal Medicine residency programs to introduce QI concepts into their residents’ outpatient practice through encouraging practice-based learning and improvement and systems-based practice.
doi:10.1007/s11606-008-0549-5
PMCID: PMC2517947  PMID: 18449612
Internal Medicine residents; quality improvement; practice-based learning and improvement; systems-based practice; practice improvement module
18.  PREDICTIVE MEASURES OF A RESIDENT'S PERFORMANCE ON WRITTEN ORTHOPAEDIC BOARD SCORES 
The Iowa Orthopaedic Journal  2011;31:238-243.
Objective
Residency programs are continually attempting to predict the performance of both current and potential residents. Previous studies have supported the use of USMLE Steps 1 and 2 as predictors of Orthopaedic In-Training Examination (OITE) and eventual American Board of Orthopaedic Surgery success, while others show no significant correlation. A strong performance on OITE examinations does correlate with strong residency performance, and some believe OITE scores are good predictors of future written board success. The current study was designed to examine potential differences in resident assessment measures and their predictive value for written boards.
Design/Methods
A retrospective review of resident performance data was performed for the past 10 years. Personalized information was removed by the residency coordinator. USMLE Step 1, USMLE Step 2, Orthopaedic In-Training Examination (from first to fifth years of training), and written orthopaedic specialty board scores were collected. Subsequently, the residents were separated into two groups, those scoring above the 35th percentile on written boards and those scoring below. Data were analyzed using correlation and regression analyses to compare and contrast the scores across all tests.
Results
A significant difference was seen between the groups with regard to USMLE scores for both Step 1 and Step 2. Also, a significant difference was found between OITE scores for both the second and fifth years. Positive correlations were found for USMLE Step 1, Step 2, OITE 2, and OITE 5 when compared to performance on written boards. One resident initially failed written boards but passed on the second attempt. This resident consistently scored in the 20th and 30th percentiles on the in-training examinations.
Conclusions
USMLE Step 1 and 2 scores along with OITE scores are helpful in gauging an orthopaedic resident’s performance on written boards. Lower USMLE scores along with consistently low OITE scores likely identify residents at risk of failing their written boards. Close monitoring of the annual OITE scores is recommended and may be useful to identify struggling residents. Future work involving multiple institutions is warranted and would ensure applicability of our findings to other orthopedic residency programs.
PMCID: PMC3215143  PMID: 22096449
19.  Setting a Fair Performance Standard for Physicians’ Quality of Patient Care 
Background
Assessing physicians’ clinical performance using statistically sound, evidence-based measures is challenging. Little research has focused on methodological approaches to setting performance standards to which physicians are being held accountable.
Objective
Determine if a rigorous approach for setting an objective, credible standard of minimally-acceptable performance could be used for practicing physicians caring for diabetic patients.
Design
Retrospective cohort study.
Participants
Nine hundred and fifty-seven physicians from the United States with time-limited certification in internal medicine or a subspecialty.
Main Measures
The ABIM Diabetes Practice Improvement Module was used to collect data on ten clinical and two patient experience measures. A panel of eight internists/subspecialists representing essential perspectives of clinical practice applied an adaptation of the Angoff method to judge how physicians who provide minimally-acceptable care would perform on individual measures to establish performance thresholds. Panelists then rated each measure’s relative importance and the Dunn–Rankin method was applied to establish scoring weights for the composite measure. Physician characteristics were used to support the standard-setting outcome.
Key Results
Physicians abstracted 20,131 patient charts and 18,974 patient surveys were completed. The panel established reasonable performance thresholds and importance weights, yielding a standard of 48.51 (out of 100 possible points) on the composite measure with high classification accuracy (0.98). The 38 (4%) outlier physicians who did not meet the standard had lower ratings of overall clinical competence and professional behavior/attitude from former residency program directors (p = 0.01 and p = 0.006, respectively), lower Internal Medicine certification and maintenance of certification examination scores (p = 0.005 and p < 0.001, respectively), and primarily worked as solo practitioners (p = 0.02).
Conclusions
The standard-setting method yielded a credible, defensible performance standard for diabetes care based on informed judgment that resulted in a reasonable, reproducible outcome. Our method represents one approach to identifying outlier physicians for intervention to protect patients.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1572-x) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-010-1572-x
PMCID: PMC3077491  PMID: 21104453
clinical performance assessment; standard setting; composite measures; diabetes care
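The composite scoring described in entry 19 — per-measure thresholds combined with Dunn–Rankin importance weights into a single 0–100 score compared against a fixed standard — can be sketched as follows. The measure scores, weights, and four-measure setup are hypothetical illustrations, not the study's actual data; only the 48.51 cut point comes from the abstract.

```python
def composite_score(scores, weights):
    """Weighted composite on a 0-100 scale.

    scores  -- per-measure performance, each in [0, 1]
    weights -- importance weights summing to 100 (Dunn-Rankin style)
    """
    if abs(sum(weights) - 100) > 1e-9:
        raise ValueError("weights must sum to 100")
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical example: four measures with importance weights.
scores = [0.90, 0.60, 0.45, 0.70]   # proportion of patients meeting each measure
weights = [40, 25, 20, 15]          # relative importance, summing to 100
total = composite_score(scores, weights)
meets_standard = total >= 48.51     # the study's minimally-acceptable standard
```

Weighting at the aggregation step, rather than averaging raw measures, is what lets the panel's importance judgments shape the final pass/fail classification.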
20.  Developing Educators, Investigators, and Leaders During Internal Medicine Residency: The Area of Distinction Program 
Background
Professional organizations have called for individualized training approaches, as well as for opportunities for resident scholarship, to ensure that internal medicine residents have sufficient knowledge and experience to make informed career choices.
Context and Purpose
To address these training issues within the University of California, San Francisco, internal medicine program, we created the Areas of Distinction (AoD) program to supplement regular clinical duties with specialized curricula designed to engage residents in clinical research, global health, health equities, medical education, molecular medicine, or physician leadership. We describe our AoD program and present this initiative's evaluation data.
Methods and Program Evaluation
We evaluated features of our AoD program, including program enrollment, resident satisfaction, recruitment surveys, quantity of scholarly products, and our residents' certifying examination scores. Finally, we describe the costs of implementing and maintaining the AoDs.
Results
AoD enrollment increased from 81% to 98% during the past 5 years. Both quantitative and qualitative data demonstrated a positive effect on recruitment and improved resident satisfaction with the program, and the number and breadth of scholarly presentations have increased without an adverse effect on our board certification pass rate.
Conclusions
The AoD system led to favorable outcomes in the domains of resident recruitment, satisfaction, scholarship, and board performance. Our intervention showed that residents can successfully obtain clinical training while engaging in specialized education beyond the bounds of core medicine training. Nurturing these interests may empower residents to better shape their careers by providing earlier insight into internist roles that transcend classic internal medicine training.
doi:10.4300/JGME-D-11-00029.1
PMCID: PMC3244321  PMID: 23205204
21.  Factors Associated with American Board of Medical Specialties Member Board Certification among US Medical School Graduates 
Context
Certification by an American Board of Medical Specialties (ABMS) member board is emerging as a measure of physician quality.
Objective
To identify demographic and educational factors associated with ABMS-member-board certification of US medical graduates.
Design, Setting, Participants
Retrospective study of a national cohort of 1997–2000 US medical graduates, grouped by specialty choice at graduation and followed up through March 2, 2009. In separate multivariable logistic regression models for each specialty category, factors associated with ABMS-member-board certification were identified.
Main Outcome Measure
ABMS-member-board certification
Results
Of 42 440 graduates in the study sample, 37 054 (87.3%) were board certified. Graduates in all specialty categories with first-attempt passing scores in the highest tertile (vs first-attempt failing scores) on US Medical Licensing Examination Step 2 Clinical Knowledge were more likely to be board certified; adjusted odds ratios (aOR) varied by specialty category with the lowest odds for emergency medicine (87.4% vs 73.6%; aOR, 1.82; 95% confidence interval [CI], 1.03–3.20) and highest odds for radiology (98.1% vs 74.9%; aOR, 13.19; 95% CI, 5.55–31.32). In each specialty category except family medicine, graduates self-identified as underrepresented racial/ethnic minorities (vs white) were less likely to be board certified, ranging from 83.5% vs 95.6% in the pediatrics category (aOR, 0.44; 95% CI, 0.33–0.58) to 71.5% vs 83.7% in the other non-generalist specialties category (aOR, 0.79; 95% CI, 0.64–0.96). With each $50 000 unit increase in debt (vs no debt), graduates choosing obstetrics/gynecology were less likely to be board certified (aOR, 0.89; 95% CI, 0.83–0.96), and graduates choosing family medicine were more likely to be board certified (aOR 1.13; 95% CI, 1.01–1.26).
Conclusion
Demographic and educational factors were associated with board certification among US medical graduates in every specialty category examined; findings varied among specialty categories.
doi:10.1001/jama.2011.1099
PMCID: PMC3217584  PMID: 21900136
22.  Education Research: Bias and poor interrater reliability in evaluating the neurology clinical skills examination 
Neurology  2009;73(11):904-908.
Objective:
The American Board of Psychiatry and Neurology (ABPN) has recently replaced the traditional, centralized oral examination with the locally administered Neurology Clinical Skills Examination (NEX). The ABPN postulated that the experience with the NEX would be similar to that with the Mini-Clinical Evaluation Exercise, a reliable and valid assessment tool. The reliability and validity of the NEX have not been established.
Methods:
NEX encounters were videotaped at 4 neurology programs. Local faculty and ABPN examiners graded the encounters using 2 different evaluation forms: an ABPN form and one with a contracted rating scale. Some NEX encounters were purposely failed by residents. Cohen’s kappa and intraclass correlation coefficients (ICC) were calculated for local vs ABPN examiners.
Results:
Ninety-eight videotaped NEX encounters of 32 residents were evaluated by 20 local faculty evaluators and 18 ABPN examiners. The interrater reliability for a determination of pass vs fail for each encounter was poor (kappa 0.32; 95% confidence interval [CI] = 0.11, 0.53). ICC between local faculty and ABPN examiners for each performance rating on the ABPN NEX form was poor to moderate (ICC range 0.14-0.44), and did not improve with the contracted rating form (ICC range 0.09-0.36). ABPN examiners were more likely than local examiners to fail residents.
Conclusions:
There is poor interrater reliability between local faculty and American Board of Psychiatry and Neurology examiners. A bias was detected for favorable assessment locally, which is concerning for the validity of the examination. Further study is needed to assess whether training can improve interrater reliability and offset bias.
GLOSSARY
ABIM = American Board of Internal Medicine;
ABPN = American Board of Psychiatry and Neurology;
CI = confidence interval;
HFH = Henry Ford Hospital;
ICC = intraclass correlation coefficients;
IM = internal medicine;
mini-CEX = Mini-Clinical Evaluation Exercise;
NEX = Neurology Clinical Skills Examination;
RITE = residency inservice training examination;
UC = University of Cincinnati;
UM = University of Michigan;
USF = University of South Florida.
doi:10.1212/WNL.0b013e3181b35212
PMCID: PMC2839551  PMID: 19605769
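The interrater-reliability statistic in entry 22, Cohen's kappa for pass/fail decisions, measures agreement beyond what the raters' marginal pass rates would produce by chance. A minimal sketch follows; the ten encounter ratings are invented for illustration and are not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' binary (pass=1 / fail=0) decisions."""
    n = len(rater_a)
    # Observed agreement: fraction of encounters with identical decisions.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal pass/fail rates.
    pa1 = sum(rater_a) / n
    pb1 = sum(rater_b) / n
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail decisions for ten videotaped encounters.
local = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
board = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
kappa = cohens_kappa(local, board)
```

Here 8 of 10 decisions agree, but the chance-corrected kappa is only about 0.58, which illustrates why the study's kappa of 0.32 counts as poor agreement despite raters often coinciding.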
23.  Focused Board Intervention (FBI): A Remediation Program for Written Board Preparation and the Medical Knowledge Core Competency 
Background
Residents deemed at risk for low performance on standardized examinations require focused attention and remediation.
Objective
To determine whether a remediation program for residents identified as at risk for failure on the Emergency Medicine (EM) Written Board Examination is associated with improved outcomes.
Intervention
All residents in 8 classes of an EM 1–3 program were assessed using the In-Training Examination. Residents were enrolled in the Focused Board Intervention (FBI) remediation program based on an absolute score of <70% on the EM 3 examination or a score more than 1 SD below the national mean on the EM 1 or EM 2 examination. Individualized education plans (IEPs) were created for residents in the FBI program, combining self-study audio review lectures with short-answer examinations. The association between first-time pass rate for the American Board of Emergency Medicine (ABEM) Written Qualifying Examination (WQE) and completion of all IEPs was examined using the χ2 test.
Results
Of the 64 residents graduating and sitting for the ABEM examination between 2000 and 2008, 26 (41%) were eligible for the program. Of these, 10 (38%) residents were compliant and had a first-time pass rate of 100%. The control group (12 residents who matched criteria but graduated before the FBI program was in place and 4 who were enrolled but failed to complete the program) had a 44% pass rate (7 of 16), which was significantly lower (χ2  =  8.6, P  =  .003).
Conclusions
The probability of passing the ABEM WQE on the first attempt was improved through the completion of a structured IEP.
doi:10.4300/JGME-D-12-00229.1
PMCID: PMC3771177  PMID: 24404311
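The χ² comparison in entry 23 (10 of 10 compliant residents passing vs 7 of 16 controls) can be reproduced with a plain Pearson chi-square test on the 2×2 table. This sketch uses only the standard library; the erfc-based p-value shortcut is valid only for 1 degree of freedom.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p), with p valid for df=1."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For df=1, the chi-square survival function reduces to erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# FBI-compliant group: 10 passed, 0 failed; control group: 7 passed, 9 failed.
chi2, p = chi2_2x2(10, 0, 7, 9)   # reproduces the abstract's chi2 = 8.6, P = .003
```

With one cell at zero and small counts, a continuity-corrected or exact test would be a more conservative choice; the uncorrected statistic is shown because it matches the values the abstract reports.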
24.  An Assessment of Patient-Based and Practice Infrastructure–Based Measures of the Patient-Centered Medical Home: Do We Need to Ask the Patient? 
Health Services Research  2011;47(1 Pt 1):4-21.
Objective
To examine the importance of patient-based measures and practice infrastructure measures of the patient-centered medical home (PCMH).
Data Sources
A total of 3,671 patient surveys of 202 physicians completing the American Board of Internal Medicine (ABIM) 2006 Comprehensive Care Practice Improvement Module and 14,457 patient chart reviews from 592 physicians completing ABIM's 2007 Diabetes and Hypertension Practice Improvement Module.
Methodology
We estimated the association of patient-centered care and practice infrastructure measures with patient rating of physician quality. We then estimated the association of practice infrastructure and patient rating of care quality with blood pressure (BP) control.
Results
Patient-centered care measures dominated practice infrastructure as predictors of patient rating of physician quality. Having all patient-centered care measures in place versus none was associated with an absolute 75.2 percent increase in the likelihood of receiving a top rating. Both patient rating of care quality and practice infrastructure predicted BP control. Receiving a rating of excellent on care quality from all patients was associated with an absolute 4.2 percent improvement in BP control. For reaching the maximum practice-infrastructure score, this figure was 4.5 percent.
Conclusion
Assessment of physician practices for PCMH qualification should consider both patient-based, patient-centered care measures and practice infrastructure measures.
doi:10.1111/j.1475-6773.2011.01302.x
PMCID: PMC3447253  PMID: 22092245
Patient-centered care; practice infrastructure; medical home; blood pressure control
25.  Surgical Resident Accuracy in Predicting Their ABSITE Score 
Background:
The American Board of Surgery In-Training Examination (ABSITE) is given to all surgical residents as an assessment tool for residents and their programs in preparation for the American Board of Surgery qualifying and certifying examinations. Our objective was to ascertain how well surgical residents could predict their percentile score on the ABSITE using two predictor measures before and one immediately after the examination was completed.
Methods:
A survey was given to surgical residents in postgraduate years (PGY) 2 through 5, as well as to research residents, in November and December 2011 and again immediately after the examination in January 2012, to ascertain their predicted ABSITE scores. Thirty-one general surgery residents participated: PGY-2 (22%), PGY-3 (19.4%), PGY-4 (19.4%), PGY-5 (12.9%), and research residents (25.8%).
Results:
Mean prediction scores were consistently higher than actual examination scores for both junior and senior examination takers, with senior examination predictions exhibiting the highest proportion of variation from the actual examination score. Stratified linear regression analysis showed little predictive significance for all 3 examination predictions against the actual score, except for the senior examination predictions in November 2011 (t test = 2.521, P = .027). We found no statistically significant difference in the proportion of residents overestimating or underestimating their predicted score. Secondary analysis using a linear regression model showed that 2011 scores were a statistically significant predictor of 2012 scores (overall F = 13.258, P = .001, R2 = 0.31) for both junior and senior examinations.
Conclusion:
General surgery residents were not able to accurately predict their ABSITE score; however, the previous year's actual scores were found to have the most predictive value of the next year's actual scores.
doi:10.4293/108680813X13753907290919
PMCID: PMC4035640  PMID: 24960493
ABSITE; In-training examination; Surgical education