Clin Orthop Relat Res. 2009 September; 467(9): 2436–2445.
Published online 2009 June 26. doi: 10.1007/s11999-009-0939-y
PMCID: PMC2866936

Predictors of Success on the American Board of Orthopaedic Surgery Examination

Abstract

Predictors of success of orthopaedic residents on the American Board of Orthopaedic Surgery (ABOS) examination are controversial. We therefore evaluated numerous variables that may suggest or predict candidate performance on the ABOS examination. We reviewed the files of 161 residents (all graduates) from one residency program, divided into two study groups based on whether they passed or failed their first attempt at the ABOS Part I or Part II examination from 1991 through 2005. Predictors of success or failure on the ABOS Part I included the mean percentile score on the Orthopaedic In-Training Examination (OITE) (Years 2 through 4), the percentile OITE score in the last year of training, the US Medical Licensing Examination (USMLE) score, the Dean’s letter, election to Alpha Omega Alpha (AOA), and the number of honors in selected third-year clerkships. All but the USMLE score also predicted passing the ABOS Part II examination. These data suggest there are objective predictors of residents’ performance on the ABOS Part I and Part II examinations.

Introduction

A survey of residency program directors relating first-year performance to possible predictor variables (including USMLE Step 1 and Step 2 scores, grade point average, AOA election, and gender) suggested only the USMLE Step 2 score predicted performance [1]. The relationship between faculty evaluations of residents’ performance and scores on the OITE and the ABOS Part I examination was reported from one residency program in orthopaedic surgery [7]. According to this study, the strongest predictors of positive faculty evaluations were the number of honors grades in the third and fourth years of medical school and election to AOA. A lesser correlation was noted with fine motor activities and publications. The only predictor of a strong score on the OITE was the Scholastic Aptitude Test. These authors reported no factors predicting success on the ABOS Part I examination.

Others have focused on resident selection criteria by surveying program directors to identify specific desired performance outcomes of residents. Gilbart et al. [8] surveyed 12 of 13 English-speaking orthopaedic programs in Canada. These programs reported moderately high rating consistency among interviewers and developed a standard assessment form to evaluate future applicants reliably [8]. Resident performance characteristics that correlated highly included individual work ethic, enthusiasm, orthopaedic experience, and interpersonal qualities [8]; characteristics that correlated less highly included academic record, research, publications, ethical character, and outside interests [8]. Characteristics of applicants who successfully obtained a residency position in orthopaedic surgery included high scores on the Medical College Admission Test (MCAT), high scores on the National Board of Medical Examiners (NBME) Part I, high medical school grade point average, high class rank, and AOA membership [4]. The strongest predictors of success in obtaining an orthopaedic residency position were election to AOA and an NBME Part I score above 500. Bernstein et al. [3] reviewed current selection criteria for orthopaedic residents using surveys and questionnaires of 109 program directors across the country. Twenty-six criteria were ranked by the program directors; the top three were rotation at the program director’s institution, USMLE Step 1 score, and the applicant’s rank in medical school, although the ranking did not specifically correlate with a resident’s performance [3]. Bernstein et al. recognized and agreed with Simon [16], who indicated “one in twelve resident selections was a serious mistake”. Most program directors agreed with Bernstein et al. that problem residents have issues in the affective domain. Bernstein et al. concluded our current system places too much emphasis on cognitive knowledge and that program directors need a better assessment of the affective domain, such as the Defining Issues Test of Rest [14, 15]. Others have stressed the importance of additional variables that should be assessed in residents and medical students [15]: beyond academic performance, these include integrity, diligence, reliability, commitment, problem-solving, caring, respect, and interpersonal skills. Some have suggested most programs’ selection criteria are too reliant on test scores, class rankings, and grades; there is little evidence of a correlation with achievements in residency and practice [13]; and the more important criteria of the selection process must include societal goals and responsibilities [18].

There have been attempts to develop a scoring system or quantitative assessment tool to screen for applicants who will perform well in their residency program and practice and pass the board examinations [5, 6, 17]. A scoring tool developed at the Mayo Clinic, the Quantitative Composite Scoring Tool, appears, based on an early report, to predict performance; outcomes measured included the OITE, the ABOS Part I and Part II examinations, and internal assessment of the resident by the faculty [17]. Klein et al. [10] reported USMLE Step 1 scores were not a strong predictor of passing the ABOS Part I examination, but scores below the 30th percentile on the OITE for senior residents were a strong predictor of failure of the ABOS Part I examination.

Therefore, we first hypothesized objective measures (OITE scores, USMLE Step 1 scores, election to AOA, and number of honors in key clinical clerkships) would predict success on Part I of the ABOS examination. Second, we hypothesized clinical performance measures, such as faculty evaluations of residents and the quality of Deans’ letters of recommendation, would predict success on Part II of the ABOS examination.

Materials and Methods

Residents of one residency program were included in this study. We reviewed the records of each resident who failed the ABOS Part I or Part II examination from 1991 through 2005. A control group consisted of all graduated residents during this same period who passed the ABOS Part I or Part II examination. The pass rates for all graduating residents on the ABOS Part I and Part II examinations during this period were 88% (n = 157) and 93% (n = 150), respectively. From the residents’ files, we collected the following data: mean percentile score on the OITE (Years 2 through 4 combined), percentile OITE score in Year 5 of training, USMLE Step 1 score, NBME score, number of honors received during the clinical years of medical school (surgery, medicine, pediatrics, and orthopaedics), election to AOA, the Dean’s letter, and faculty evaluations of resident performance.

Nineteen residents failed the ABOS Part I and 16 failed the ABOS Part II during this 15-year period. Complete data were available except as follows: three residents in the failure group did not have the fifth-year OITE score in their records, one in the control group did not have the OITE score for the last year of training, and five in the control group did not have all 4 years of OITE scores available. USMLE Step 1 or NBME Part I scores were not available in two records in the failure group and eight in the control group. Honors grades were not in the files of two residents in the failure group and 13 in the control group. Deans’ letters were not available in the files of five residents in the control group. Forty-four graduates had been elected to AOA; two failed Part I and one failed Part II. Fifty-three graduates were not elected to AOA; nine failed Part I and six failed Part II. No AOA chapter was present at the graduate’s medical school in 64 cases; seven failed Part I and five failed Part II. Other data that were largely unavailable included MCAT scores, additional graduate degrees, and research experience. Neither the ranking of the medical school nor the gender of the candidate was related to examination outcome. Too few USMLE Step 2 scores were on record to include them.

Performance on the USMLE examination was recorded as above or below the national mean (210–220), and performance on the NBME as above or below its mean of 500.

The applicants’ Deans’ letters were reviewed independently by three observers (JH, GD, AJ). Each letter was read in its entirety and then assigned, by the observer’s best judgment, to one of three categories: “1” = good, “2” = excellent, or “3” = most outstanding. The assignments were by definition subjective, but every effort was made to judge as consistently as possible. The letter writers’ own wording was often among the most useful factors: descriptors such as “most outstanding” and “truly exceptional” were judged to indicate a “3” rating, more reserved descriptors such as “excellent” and “very strong candidate” a “2” rating, and lukewarm descriptors such as “good” a “1” rating. Although Deans’ letters are subjective and imperfect, their stated purpose is to provide a consistent summary of a candidate’s perceived quality so that candidates can be compared by a third party. We measured interrater reliability using the intraclass correlation coefficient, which indicated excellent agreement among the three raters in rating Deans’ letters (intraclass correlation = 0.84, p < 0.001).
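The interrater-reliability calculation can be illustrated with a short sketch. The paper does not state which ICC variant was used; the one-way random-effects form below is one common choice, and the ratings matrix is invented for illustration (five hypothetical Deans’ letters scored 1–3 by three raters):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_targets x k_raters matrix.

    Computed from the one-way ANOVA mean squares: between-target (MSB)
    and within-target (MSW) variation.
    """
    n, k = len(ratings), len(ratings[0])
    grand_mean = sum(map(sum, ratings)) / (n * k)
    target_means = [sum(row) / k for row in ratings]
    ms_between = k * sum((m - grand_mean) ** 2 for m in target_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, target_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: five letters, three raters, categories 1-3;
# two letters have one dissenting rating each.
letters = [[3, 3, 3],
           [2, 2, 3],
           [1, 1, 1],
           [2, 2, 2],
           [3, 2, 3]]
print(round(icc_oneway(letters), 2))
```

With only two dissenting ratings out of fifteen, this toy matrix yields an ICC of roughly 0.8, on the same order as the 0.84 reported above; perfect agreement would give an ICC of exactly 1.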

Several performance measures were compared between residents who passed and who failed the ABOS Part I or Part II to determine which measures were associated with a successful outcome (ie, first-time passing). We determined differences in the median number of honors and the quality of the Dean’s letter between the control and failure groups for ABOS Parts I and II separately using the Mann-Whitney U-test, and differences in percentile scores (mean ± standard deviation) using the unpaired Student’s t-test. Fisher’s exact test was used to compare binary proportions (membership in AOA, above average on the USMLE) between the control and failure groups [11]. We then used multivariate logistic regression analysis to determine which, if any, of the variables (resident evaluation, OITE score Years 2 through 4, OITE score Year 5, AOA, USMLE Step 1 or NBME Part I above average, honors during clinical years, and Dean’s letter) predicted the first-time outcome on the ABOS Part I and Part II examinations. The likelihood ratio test was used to assess significance, and a backward stepwise procedure was applied to determine the final multivariate predictors associated with examination success [9]. Most of the missing data were in the records of foreign medical graduates, for whom such data are not available; the amount of missing data among American graduates was small and was not recorded. Statistical analysis was performed using the SPSS software package (Version 16.0; SPSS Inc, Chicago, IL). Two-tailed p < 0.05 was considered significant.
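Fisher’s exact test for the binary comparisons can be sketched from first principles. The minimal implementation below is an illustration only (in practice a statistical package such as SPSS, as used here, or scipy.stats.fisher_exact would be used); the example 2 × 2 table uses the AOA counts reported in Results (42 of 85 who passed Part I were AOA members versus two of 12 who failed):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(x):  # P(cell (1,1) == x) with all margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible value of cell (1,1)
    hi = min(row1, col1)            # largest feasible value of cell (1,1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Rows: passed / failed ABOS Part I; columns: AOA member / not a member
p = fisher_exact_2x2(42, 43, 2, 10)
print(f"two-sided p = {p:.4f}")
```

The small tolerance on the comparison guards against floating-point ties between tables that are exactly as likely as the observed one.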

Results

Predictors of success or failure on the ABOS Part I examination included the percentile OITE score in the last year of training, the number of honors in four selected clerkships, the Dean’s letter, the mean percentile OITE score (Years 2 through 4), membership in AOA, and whether the resident scored above or below the mean on the USMLE examination (Table 1). Among residents with membership in AOA, 96% passed Part I and 98% passed Part II on the first attempt, whereas among residents who were not members of AOA, 81% passed Part I and 82% passed Part II on the first attempt. The mean percentile OITE score over 3 years (Years 2 through 4) was lower (p < 0.001) in the failure group than in the control group (16 versus 51). The percentile score in the fifth year of the residency program also was lower (p < 0.001) (15 versus 58) (Fig. 1). Residents who failed were less likely than those who passed to score above average on the USMLE and the OITE. The Dean’s letter predicted passing (p < 0.001) (Fig. 2). Honors in the four selected clerkships also predicted performance: the median number of honors was 3 in the passing group versus 1 in the failure group (p = 0.001).

Table 1
Predictors of performance on ABOS–Part I examination (Harvard Medical School residents, 1991–2005)
Fig. 1
A comparison is shown of the OITE percentile rank score in Year 5 between residents who passed and failed based on first time testing on the ABOS Part I and Part II examinations. Logistic regression analysis indicated that the OITE Year 5 percentile score ...
Fig. 2
A comparison is shown for quality of the Dean’s letter between first time results on the ABOS Part I and Part II examinations. Multivariate analysis indicated the description (good, excellent, outstanding) was a determinant in first-time results ...

The mean percentile OITE score (Years 2 through 4), the percentile OITE score in the last year of training, the Dean’s letter, the number of clerkship honors, and election to AOA predicted outcome on the ABOS Part II examination (Table 2). Among those who passed Part I on their first attempt, 49% (42 of 85) had membership in AOA compared with only 16% (two of 12) of those who failed (Fig. 3). Similarly, 51% (42 of 82) of residents who passed Part II on their first attempt had membership in AOA compared with only 10% (one of 10) of those who failed. The percentile OITE score in the fifth year of the residency program was lower (p = 0.001) for those who failed the ABOS Part II than for those who passed (30% versus 57%). The mean percentile OITE score (Years 2 through 4) also was lower (p < 0.001) for those who failed than for those who passed (32% versus 50%). The number of honors in the four selected third-year clerkships also predicted performance, with medians of 3 and 1 for those who passed versus those who failed (p < 0.001), although scores on the USMLE examination were not predictive of success.

Table 2
Predictors of performance on ABOS–Part II examination (Harvard Medical School residents, 1991–2005)
Fig. 3
The percentage of residents with membership in AOA was higher among those who passed ABOS Parts I and II (49% and 51%) compared with those who failed (16% and 10%) as denoted by the asterisks (p < 0.05, Fisher’s exact tests). ...

Discussion

Several studies [1, 3, 4, 7] describe potential predictors of success of orthopaedic residents on the ABOS examination, but these remain controversial. Therefore, we first asked whether objective measures (OITE scores, USMLE Step 1 scores, and election to AOA) would predict success on Part I of the ABOS examination, and then asked whether clinical performance measures, such as the number of honors in key clinical clerkships and the quality of Deans’ letters of recommendation, would predict success on Part II of the ABOS examination.

The limitations of this retrospective study include the following missing data: seven graduates who failed Part I have not taken Part II; Deans’ letters were not available for seven graduates; OITE scores for Years 2 through 4 were not available for seven; OITE scores in Year 5 were not available for five, and one resident had scores only in Years 4 and 5; honors grades were not available for 16; and USMLE scores were not available for 10. Most of the missing data are from foreign medical school graduates, who did not have Deans’ letters or honors grades (Table 3). Another limitation is the lack of information regarding the affective domain (not tested on the ABOS examinations) and ethics and professionalism (evaluated on the ABOS Part II and, to a limited extent, on the ABOS Part I) of residents. As suggested by Bernstein et al. [3] and Baldwin et al. [2], and as reported by Rest [14], there is some evidence of “a significant relationship between moral reasoning skills and clinical performance”. Although it is difficult to assess qualities in the affective domain, an attempt was made by Rest [14] with the development of his Defining Issues Test, in which the resident is asked to give his or her opinions on six brief stories. We are not aware of this test being used in the selection of residents. Is it not time for program directors to investigate ways of assessing our residents on affective domain characteristics? As we continue to see the best and brightest entering orthopaedic surgery, such studies would be valuable and are long overdue.

Table 3
Performance measures of all graduated residents, 1991–2005

Proven predictors of intern performance have included USMLE Step 2 scores [1]. The strongest predictors of overall resident performance were number of honors grades during the third and fourth years of medical school, election to AOA, and faculty evaluations of resident applicants’ psychomotor performance [6, 7]. Others suggest no “correlation exists for achievements in the clinical years, for postgraduate training, or as physicians” [13] between the MCAT and subsequent practice [12] or between undergraduate grades and performance as a physician [19].

Bernstein et al. [3] surveyed orthopaedic residency program directors to identify an objective method to improve the selection process. When selecting future residents, program directors placed the highest value on USMLE Step 1 scores, class rank in medical school, election to AOA, whether the candidate rotated at the residency’s institution, personal appearance and performance at the interview, letters of recommendation from an orthopaedic surgeon, medical school reputation, and the Dean’s letter. Problem residents exist in most programs, calling into question the validity of our selection process [16]. Simon [16] and Bernstein et al. [3] suggest program directors should obtain professional help in the selection process to assess affective domain qualities and avoid selecting a problem resident. Such professional help could include a psychologist or psychiatrist, methods to select candidates with high “standards of ethics and professionalism” [3], human resource departments, or individuals similar to jury experts.

Developing a scoring system has become increasingly popular [5, 8, 17]. Interobserver reliability with such systems is high for numeric elements and low for subjective elements [5], and scoring is inconsistent across different programs [8]. Gilbart et al. [8] suggested the resident selection process be changed “in a consistent manner” using factors that allow “consistency from year to year.” Successful applicants in one program were reportedly younger and had higher MCAT scores, NBME Part I scores, and medical school grade point averages [4]. A scoring tool for orthopaedic residency selection known as the Quantitative Composite Scoring Tool (QCST) was developed at the Mayo Clinic [17]. When scores on this tool were compared with resident performance (ie, attainment of satisfactory chief resident status, OITE scores, and passing the ABOS Part I and Part II examinations), the QCST score was the strongest predictor of OITE scores and of results on the ABOS Part I and Part II examinations [17]. Honors grades in the third year of medical school were most predictive of satisfactory chief resident status [17].

We examined objective criteria (OITE scores, USMLE Step 1 scores, AOA election, and the number of honors in key clinical clerkships) and subjective criteria (Deans’ letters and faculty evaluations of residents) as predictors of successful performance on the ABOS examinations, the minimum standard to practice orthopaedic surgery in the United States. The most important factors for graduates of our residency program were the OITE score in the fifth year of training (Fig. 1), an excellent or outstanding Dean’s letter (Fig. 2), and the number of honors in the major clinical clerkships of medicine, surgery, pediatrics, and orthopaedics. Other predictors included the average percentile OITE score over 3 years, a USMLE Step 1 score above the mean, and election to the AOA honorary medical society. The USMLE Step 1 score was a predictor of success on Part I of the ABOS examination but not on Part II. A program director should be concerned about a resident about whom the faculty continues to state that “the resident needs to read more” or “study basic science more,” especially if he or she does poorly on the OITE.

Footnotes

Each author certifies that he has no commercial associations (eg, consultancies, stock ownership, equity interest, patent/licensing arrangements, etc) that might pose a conflict of interest in connection with the submitted article.

References

1. Andriole DA, Jeffe DB, Whelan AJ. What predicts surgical internship performance? Am J Surg. 2004;188:161–164. doi: 10.1016/j.amjsurg.2004.03.003. [PubMed] [Cross Ref]
2. Baldwin DC, Jr, Adamson TE, Self DJ, Sheehan TJ, Oppenberg AA. Moral reasoning and malpractice: a pilot study of orthopaedic surgeons. Am J Orthop. 1996;25:481–484. [PubMed]
3. Bernstein AD, Jazrawi LM, Elbeshbeshy B, Della Valle CJ, Zuckerman JD. Orthopaedic resident-selection criteria. J Bone Joint Surg Am. 2002;84:2090–2096. [PubMed]
4. Clark R, Evans EB, Ivey FM, Calhoun JH, Hokanson JA. Characteristics of successful and unsuccessful applicants to orthopedic residency training programs. Clin Orthop Relat Res. 1989;241:257–264. [PubMed]
5. Dirschl DR. Scoring of orthopaedic residency applicants: is a scoring system reliable? Clin Orthop Relat Res. 2002;399:260–264. doi: 10.1097/00003086-200206000-00033. [PubMed] [Cross Ref]
6. Dirschl DR, Campion ER, Gilliam K. Resident selection and predictors of performance. Clin Orthop Relat Res. 2006;449:44–49. [PubMed]
7. Dirschl DR, Dahners LE, Adams GL, Crouch JH, Wilson FC. Correlating selection criteria with subsequent performance as residents. Clin Orthop Relat Res. 2002;399:265–271. doi: 10.1097/00003086-200206000-00034. [PubMed] [Cross Ref]
8. Gilbart MK, Cusimano MD, Regehr G. Evaluating surgical resident selection procedures. Am J Surg. 2001;181:221–225. doi: 10.1016/S0002-9610(01)00550-5. [PubMed] [Cross Ref]
9. Katz MH. Multivariate Analysis. A Practical Guide for Clinicians. Ed 2. New York, NY: Cambridge University Press; 2006:73–116.
10. Klein GR, Austin MS, Randolph S, Sharkey PF, Hilibrand AS. Passing the boards: can USMLE and Orthopaedic in-Training Examination scores predict passage of the ABOS Part-I examination? J Bone Joint Surg Am. 2004;86:1092–1095. [PubMed]
11. Kocher MS, Zurakowski D. Clinical epidemiology and biostatistics: a primer for orthopaedic surgeons. J Bone Joint Surg Am. 2004;86:607–620. [PubMed]
12. Peterson OL, Andrews LP, Spain RS, Greenberg BG. An analytical study of North Carolina general practice 1953–1954. J Med Educ. 1956;31:1–16.
13. Reede JY. Predictors of success in medicine. Clin Orthop Relat Res. 1999;362:72–77. doi: 10.1097/00003086-199905000-00012. [PubMed] [Cross Ref]
14. Rest JR. Development in Judging Moral Issues. Minneapolis, MN: University of Minnesota Press; 1979.
15. Self DJ, Baldwin DC., Jr Should moral reasoning serve as a criterion for student and resident selection? Clin Orthop Relat Res. 2000;378:115–123. doi: 10.1097/00003086-200009000-00019. [PubMed] [Cross Ref]
16. Simon MA. The education of future orthopaedists—déjà vu. J Bone Joint Surg Am. 2001;83:1416–1423. [PubMed]
17. Turner NS, Shaughnessy WJ, Berg EJ, Larson DR, Hanssen AD. A quantitative composite scoring tool for orthopaedic residency screening and selection. Clin Orthop Relat Res. 2006;449:50–55. [PubMed]
18. White AA., 3rd Resident selection: are we putting the cart before the horse? Clin Orthop Relat Res. 2002;399:255–259. doi: 10.1097/00003086-200206000-00032. [PubMed] [Cross Ref]
19. Wingard JR, Williamson JW. Grades as predictors of physicians’ career performance: an evaluative literature review. J Med Educ. 1973;48:311–322. [PubMed]
