To reduce inter-rater variability in evaluations and the demand on physician time, standardized patients (SP) are being used as examiners in objective structured clinical examinations (OSCEs). There is concern that SP have insufficient training to evaluate student competence validly or to provide feedback on clinical skills. It is also unknown whether SP ratings predict student competence in other areas. The objectives of this study were: to examine student attitudes towards SP examiners; to compare SP and physician evaluations of competence; and to compare the predictive validity of these scores, using performance on the multiple-choice question examination (MCQE) as the outcome variable.
This was a cross-sectional study of third-year medical students undergoing an OSCE during the Internal Medicine clerkship rotation. Fifty-two students rotated through 8 stations (6 with physician examiners, 2 with SP examiners). Statistical tests used were Pearson's correlation coefficient, the two-sample t-test, effect size calculation, and multiple linear regression.
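The comparisons described above (two-sample t-test, effect size, and Pearson's correlation) can be sketched as follows. This is a minimal illustration on simulated data with the reported means and standard deviations; the arrays are hypothetical stand-ins, not the study's actual per-student scores.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student OSCE scores (percent), simulated for illustration
# from the reported group means/SDs; not the study's actual data.
rng = np.random.default_rng(0)
sp_scores = rng.normal(90.4, 8.9, 52)   # SP-examiner stations
md_scores = rng.normal(82.2, 3.7, 52)   # physician-examiner stations

# Two-sample t-test comparing SP vs. physician ratings
t_stat, p_val = stats.ttest_ind(sp_scores, md_scores, equal_var=False)

# Effect size (Cohen's d) using the pooled standard deviation
pooled_sd = np.sqrt(
    (sp_scores.std(ddof=1) ** 2 + md_scores.std(ddof=1) ** 2) / 2
)
d = (sp_scores.mean() - md_scores.mean()) / pooled_sd

# Pearson correlation between each student's SP and physician scores
r, p_r = stats.pearsonr(sp_scores, md_scores)
```

With paired per-student scores, the correlation step would use each student's two scores; here the simulated arrays are independent, so it serves only to show the call.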
Most students reported that SP stations were less stressful, that SP were as good as physicians at giving feedback, and that SP were sufficiently trained to judge clinical skills. SP scored students higher than physicians (mean 90.4% +/- 8.9 vs. 82.2% +/- 3.7, d = 1.5, p < 0.001), and there was a weak correlation between SP and physician scores (r = 0.4, p = 0.003). Physician scores were predictive of summative MCQE scores (regression coefficient = 0.88 [0.15, 1.61], p = 0.019), but there was no relationship between SP scores and summative MCQE scores (regression coefficient = -0.23, p = 0.133).
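The predictive-validity analysis above regresses MCQE scores on the OSCE examiner scores. A minimal ordinary-least-squares sketch is below; the data are simulated and the generating relationship (MCQE tracking physician scores but not SP scores) is an assumption made only so the example has structure, not a restatement of the study's data.

```python
import numpy as np

# Hypothetical data for 52 students; values and the generating model are
# assumptions for illustration only.
rng = np.random.default_rng(1)
n = 52
physician = rng.normal(82.2, 3.7, n)                 # physician OSCE scores
sp = rng.normal(90.4, 8.9, n)                        # SP OSCE scores
mcqe = 10 + 0.9 * physician + rng.normal(0, 3, n)    # assumed: tracks physician only

# Multiple linear regression: MCQE ~ intercept + physician + SP
X = np.column_stack([np.ones(n), physician, sp])
beta, *_ = np.linalg.lstsq(X, mcqe, rcond=None)
intercept, b_physician, b_sp = beta
```

Under this assumed model, the fitted coefficient on physician scores is substantial while the coefficient on SP scores is near zero, mirroring the pattern of the reported regression results.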
These results suggest that SP examiners are acceptable to medical students, that SP rate students higher than physicians do, and that, unlike physician scores, SP scores are not related to other measures of competence.