PMCC

Results 1-3 (3)
1.  A prospective randomized trial of content expertise versus process expertise in small group teaching 
BMC Medical Education  2010;10:70.
Background
Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment.
Methods
One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination.
Results
Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination group and 79.5% (9.2) for process expert groups (p = 0.11). By linear regression student performance was higher if they had been taught by process experts (regression coefficient 2.7 [0.1, 5.4], p < .05), but not content experts (p = .09).
Conclusions
When preceptors are physicians, content expertise is not a prerequisite to teach first year medical students within a structured small group environment; preceptors with process expertise result in at least equivalent, if not superior, student outcomes in this setting.
doi:10.1186/1472-6920-10-70
PMCID: PMC2966459  PMID: 20946674
2.  Involvement in teaching improves learning in medical students: a randomized cross-over study 
Background
Peer-assisted learning has many purported benefits including preparing students as educators, improving communication skills and reducing faculty teaching burden. But comparatively little is known about the effects of teaching on learning outcomes of peer educators in medical education.
Methods
One hundred and thirty-five first year medical students were randomly allocated to 11 small groups for the Gastroenterology/Hematology Course at the University of Calgary. For each of 22 sessions, two students were randomly selected from each group to be peer educators. Students were surveyed to estimate time spent preparing as peer educator versus group member. Students completed an end-of-course 94 question multiple choice exam. A paired t-test was used to compare performance on clinical presentations for which students were peer educators to those for which they were not.
Results
Preparation time increased from a mean (SD) of 36 (33) minutes at baseline to 99 (60) minutes when serving as peer educators (Cohen's d = 1.3; p < 0.001). The mean score (SD) for clinical presentations in which students were peer educators was 80.7% (11.8), compared to 77.6% (6.9) for those in which they were not (d = 0.33; p < 0.01).
Conclusion
Our results suggest that involvement in teaching small group sessions improves medical students' knowledge acquisition and retention.
doi:10.1186/1472-6920-9-55
PMCID: PMC2739196  PMID: 19706190
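The paired analysis described in study 2's Methods (a within-student comparison of exam performance, summarized with a paired t statistic and Cohen's d) can be sketched as follows. The scores below are made-up illustrative values, not the study's data:

```python
# Hypothetical per-student exam scores (percent) on clinical presentations
# where the student did vs. did not act as peer educator -- illustrative
# numbers only, not the study's data.
from math import sqrt
from statistics import mean, stdev

taught     = [80, 85, 78, 82, 90, 76]
not_taught = [77, 80, 75, 79, 85, 74]

diffs = [a - b for a, b in zip(taught, not_taught)]
n = len(diffs)
mean_d = mean(diffs)
sd_d = stdev(diffs)  # sample SD of the paired differences

t_stat = mean_d / (sd_d / sqrt(n))  # paired t statistic, df = n - 1
cohens_d = mean_d / sd_d            # effect size for a paired design

print(f"t = {t_stat:.2f}, d = {cohens_d:.2f}")
```

The key point of the paired design is that each student serves as their own control: the test is run on the within-student differences, not on the two raw score lists.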
3.  Can standardized patients replace physicians as OSCE examiners? 
Background
To reduce inter-rater variability in evaluations and the demand on physician time, standardized patients (SP) are being used as examiners in OSCEs. There is concern that SP have insufficient training to provide valid evaluation of student competence and/or provide feedback on clinical skills. It is also unknown if SP ratings predict student competence in other areas. The objectives of this study were: to examine student attitudes towards SP examiners; to compare SP and physician evaluations of competence; and to compare predictive validity of these scores, using performance on the multiple choice questions examination (MCQE) as the outcome variable.
Methods
This was a cross-sectional study of third-year medical students undergoing an OSCE during the Internal Medicine clerkship rotation. Fifty-two students rotated through 8 stations (6 physician, 2 SP examiners). Statistical tests used were Pearson's correlation coefficient, two-sample t-test, effect size calculation, and multiple linear regression.
Results
Most students reported that SP stations were less stressful, that SP were as good as physicians in giving feedback, and that SP were sufficiently trained to judge clinical skills. SP scored students higher than physicians (mean 90.4% ± 8.9 vs. 82.2% ± 3.7, d = 1.5, p < 0.001) and there was a weak correlation between the SP and physician scores (coefficient 0.4, p = 0.003). Physician scores were predictive of summative MCQE scores (regression coefficient = 0.88 [0.15, 1.61], p = 0.019) but there was no relationship between SP scores and summative MCQE scores (regression coefficient = -0.23, p = 0.133).
Conclusion
These results suggest that SP examiners are acceptable to medical students, that SP score students higher than physicians do, and that, unlike physician scores, SP scores are not related to other measures of competence.
doi:10.1186/1472-6920-6-12
PMCID: PMC1397828  PMID: 16504145
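Study 3's Methods list Pearson's correlation coefficient among the statistical tests used to compare SP and physician ratings. A minimal sketch of that calculation, using hypothetical station scores (not the study's data, which showed a weaker correlation of 0.4):

```python
# Hypothetical station scores (percent) for five students, each rated by
# both an SP examiner and a physician examiner -- illustrative values only.
from math import sqrt
from statistics import mean

sp_scores = [92, 88, 95, 85, 90]
md_scores = [82, 84, 86, 79, 80]

mx, my = mean(sp_scores), mean(md_scores)
# Pearson's r = covariance / (product of the score spreads)
cov = sum((x - mx) * (y - my) for x, y in zip(sp_scores, md_scores))
var_x = sum((x - mx) ** 2 for x in sp_scores)
var_y = sum((y - my) ** 2 for y in md_scores)

r = cov / sqrt(var_x * var_y)
print(f"r = {r:.2f}")
```

A weak r, as reported in the abstract, means the two examiner types rank students quite differently, which is why the study also compares the predictive validity of each score separately.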
