Presented at the Society of General Internal Medicine Meeting in May 2002.
We performed a pre–post study of the impact of three 90-minute faculty development workshops on written feedback from encounters during an ambulatory internal medicine clerkship. We coded 47 encounters before and 43 after the workshops, involving 9 preceptors and 44 third-year students, using qualitative and semiquantitative methods. Postworkshop, the mean number of feedback statements per card increased from 2.8 to 3.6 (P = .06); specific (P = .04), formative (P = .03), and student skills feedback (P = .01) increased, but attitudinal (P = .13) and corrective feedback (P = .41) did not. Brief, interactive faculty development workshops may improve written feedback, yielding more specific, formative comments.
Feedback in ambulatory encounters is uncommon, ranging from 3.5% to 19% in various studies.1–4 Furthermore, the scant feedback delivered is generally not specific and is rarely corrective.4–7 The contribution of written feedback to the evaluation and feedback process has been increasingly recognized. Initial research suggests written feedback cards have both high completion compliance8 and potential benefits in increasing the amount of faculty-rated explicit feedback.9 Another study demonstrated a small but nonsignificant improvement in the number and quality of written feedback comments by a group of inpatient attending physicians receiving faculty development seminars; residents felt the quality of feedback from the attendings who received feedback training was significantly better.10
In a prior study,4 we documented that the quantity and quality of specific verbal feedback in the ambulatory setting improved with faculty development seminars on evaluation and feedback based on the One Minute Preceptor11 teaching model. The verbal feedback was measured through a novel qualitative and semiquantitative technique of coding, the Teacher-Learner Interactive Assessment System.12 While the amount and quality of verbal feedback improved and some aspects of preceptor self-perception improved, learner satisfaction remained unchanged. Building on that work, we performed the current study to characterize the content of written feedback in the outpatient clinic, and discern whether the amount and quality of written feedback also could be improved after our faculty development program.
We studied the effect of an ambulatory faculty development workshop on written feedback during a third-year outpatient medicine clerkship. Our intervention consisted of three 90-minute ambulatory faculty development seminars scheduled 1 week apart. Each seminar comprised a 30-minute minilecture, an interactive discussion of a videotaped simulated teaching encounter, and role-plays. The first session focused on the One-Minute Preceptor and on ambulatory teaching goals in general, the second on methods of evaluation in the ambulatory setting, and the third on the characteristics of quality feedback and how to provide it. During the third session, a specific 20-minute block of instruction covering effective written feedback was provided. We stressed the need to be specific and interactively discussed improving written feedback using suboptimal feedback examples.
Routinely during this clerkship, preceptors fill out 3″ × 5″ cards rating student performance after each encounter. Written feedback for 3 months before and after faculty participation in an ambulatory teaching program was compared. After each encounter, teachers and learners were also asked several Likert-type questions on the amount and quality of feedback provided. Survey questions did not differentiate between verbal and written feedback. This study was performed in conjunction with assessment of verbal teacher–learner interactions, including audiotapes, described elsewhere.4 Students were aware that a study of verbal and written feedback was ongoing, but not that a faculty development workshop was taking place. Our local human use committee approved this protocol and informed consent was obtained from faculty participants.
Photocopies of the cards were transcribed and were stripped of references to students, preceptors, and patients as well as whether they were collected before or after the faculty workshop. The unit of analysis for coding was a discrete statement. Based on a systematic review of medical education literature, we developed a preliminary coding scheme, and then 2 academic general internists (PGO, SMS) reviewed the transcribed cards and independently coded the feedback statements using QSR NUD*IST software (version 4.0, Qualitative Solutions and Research Corp, Melbourne, Australia). The two coders discussed discrepancies in the coding, reaching consensus on the meaning and application of each code. Each cycle produced a slightly revised system that was independently applied by each reviewer to another transcript. After several cycles, a complete coding system was developed, with feedback broadly classified as formative or summative (Table 1). Formative feedback was subcategorized as knowledge, skill, or attitudes. In addition, feedback was coded as general, specific, positive, or corrective (Table 2). Once the complete coding system was developed, each rater independently coded encounters with good interrater reliability (Spearman's ρ: 0.86).
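To illustrate the interrater reliability statistic reported above, Spearman's ρ ranks each rater's values (averaging ranks for ties) and takes the Pearson correlation of the ranks. The sketch below is a minimal pure-Python illustration with invented rating data; the study's actual coding was performed in QSR NUD*IST, and standard statistical packages provide this calculation directly.

```python
def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with tied values assigned the average of their rank positions."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j over a run of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Invented example: two raters' statement counts per card
rater1 = [3, 1, 4, 2, 2]
rater2 = [4, 1, 5, 2, 3]
rho = spearman_rho(rater1, rater2)
```

A ρ near 1 indicates the two coders ranked the encounters nearly identically, which is how agreement of 0.86 would be interpreted here.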
Because of concern about clustering of results owing to a differential effect of the intervention on individual attendings, we analyzed the data using regression modeling with the Huber/White/sandwich estimator of variance.13 This method of analysis adjusts for observations that are clustered and is based on the assumption that observations are independent between the clusters (here the individual attending) but not within.
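The cluster-robust ("sandwich") variance estimator can be sketched for the simplest version of this design: regressing per-card statement counts on a pre/post indicator, with residuals aggregated within each attending (the cluster) before forming the middle "meat" matrix. The sketch below is a minimal pure-Python illustration with invented counts, not the study's data; production analyses use a statistical package (e.g., Stata's cluster option), which also applies small-sample corrections omitted here.

```python
def ols_cluster(y, x, cluster):
    """OLS of y on [1, x] with a cluster-robust (Huber/White/sandwich)
    standard error for the slope. Small-sample corrections omitted."""
    # Accumulate X'X and X'y for the 2-column design matrix [1, x]
    sxx = [[0.0, 0.0], [0.0, 0.0]]
    sxy = [0.0, 0.0]
    for yi, xi in zip(y, x):
        sxx[0][0] += 1.0
        sxx[0][1] += xi
        sxx[1][0] += xi
        sxx[1][1] += xi * xi
        sxy[0] += yi
        sxy[1] += xi * yi
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    inv = [[sxx[1][1] / det, -sxx[0][1] / det],
           [-sxx[1][0] / det, sxx[0][0] / det]]
    b0 = inv[0][0] * sxy[0] + inv[0][1] * sxy[1]
    b1 = inv[1][0] * sxy[0] + inv[1][1] * sxy[1]
    # "Meat": sum over clusters g of (X_g' e_g)(X_g' e_g)' — residuals
    # are summed within a cluster first, so within-cluster correlation
    # is absorbed rather than assumed away.
    scores = {}
    for yi, xi, g in zip(y, x, cluster):
        e = yi - (b0 + b1 * xi)
        s = scores.setdefault(g, [0.0, 0.0])
        s[0] += e          # residual times intercept column
        s[1] += e * xi     # residual times x column
    meat = [[0.0, 0.0], [0.0, 0.0]]
    for s in scores.values():
        meat[0][0] += s[0] * s[0]
        meat[0][1] += s[0] * s[1]
        meat[1][0] += s[1] * s[0]
        meat[1][1] += s[1] * s[1]
    # Sandwich: inv(X'X) * meat * inv(X'X); return SE of the slope
    def mat2mul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
                 a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0],
                 a[1][0]*b[0][1] + a[1][1]*b[1][1]]]
    v = mat2mul(mat2mul(inv, meat), inv)
    return b0, b1, v[1][1] ** 0.5

# Invented example: counts per card, pre (0) / post (1), preceptor id
y = [2, 3, 2, 4, 3, 5, 3, 4]
x = [0, 0, 0, 0, 1, 1, 1, 1]
cluster = ["A", "A", "B", "B", "A", "A", "B", "B"]
b0, b1, se = ols_cluster(y, x, cluster)  # b1 is the pre-to-post change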
Nine of 11 preceptors and all 44 medical students rotating on the service during this time period participated. Faculty participants were 78% male, board-certified internists, and attended 100% of the seminars. There were 50 consecutive ambulatory encounters before and after the faculty development seminars, with 91% of the cards before and 86% after the intervention returned (average number of cards per preceptor: 5.2 [range 3–9] before, 4.8 [range 3–6] after).
All but one preceptor increased the absolute number of feedback comments on their cards. The average number of comments on each card increased from 2.8 to 3.6 statements after the seminars. However, this increase was not significant after adjusting for clustered analysis (P = .06). Overall, formative feedback was more common than summative feedback and increased after the seminars from 57% to 77% (P = .03). The most common formative feedback commented on student fund of knowledge regarding patient management decisions or differential diagnosis and comprised approximately a third of written feedback statements before and after the seminars (P = .91). Formative feedback on student skills increased from 18% to 34% after the seminars (P = .009). The skills most commonly receiving preceptor comments included history taking, physical examination, and oral presentation. Summative feedback decreased from 43% to 23% after the seminars (P = .03), and was generally limited to providing the student's grade for the teaching encounter: 7% of students were classified as reporters, 46% as interpreters, 45% as managers, and 2% as educators.
Written feedback on attitudes was uncommon, comprising 8% of comments before and 9% after the seminars (P = .13). The most common type of attitudinal feedback was on the doctor–patient relationship and professionalism. Only 2 (1.5%) attitudinal feedback statements before and 2 (1.4%) feedback statements after the seminars were corrective. These corrective statements commented on student enthusiasm and work ethic.
Feedback was further classified as general or specific and as positive or corrective. Most feedback was general, though the amount of feedback linked to specific student behaviors increased from 22% to 38% of statements after the seminars (P = .04). The majority of the written feedback was positive, with only 17% before and 25% after the seminars classified as corrective (P = .41). Of corrective feedback statements, most dealt with student knowledge and skills.
Preceptors felt that their feedback was more specific and linked to student behaviors after the workshops (5.4–6.0, P = .03), though their rating of the appropriateness of the amount of feedback they provided was unchanged (5.8–6.1, P = .98). Student satisfaction with the appropriateness of the amount of feedback was high and unchanged (averaging 6.3 on a 7-point scale before and after), as was student agreement that feedback was concrete and linked to specific behaviors (6.3 before and after).
We found that preceptors participating in ambulatory faculty development seminars increased their specific written feedback. The proportion of feedback spent “just providing the grade” decreased as specific formative feedback, linked to specific student behaviors, increased. This is important, since learners rate preceptor feedback specifically linked to concrete events and providing suggestions for improvement as most helpful.5,14–16 Moreover, our rates of 3″ × 5″ card return were high (>85%) both before and after the seminars, suggesting that the documentation requirements imposed by feedback cards are reasonable in a fast-paced ambulatory setting.
Written feedback has received increasing attention as a technique to supplement verbal feedback.8–10 Holmboe et al. randomly delivered training on written evaluation and feedback at the beginning of a medicine inpatient rotation and found that just over half of written comments were "dimension-specific," similar to our formative feedback category. They found lower rates of specific feedback (8%), unchanged with intervention.10 In contrast, we found a baseline 22% rate of specific feedback, increasing to 38% after our intervention. Their faculty development intervention consisted of a single 20-minute lecture on providing effective written feedback, compared with our three interactive 90-minute seminars spread over several weeks.
Verbal feedback is rarely specific, comprising only 1% to 3% of total teacher utterances in ambulatory encounters.4 Corrective verbal feedback is also uncommon, ranging from 0% to 11%.1,2,4 In our study, written corrective feedback was more frequent (17%), improving to 25% after the seminars. Having preceptors deliver formative comments in writing may allow the learner to receive more balanced positive and corrective feedback. Unfortunately, it is unclear how much of this written feedback reaches our learners, even though they have the opportunity to view the feedback cards in their training files. It is important that any written feedback system be designed to ensure that the students receive this feedback in a timely manner.
While preceptors provided more balanced feedback in regard to knowledge and skills after the seminars, discussion of student attitudes remained uncommon. Delivering feedback on learner attributes such as the physician–patient relationship, integrity, enthusiasm, and professionalism has been a long-acknowledged problem area for preceptors.5 Further efforts are needed to define faculty development interactions that can increase feedback in this critical area of learner development.
Our study has several limitations. First, we studied a small number of preceptors, all of whom had completed the Stanford Faculty Development program. This may have made them more receptive to faculty development and may limit generalizability. While learner and preceptor satisfaction was high, our surveys did not differentiate between verbal and written feedback. Our pre–post study design may introduce bias since the preceptors knew their feedback cards were being analyzed. Finally, our results are limited to a short-term time frame. Previous research has shown the effect of faculty development programs may diminish over time.3
We conclude that using feedback cards in the ambulatory setting collected after individual teaching sessions may be a useful adjunct to verbal feedback in obtaining specific examples of student performance. The quality of written feedback is quantifiable, and can improve after faculty development. Formative written feedback on student attitudes remains uncommon and further studies are needed to explore techniques to improve feedback in this area.