We studied the effect of an ambulatory faculty development workshop on written feedback during a third-year outpatient medicine clerkship. Our intervention consisted of three 90-minute ambulatory faculty development seminars scheduled 1 week apart. Each seminar comprised a 30-minute minilecture, an interactive discussion of a videotaped simulated teaching encounter, and role-plays. The first session focused on the One-Minute Preceptor and on ambulatory teaching goals in general; the second on methods of evaluation in the ambulatory setting; and the third on the characteristics of quality feedback and how to provide it. The third session also included a dedicated 20-minute block of instruction on effective written feedback, in which we stressed the need for specificity and used examples of suboptimal written feedback as the basis for interactive discussion of how to improve it.
During this clerkship, preceptors routinely fill out 3″ × 5″ cards rating student performance after each encounter. We compared written feedback from the 3 months before and the 3 months after faculty participation in the ambulatory teaching program. After each encounter, teachers and learners were also asked several Likert-type questions about the amount and quality of feedback provided; these survey questions did not distinguish verbal from written feedback. This study was performed in conjunction with an assessment of verbal teacher–learner interactions, including audiotapes, described elsewhere.4
Students were aware that a study of verbal and written feedback was ongoing, but not that a faculty development workshop was taking place. Our local human use committee approved this protocol, and informed consent was obtained from faculty participants.
Photocopies of the cards were transcribed and stripped of references to students, preceptors, and patients, as well as of any indication of whether they were collected before or after the faculty workshop. The unit of analysis for coding was a discrete statement. Based on a systematic review of the medical education literature, we developed a preliminary coding scheme; two academic general internists (PGO, SMS) then reviewed the transcribed cards and independently coded the feedback statements using QSR NUD*IST software (version 4.0, Qualitative Solutions and Research Corp, Melbourne, Australia). The two coders discussed discrepancies in the coding, reaching consensus on the meaning and application of each code. Each cycle produced a slightly revised system that each reviewer then applied independently to another transcript. After several cycles, a complete coding system was developed, with feedback broadly classified as formative or summative (). Formative feedback was subcategorized as knowledge, skill, or attitudes. In addition, feedback was coded as general, specific, positive, or corrective (). Once the complete coding system was developed, each rater independently coded encounters with good interrater reliability (Spearman's ρ: 0.86).
Definition of Selected Types of Written Feedback
Categorization of Feedback before and after Faculty Development Seminars*
Because the intervention might have affected individual attendings differentially, producing clustered results, we analyzed the data using regression modeling with the Huber/White/sandwich estimator of variance.13
This method adjusts for clustered observations and assumes that observations are independent between clusters (here, the individual attendings) but not necessarily within them.
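To make the clustering adjustment concrete, the following is a minimal sketch of cluster-robust ordinary least-squares standard errors in Python. It is illustrative only, not the authors' analysis code: the function name `cluster_robust_ols` and the finite-cluster correction shown are our assumptions about a standard implementation of the Huber/White/sandwich estimator with clustering.

```python
import numpy as np

def cluster_robust_ols(X, y, groups):
    """OLS with the Huber/White/sandwich variance estimator, clustered on
    `groups`. Observations are treated as independent across clusters
    (e.g., individual attendings) but possibly correlated within one."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y            # ordinary least-squares estimate
    resid = y - X @ beta
    # "Meat" of the sandwich: sum over clusters g of (X_g' e_g)(X_g' e_g)'
    meat = np.zeros((k, k))
    labels = np.unique(groups)
    for g in labels:
        idx = groups == g
        score_g = X[idx].T @ resid[idx]
        meat += np.outer(score_g, score_g)
    G = len(labels)
    # Common finite-cluster degrees-of-freedom correction
    c = (G / (G - 1)) * ((n - 1) / (n - k))
    V = c * XtX_inv @ meat @ XtX_inv    # the sandwich: bread * meat * bread
    se = np.sqrt(np.diag(V))
    return beta, se
```

With data clustered by attending, `se` will generally exceed the naive OLS standard errors when feedback ratings are positively correlated within an attending, which is the concern that motivates this estimator.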