Little evidence exists to support the value of reflection in the clinical setting.
To determine whether reflecting and revisiting the “patient” during a standardized patient (SP) examination improves junior medical students’ performance and to analyze students’ perceptions of its value.
Students completed a six-encounter clinical skills examination, writing a guided assessment after each encounter to trigger reflection. SPs evaluated the students with Medical Skills and Patient Satisfaction checklists. During the last three encounters, students could opt to revisit the SP and be reevaluated with identical checklists.
One hundred and forty-nine third year medical students.
Changes in scores in the Medical Skills and Patient Satisfaction checklists between first visit and revisit were tested separately per case as well as across cases.
On the Medical Skills and Patient Satisfaction checklists, mean revisit scores across cases were significantly higher than mean first visit scores [12.6 vs 12.2 (pooled SD=2.4), P=.0001; 31.2 vs 31.0 (pooled SD=3.5), P=.0001]. Sixty-five percent of the time, students rated “reflect–revisit” positively, 34% neutrally, and 0.4% negatively. Five themes were identified in the positive comments: enhancement of (1) medical decision making, (2) patient education/counseling, (3) student satisfaction/confidence, (4) patient satisfaction/confidence, and (5) clinical realism.
Offering third year medical students the option to reflect and revisit an SP during a clinical skills examination produced a small but nontrivial increase in clinical performance. Students perceived the reflect–revisit experience as enhancing patient-centered practices (counseling, education) as well as their own medical decision making and clinical confidence.
“While feedback is not used often enough, reflection is used less.” W. T. Branch (Acad Med, 77(12):1185–1188, 2002)
The value of reflection in improving clinician–patient understanding and clinician decision making is described in recent medical literature.1–3 Branch notes that reflection in medicine “include[s] consideration of the larger context, the meaning, and the implications of an experience or action” (p. 1185).1 Reflection as a door to “the larger context” is also described in education literature.4,5 Educators emphasize its potential to be transformative—to expand the minds of learners and emancipate them from presuppositions.5,6
In addition to its transformative potential, educators also emphasize reflection’s role in the context of everyday practice. It is seen as an aid to task-oriented problem solving—as a way to “look back to check on whether we have identified all relevant options for action” (p. 7).6 In his treatment of reflection, Schön7 particularly focuses on translating it into practice. He calls the critical review, occurring after physicians leave the examining room, reflection-on-action (distinguishing it from reflection-in-action, the thinking on your feet, which occurs in the examining room). He and others suggest that reflection-on-action leads to new action—to acting on reflection. Killion and Todnem8 extended Schön’s work to describe how reflection can facilitate continuous improvement in practice. In fact, the interdependent relationship between reflection and action is formulated by Kolb5 as a cycle, which forms the heart of the learning process.9 For Kolb, effective learners move through the processes of concrete experiences, reflective observation, abstract conceptualization, and active experimentation as they solve everyday problems.
Despite the attention paid to reflection in the educational and medical literature, clinical outcomes of the reflective process have not been well studied and little empirical evidence exists to support its impact on performance.10 This paper reports the results of a study designed to assess the value of reflection in a simulated clinical context by introducing a “reflect–revisit” paradigm into a traditional standardized patient (SP) assessment for third year medical students. The purpose of this study was to see if the reflect–revisit paradigm improved their clinical performance and to analyze their perceptions of this process. If this paradigm led to improved clinical performance, it could have important implications for the way we prepare doctors-in-training for clinical practice.
In 2004 at the conclusion of their third year, 149 medical students at The George Washington University (GW) School of Medicine and Health Sciences participated in this study as part of a required six-station SP formative assessment. This study was approved by the GW Institutional Review Board.
The SP examination consisted of basic clinical skills cases representing the spectrum of third year clerkship experiences. Cases were taken from a variety of case books developed at a number of different institutions and consortia (Table 1). Twenty-four SPs received 6 hours of training from an experienced trainer. Four SPs were trained on each case and performed the case in rotation. Because no cross-training occurred, each student encountered six different SPs.
Medical students were told that this assessment was formative, that they would receive feedback, and that they would be expected to design a plan to address deficiencies.
After every encounter with medical students, SPs completed two evaluation checklists: (1) medical skills, specific to each case, for history and physical examination behaviors; and (2) patient satisfaction, identical for all cases, for global impressions of interpersonal skills. The medical skills checklists contained 12–21 binary scale items determined by the case creators to be most important to the performance of the cases. The checklists were reviewed and modified by consensus by a physician panel from the Baltimore-Washington Consortium of Medical Schools. The patient satisfaction checklist contained nine items on a Likert scale (1=poor, 5=excellent) and was adapted from the American Board of Internal Medicine Patient Satisfaction Questionnaire.11,12
Third year students were arbitrarily assigned to their rotation sequence in cohorts of six, based on alphabetized clerkship groupings. They rotated through the six different SP cases described in Table 1. To simulate real medical practice, students had the option to revisit their last three cases. Because each student in a cohort started with a different case, each rotated through the six cases in a different sequence and their last three cases varied accordingly. For students who began with case one, for example, their last three cases were four, five, and six; for students who began with case four, their last three cases were one, two, and three. This rotation scheme ensured that all cases were open to revisit.
Students completed their first three stations in the standard manner: 15 minutes (maximum) for the SP encounter followed by 5 minutes to complete a guided assessment of the patient. This guided assessment, intended to elicit reflection, asked students to write a differential diagnosis, pertinent positive and negative supportive findings, and a plan for further assessment. While each student completed the guided assessment, each SP assessed the student by completing the two checklists.
For their last three stations, the same procedure was followed except that students could opt to revisit their SPs for an additional 5 minutes. For students who revisited, SPs addressed the same checklists a second time, amending them as warranted. Thus, each student who revisited an SP received a set of first visit ratings as well as a set of revisit ratings on both the medical skills and patient satisfaction checklists. When revisiting, students were unaware of their first visit scores.
Finally, after each of their last three stations, all students were asked to complete an evaluation questionnaire. For students who chose to revisit, the questionnaire had two parts: (1) a Likert scale evaluation of the revisit experience; and (2) two open-ended questions asking them to comment on their positive and negative reactions to the experience. For students who chose not to revisit, the questionnaire asked them why they made that decision.
Changes in scores in the Medical Skills and Patient Satisfaction Checklists between first visit and revisit were tested separately for each of the six cases, as well as globally across the cases, using the Wilcoxon Signed-Rank Test. Analyses were restricted to encounters where the student was offered and accepted the option to revisit. Statistical tests for the six separate cases used the encounter as the unit of analysis. For the global test, scores were first averaged across encounters for each student to avoid violating the statistical independence assumption of the significance test. Prediction of revisit was done by simple logistic regression.
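The global test described above can be illustrated with a short Python sketch. The data, student IDs, and scores below are hypothetical (not study data), and this is not the authors' code; it shows the per-student averaging used to keep the paired observations independent before applying SciPy's Wilcoxon signed-rank test:

```python
# Illustrative sketch only -- hypothetical scores, not study data.
# Paired Wilcoxon signed-rank test of first-visit vs. revisit checklist
# scores, averaging each student's encounters first so that the student
# (not the encounter) is the unit of analysis for the global test.
from collections import defaultdict
from scipy.stats import wilcoxon

# Hypothetical records: (student_id, first_visit_score, revisit_score)
encounters = [
    (1, 11, 12), (1, 12, 12),
    (2, 14, 15), (2, 10, 12),
    (3, 13, 13), (3, 12, 14),
]

# Average each student's paired scores across his or her encounters
by_student = defaultdict(lambda: ([], []))
for sid, first, revisit in encounters:
    by_student[sid][0].append(first)
    by_student[sid][1].append(revisit)

firsts = [sum(f) / len(f) for f, _ in by_student.values()]
revisits = [sum(r) / len(r) for _, r in by_student.values()]

# Paired, nonparametric test of the first-visit vs. revisit difference
stat, p = wilcoxon(firsts, revisits)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```

The per-case tests follow the same pattern, except that each encounter, rather than each student, supplies one pair of scores.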
Qualitative methods were used to analyze the data collected from the evaluation questionnaires completed by the students at the end of the last three encounters. Through inductive analysis the investigators searched the data for clusters and patterns of meaning.13 Researchers independently coded a portion of the data to develop initial codes. Codes were defined through consensus. Open codes led to categories and finally to the development of themes as described by Strauss and Corbin.13 Student comments were coded to the identified themes. The following major steps were taken to maximize the credibility and trustworthiness of the results of this study: (1) two researchers independently analyzed the raw data; (2) interrater reliability (kappa statistic) was measured to ensure the reliability of the coding schema; (3) two researchers acted as “devil’s advocates” throughout the process to rule out alternative hypotheses; (4) low-inference data (i.e., verbatim quotes) were obtained and reported to support the themes that emerged; and (5) a third researcher acted as a peer reviewer to confirm the accuracy of the results obtained.14
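The kappa statistic in step (2) measures interrater agreement corrected for the agreement expected by chance. A minimal sketch with hypothetical theme codings from two raters (illustrative only; the theme labels and codings below are invented, not the study's data):

```python
# Illustrative sketch only -- hypothetical codings, not study data.
# Cohen's kappa: observed agreement between two raters, corrected for
# the agreement expected by chance from each rater's label frequencies.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Proportion of items on which the raters agree
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two raters each assign ten comments to a theme (hypothetical labels)
rater_a = ["decision", "education", "decision", "confidence", "decision",
           "education", "decision", "confidence", "education", "decision"]
rater_b = ["decision", "education", "decision", "confidence", "decision",
           "education", "education", "confidence", "education", "decision"]

kappa = cohen_kappa(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")  # -> kappa = 0.84
```

Values near 1 indicate agreement well beyond chance; by common convention, values above roughly 0.8 are considered excellent.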
The 149 students had three revisit options, resulting in 447 maximum possible revisits. Because of scheduling difficulties and incomplete sessions, the actual number of possible revisits recorded was 430. Of the 430 revisit opportunities, 273 (63%) were accepted and 157 (37%) were declined. Of 149 students taking the examination, 131 (88%) chose to revisit at least once, and only 18 (12%) chose never to revisit.
For encounters where students opted to revisit, medical skills checklist mean scores increased by 0.4 checklist items across the six cases (P=.0001) (Table 2). In each of the six cases, mean scores improved and these changes were all statistically significant.
Patient satisfaction checklist scores increased by 0.2 points across all six cases (P<.0001); however, only four of the six cases showed statistically significant changes. Moreover, mean changes for each case were all small in relationship to the scale range (9–45) and standard deviations.
An alternative way of describing medical skills checklist score changes is in terms of the number of encounters where scores improved by at least one item. Score increases by one or more items occurred in 68 out of 273 encounters. Maximum score increase in an encounter was five checklist items (Table 3).
In a secondary analysis to explore potential influences on students’ decision to revisit, we first looked at the effect of case and then at the effect of first visit scores.
Students did not choose to revisit all cases equally. They opted to revisit at rates varying from a low of 51% for the headache case to a high of 77% for the depression case (Table 2).
With regard to first visit scores, there was a significant inverse relationship between the initial score and the amount of improvement on the revisit score (r=−0.31, P<.001). Students who scored lower on the initial visit improved the most on that encounter. To see if there was a relationship between first visit score and the decision to revisit, logistic regression analyses were calculated for each of the six cases, predicting revisit from initial scores. None of the six equations, however, reached statistical significance. Odds ratios ranged from 1.01 to 1.05.
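The logistic regression used here, predicting the revisit decision from the first visit score, can be sketched as follows. The scores and revisit outcomes are hypothetical and the fit is a generic maximum-likelihood implementation via SciPy, not the authors' analysis code; the reported odds ratios came from the actual study data:

```python
# Illustrative sketch only -- hypothetical data, not the study's analysis.
# Simple logistic regression: does the first-visit score predict the
# decision to revisit (1 = revisited, 0 = declined)?  Fit by minimizing
# the negative log-likelihood; exp(slope) is the odds ratio per
# one-point increase in score.
import math
import numpy as np
from scipy.optimize import minimize

scores  = np.array([10, 12, 14, 11, 13, 15, 9, 12, 14, 10], dtype=float)
revisit = np.array([ 1,  0,  0,  1,  0,  1, 1,  1,  0,  1], dtype=float)

def neg_log_likelihood(params):
    intercept, slope = params
    logits = intercept + slope * scores
    # Bernoulli log-likelihood under the logistic model;
    # logaddexp(0, z) = log(1 + exp(z)) is numerically stable
    return -np.sum(revisit * logits - np.logaddexp(0.0, logits))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0])
intercept, slope = fit.x
print(f"odds ratio per score point = {math.exp(slope):.2f}")
```

An odds ratio near 1 (as in the study's range of 1.01 to 1.05) means the first visit score carried essentially no information about whether a student would choose to revisit.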
Students rated each encounter on a five-point Likert scale as follows: 65% of the encounters (n=176) evoked positive or strongly positive reactions; 34% neutral (n=94); 0.4% (n=1) negative. No student rated the experience as strongly negative. On two occasions, the students did not rate the experience (0.8%).
In the open-ended section of the questionnaire, those who chose to revisit were asked to comment on any positive and negative reactions to the experience. Excellent interrater agreement (kappa=0.94 for positive comments and 0.90 for negative comments) was established for the two coding schemas developed.15
Five major themes emerged from the analysis of the positive comments of students who chose to revisit their SPs: (1) enhanced clinical decision making (n=137); (2) enhanced patient education/counseling (n=82); (3) enhanced student satisfaction/confidence (n=47); (4) enhanced patient satisfaction/confidence (n=40); and (5) enhanced clinical realism (n=12). Table 4 provides sample quotations illustrating the positive themes that emerged.
Only 16% of the 273 revisits generated negative comments. From those comments, three major themes emerged: (1) decreased student confidence/satisfaction (n=11); (2) perceived negative impact on the patient (n=7); and (3) perceived as unnecessary (n=6). Additional comments were reported as “others” (n=19). Table 5 provides sample quotations illustrating the negative themes that emerged.
In the 174 instances in which students chose not to revisit, analysis of their explanations revealed two major themes: (1) no additional medical information was needed for clinical decision making (n=154); and (2) patient-related issues had already been addressed sufficiently (n=22). Table 6 provides sample quotes as evidence of each theme.
This study provides quantitative and qualitative evidence that reflecting and revisiting may enhance the performance of third year medical students on a clinical skills examination. Participation in the reflection and revisit option was high. Of students taking the examination, 88% chose to revisit at least once. In our main quantitative analysis, we tested the hypothesis that for students who reflected and revisited, medical skills and patient satisfaction checklist scores would improve. For those students, this analysis demonstrated a small but statistically significant mean score increase from first visit to revisit on the medical skills checklists, both overall and for each case. While there were some statistically significant improvements on the patient satisfaction checklists, these improvements were very small and, in contrast to the medical skills checklists, not consistent across cases. The degree of difference in the results between the medical skills and patient satisfaction checklists is not surprising. The patient satisfaction inventory contained items unlikely to change with revisit (e.g., “greeted me warmly”). During the revisit, students would be more likely to gain extra history and physical exam points than to alter the SP’s general impression of their interpersonal skills.
Given that the medical skills checklists contained only 12–21 items carefully selected to be vital to the case, an improvement of even one history or physical examination item could be clinically important. Thus, though the degree of improvement after reflect–revisit was small, small improvements in the context of this examination may not be trivial and improvement was noted in 68 out of 273 (25%) of the revisit encounters.
In analyzing the influences on the student’s decision to revisit, the case seemed to exert an effect. Students did not choose to revisit all cases equally, suggesting that their perception of case difficulty may have influenced their revisit decision. The impact of the first visit score on the choice to revisit was more complex. It seemed plausible that low scorers might be aware of their initial poor performance and were thus more likely to revisit. Logistic regression analyses calculated for each of the six cases, however, did not bear this out. In fact, some students with perfect scores opted to revisit while some students with scores of 60% or less declined revisit. This pattern is in accord with the reported tendency of high-end students to underestimate and low-end students to overestimate their performances.16 Further understanding of motivational factors in the revisit decision (e.g., student confidence level, perception of case difficulty, and decision-making style) would be important to explore in future studies.
Although low scores did not significantly predict a decision to revisit, students who scored lowest tended to profit most from the revisit. This improvement could be attributed to statistical artifact (“ceiling effect” or “regression to the mean”), but there are other potential explanations. Some students may think well “on their feet” and some may perform best after deliberation. If so, it would be important to identify these student populations and support their cognitive styles. Alternatively, differences in performance might reflect differences in cognitive development. Those who think well on their feet may be more advanced in their clinical reasoning skills for the specific case.17 Those who score low and improve most after reflection may be less advanced in their clinical reasoning skills or less familiar with the clinical features of the particular case. If so, individualized mentoring of these “low scorers,” after analysis of their guided reflections, could enhance their ability to generate hypotheses early in the encounter and to become more accomplished at reflecting in action.7
In our qualitative analysis, the results of the evaluation questionnaire further supported the value of the reflect–revisit paradigm. Most students who chose to revisit (65%) rated the revisit experience positively. It was no surprise that students perceived that the experience improved their medical decision making and their own clinical confidence. However, the large number of students who saw it as enhancing patient-centeredness was especially interesting. Comments emphasizing patient satisfaction, shared decision making, and clinician–patient rapport suggest clinical value for reflect–revisit because these factors are linked in the literature to improved patient health outcomes.18–21
Themes identified in the qualitative analysis suggest value of the paradigm within a larger educational and clinical context. The Accreditation Council on Graduate Medical Education (ACGME) names “gather essential and accurate information,” “analytical thinking approach,” “counsel and educate patients,” and “responsiveness to the needs of patients” as competency objectives for residency training.22 Themes expressed by the students fall squarely within the boundaries of these objectives. If the student’s self-perceptions expressed in these themes are accurate, reflect–revisit moves students in educational directions highly prioritized by the ACGME and prepares them for their future role as residents.
This pilot study has important limitations. Score increases in the revisit cases might be explained by factors other than reflect–revisit, such as acclimatization to the exam, relaxed time pressure on the revisit stations, regression to the mean, or test–retest bias. Although it is not possible to tease out their degree of contribution, these factors warrant attention in future studies. The score from the medical skills checklist is only a surrogate for medical decision making, limiting the conclusions that can be drawn from the study. The study design was biased against losing points when revisiting a case, which may have resulted in inflated mean revisit scores. Students often chose not to revisit; the findings might have been weaker if the study design had required all students to revisit. Finally, this study was conducted with third year medical students at a private, east coast medical school, limiting its generalizability.
This preliminary study supports the value of reflection and revisiting the “patient” within the context of a particular SP examination. Additional studies are needed to confirm and extend its findings. Future studies could measure the effect of reflect–revisit on end points more closely linked to clinical outcomes, use more challenging clinical scenarios, use more sophisticated reflection protocols, develop a predictive model to identify those students who might benefit most from the reflect–revisit paradigm, or study actual reflect–revisit practices of clinicians.
Although physicians in the real world sometimes revisit their patients after reflecting to clarify data and minimize errors, the reflect–revisit paradigm is not routinely taught in the medical school curriculum, nor is it integrated into SP assessment of students’ clinical skills. Our reflect–revisit approach during a routine SP assessment was well received and provided opportunity for improved performance. Incorporating reflect–revisit and other reflection paradigms into the curriculum could, in the transformative mode of reflection-on-action, alter the way we teach, evaluate, and practice the medical visit.
The authors gratefully acknowledge Karen Richardson-Nassif, PhD, and Richard Riegelman, MD, PhD for reviewing this manuscript and making valuable suggestions. They also wish to thank Afifa Kouj, PhD for her biostatistical contribution and Laura Abate, MA for her library research. Finally, the authors wish to acknowledge the generous support of The George Washington University Clinical Learning and Simulation Skills (CLASS) Center.
Potential Financial Conflicts of Interest None disclosed.