J Gen Intern Med. 2006 May; 21(5): 410–414.
PMCID: PMC1484803

Measuring Outcomes of a One-Minute Preceptor Faculty Development Workshop

Elizabeth Eckstrom, MD, MPH,1 Lou Homer, MD, PhD,2 and Judith L Bowen, MD3

Abstract

BACKGROUND

Measuring outcomes of faculty development programs is difficult and infrequently attempted beyond measuring participant satisfaction with the program. Few studies have validated evaluation tools to assess the effectiveness of faculty development programs, and learners have rarely participated in assessing improvement of faculty who participate in such programs.

OBJECTIVE

To develop a questionnaire to measure the effectiveness of an enhanced 1-minute preceptor (OMP) faculty development workshop via faculty self-assessment and resident assessment of faculty, and to use the questionnaire to assess an OMP faculty development workshop.

DESIGN AND MEASUREMENTS

We developed and tested a questionnaire to assess the 5 “microskills” of an OMP faculty development program, and performed faculty self-assessment and resident assessment using the questionnaire 6 to 18 months before and 6 to 18 months after our experiential skills improvement workshop.

PARTICIPANTS

Sixty-eight internal medicine continuity clinic preceptors (44 control and 24 intervention faculty) at a university, a Veterans Affairs hospital, and 2 community internal medicine training sites.

RESULTS

Twenty-two participants (92%) completed pre- and postintervention questionnaires. Residents completed 94 preintervention questionnaires and 58 postintervention questionnaires on participant faculty. Faculty reported improvement in behavior following the intervention. Residents reported no significant improvements in faculty teaching behaviors following the intervention.

CONCLUSION

We attempted to rigorously evaluate a faculty development program based on the OMP. Although the intervention did not show statistically significant changes in teaching behavior, we believe that this study is an important step in extending assessment of faculty development to include resident evaluation of participating faculty.

Keywords: faculty development, 1-minute preceptor, clinical teaching, graduate medical education, ambulatory education, skills assessment

Most medical schools and residency programs include structured faculty development for their teaching and research faculty, and many professional societies include some faculty development in teaching skills in their annual meetings.1,2 Some faculty development programs have assessed their effectiveness via evaluation of participant satisfaction with the program and assessment of skills learned, but rarely have resident or student evaluations of faculty been included in the assessments of these programs.3–5 Moreover, few tools have been validated to evaluate faculty development programs. One such tool was developed by Litzelman et al.6 to test the construct validity of an educational framework for the Stanford Faculty Development Program.

We developed an evaluation tool to measure improvement in teaching skills and to assess workshop outcomes via resident evaluations of faculty for a faculty development workshop teaching the 1-minute preceptor (OMP). The OMP is a teaching technique that has been taught in faculty development workshops in family medicine and internal medicine.7 It teaches faculty participants 5 microskills that improve their ability to diagnose learners' knowledge and understanding of clinical cases and direct the faculty to provide feedback, correct mistakes, and teach general rules based on their assessments of learners (see Appendix 1). This framework is thought to be an effective and efficient method for teaching students and residents in the ambulatory setting and has been widely disseminated.8,9 Studies have assessed incorporation of the OMP into teaching encounters10 and measured student ratings of residents after an OMP workshop,11 but none have included resident assessment of faculty following an OMP workshop. We designed a half-day workshop that teaches the OMP using standardized learners (SL) and highly scripted cases to enhance the effectiveness of the workshop.12 This manuscript reports the development of the questionnaire we used to assess faculty teaching skills, presents the results of the behavior component of the workshop intervention, and discusses some of the challenges encountered in performing this outcomes assessment.

METHODS

Study Population

Our study population included all ambulatory preceptors in internal medicine resident continuity clinics at 2 training programs (these included the university hospital, a VA, and 2 community sites). Preceptors were invited to attend the workshops, and in many cases, precepting coverage was provided to encourage attendance. Participation was voluntary. All institutional review boards approved the study.

Study Design

We used a nonrandomized but controlled pre-post study design with a convenience sample of outpatient preceptors from the 4 institutions. Each faculty member participated in 1 workshop, and workshops were given every 6 months until 3 workshops had been completed, to maximize faculty participation. Six to 12 faculty participated in each workshop. Faculty chose whether to participate (“P” faculty) and when to attend a workshop. Faculty who chose never to participate in the workshops were designated control (“C”) faculty. Residents from all continuity clinics completed anonymous evaluations of their preceptors, the study faculty. Residents worked with both control and intervention faculty, and were not told which of their faculty had participated in the workshops. Because residents chose which faculty to assess, they also completed assessments of the study investigators (designated “G” faculty). There were 6 G faculty, all with extensive experience using the OMP and significant ambulatory precepting responsibilities.

The Enhanced OMP Workshop

Our workshops are described in detail elsewhere.12 We taught each of the 5 microskills of the OMP during brief didactic sessions. Most of the workshop time was devoted to case-based practice. We trained chief residents and general medicine fellows to act as SL.13,14 Workshop participants divided into groups of 2 plus an SL. Participants responded to the SL's case presentations, practicing their use of the OMP skills in teaching their SL.

Survey Tools

We collected faculty self-evaluations of their skills using the OMP before and after the workshop. Residents completed anonymous evaluations of their continuity preceptors every 6 months for 2 years, for a total of 4 evaluation periods. The initial resident evaluations were collected before the start of the OMP workshops and the final evaluations were collected 6 months after the last workshop. These evaluations spanned our faculty development program and served as pre- and postresident evaluations of the faculty participants; evaluations of nonparticipating faculty served as evaluations of control faculty. For faculty who attended an early workshop, resident evaluations were collected immediately before the workshop and for up to 18 months afterward; for faculty who attended a late workshop, resident evaluations were collected up to 18 months before the intervention and only up to 6 months afterward.

Faculty Self-Assessment

We designed this questionnaire based on each of the 5 OMP microskills. Whenever possible, we used questions that had been developed previously.11,15 For each microskill, we developed 2 to 5 questions that related to tasks inherent in that microskill. Using a Likert scale, we asked faculty to rate their frequency and comfort in using each of the 5 microskills. First, faculty assessed how often they performed a skill (1 = “almost never,” 4 = “most of the time”). Second, faculty rated themselves on how comfortable they felt performing a skill (1 = “very uncomfortable,” 4 = “very comfortable”). Faculty completed the self-assessment questionnaire just before the faculty development workshop and 6 months later (Appendix 1).

Resident Assessment of Faculty

Resident questionnaires directly mirrored the faculty questionnaires, except for the attitude questions, which asked about “satisfaction with teaching” instead of “comfort in performing a teaching behavior.” For frequency of preceptor behaviors, residents were asked “How often does your preceptor do the following …,” with responses from 1 to 4 (1 = “almost never [less than 10% of preceptor encounters]”; 4 = “most or all of the time [greater than 60% of preceptor encounters]”). Satisfaction questions were asked as follows: “How satisfied are you with this aspect of the teaching encounter…” (1 = “very unsatisfied”; 4 = “very satisfied”). We report only frequency data for the behavior questions, as these questions were identical on the faculty and resident questionnaires.

Testing of the Questionnaire

We developed 2 to 5 questions intended to directly measure the teaching behaviors of each of the 5 microskills of the OMP. Microskill 2 (“probe for supporting evidence”) inherently contains several potential teaching behaviors (e.g., probing the learner's reasoning in making a new diagnosis, prioritizing management in patients with multiple chronic conditions, applying the medical literature in making diagnostic or therapeutic decisions). For this reason, microskill 2 started with 5 questions. Microskill 3 (“teach a general rule”) contained similar complexity, and we initially developed 4 questions to test this microskill.

To test the questionnaire's ability to discern between faculty trained in the microskills and those untrained in the microskills, we predicted that C faculty (who never received the intervention) would receive lower scores from residents on the microskills than G faculty (who had significant prior experience using the 5 microskills). Thus, we compared the results of resident evaluations of C and G faculty looking for substantial differences between the groups on all 5 microskills.
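A minimal sketch of this kind of group comparison is shown below. It assumes a hypothetical long-format table of resident evaluations with a faculty group column and per-microskill mean scores (columns m1 through m5); the file name, column names, and use of Welch's unpaired t test are illustrative assumptions rather than a description of the study's actual analysis.

```python
# Sketch (illustrative, not the study's analysis code): compare resident
# ratings of control (C) and experienced investigator (G) faculty on each
# microskill, assuming one row per resident evaluation with columns
# "group" and per-microskill scores "m1".."m5".
import pandas as pd
from scipy import stats

evals = pd.read_csv("resident_evaluations.csv")  # hypothetical file

for skill in ["m1", "m2", "m3", "m4", "m5"]:
    c = evals.loc[evals["group"] == "C", skill].dropna()
    g = evals.loc[evals["group"] == "G", skill].dropna()
    t, p = stats.ttest_ind(g, c, equal_var=False)  # Welch's unpaired t test
    print(f"{skill}: C mean {c.mean():.2f}, G mean {g.mean():.2f}, p = {p:.3f}")
```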

We then tested the questions related to each microskill for internal consistency, that is, the extent to which the questions worked together to measure the same construct (microskill). We used data from 444 resident evaluations of C, G, and P faculty, and calculated Cronbach's α, a measure of internal consistency, for each of the 5 microskills. Finally, in an attempt to optimize the efficiency of the questionnaire, we removed questions developed for microskills 2 and 3, the ones that we thought contained the most complexity, until we had the fewest remaining questions that maintained a good Cronbach's α for that microskill.
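The sketch below illustrates how Cronbach's α and a backward item-removal loop of this kind can be computed, assuming a hypothetical respondents-by-items matrix of Likert scores and a 0.03 tolerance per removal step; it shows the general technique, not the code used in this study.

```python
# Sketch: Cronbach's alpha and stepwise item removal for one microskill,
# assuming "items" is an (n_respondents x n_items) matrix of Likert scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def prune_items(items: np.ndarray, max_drop: float = 0.03, min_items: int = 2):
    """Remove items one at a time while alpha falls by no more than max_drop."""
    keep = list(range(items.shape[1]))
    while len(keep) > min_items:
        current = cronbach_alpha(items[:, keep])
        # Try removing each remaining item; keep the removal that hurts alpha least.
        trials = [(cronbach_alpha(items[:, [j for j in keep if j != i]]), i)
                  for i in keep]
        best_alpha, drop = max(trials)
        if current - best_alpha > max_drop:
            break
        keep.remove(drop)
    return keep  # indices of the retained questions
```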

Assigning “Pre” and “Post” Status to Resident Evaluations

For P faculty, resident evaluations collected before the faculty member's participation were considered “pre” evaluations and evaluations collected after the faculty member's participation were considered “post” evaluations. As C and G faculty did not receive the intervention, they did not have designated pre-post resident evaluations. In order to assess trends in teaching behaviors for the C and G groups during the timeframe of the study, the first 2 resident assessment periods were considered “pre” evaluations and the second 2 assessment periods were considered “post” evaluations.
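A minimal sketch of this labeling rule follows; the function and field names are hypothetical and are included only to make the rule explicit.

```python
# Sketch of the pre/post labeling rule described above (hypothetical names):
# P faculty are split around their own workshop date, while C and G faculty
# use the study midpoint (evaluation periods 1-2 = "pre", 3-4 = "post").
from datetime import date

def prepost_label(group, eval_period, eval_date=None, workshop_date=None):
    if group == "P":
        # Participant faculty: compare evaluation date with their workshop date.
        return "pre" if eval_date < workshop_date else "post"
    # Control and investigator faculty: first 2 periods "pre", last 2 "post".
    return "pre" if eval_period <= 2 else "post"

prepost_label("C", eval_period=3)                            # -> "post"
prepost_label("P", 2, date(2003, 1, 10), date(2003, 6, 1))   # -> "pre"
```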

Statistical Analysis

We compared pre- and postintervention questionnaires from the intervention (P) faculty for self-assessed improvement in teaching behaviors using paired t tests. For resident questionnaires, we computed mean change scores for each microskill using only those questions related to that microskill and compared the P and C faculty groups using unpaired t tests. All analyses are reported at the .05 significance level using 2-tailed tests.
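The sketch below illustrates the general form of this analysis plan, assuming hypothetical file and column names for the self-assessment and resident change-score data; it is an illustration of paired and unpaired t tests on subscale scores rather than the actual study code.

```python
# Sketch of the analysis described above (hypothetical data layout):
# paired t tests on participant (P) faculty pre/post self-assessments and
# unpaired 2-tailed t tests comparing resident-rated change scores between
# P and C faculty, for each microskill subscale.
import pandas as pd
from scipy import stats

ALPHA = 0.05
skills = ["m1", "m2", "m3", "m4", "m5"]

# Faculty self-assessment: one row per P faculty member, pre/post subscale means.
self_eval = pd.read_csv("faculty_self_assessment.csv")  # hypothetical file
for s in skills:
    t, p = stats.ttest_rel(self_eval[f"{s}_post"], self_eval[f"{s}_pre"])
    print(f"self-assessment {s}: p = {p:.3f} ({'sig' if p < ALPHA else 'ns'})")

# Resident evaluations: per-faculty change scores (post mean minus pre mean).
change = pd.read_csv("resident_change_scores.csv")  # hypothetical file
for s in skills:
    p_change = change.loc[change["group"] == "P", s]
    c_change = change.loc[change["group"] == "C", s]
    t, p = stats.ttest_ind(p_change, c_change)  # unpaired, 2-tailed by default
    print(f"resident-rated {s}: P vs C change, p = {p:.3f}")
```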

RESULTS

Participant Demographics

General internal medicine and medicine subspecialty faculty who precepted residents in continuity clinic were eligible to participate in the study. Demographic data for participating and control faculty are presented in Table 1.

Table 1
Participant (P) and Control (C) Demographics

Testing of the Questionnaire

In testing the ability of the questionnaire to discern between faculty trained in the OMP microskills (G faculty) and faculty untrained in the microskills (C faculty), residents judged G faculty performance as superior in 2 of the 5 microskills (Table 2). Cronbach's α testing for internal consistency showed moderate-to-good internal consistency (Table 2). For the microskills that initially contained more questions (microskills 2 and 3), removal of 3 and 2 questions (respectively) decreased the α by no more than 0.03 at each step, and we retained the 2 questions that maintained a good α for each microskill. Appendix 1 also shows which questions remained in the final 11-question version of the questionnaire.

Table 2
Testing of Questionnaires: Comparison Between Control (C) and Investigator (G) Faculty and Cronbach's α Scores*

Faculty Self-Assessment

Twenty-three preintervention questionnaires (96%) and 23 postintervention questionnaires (96%) were returned. Using only the 22 paired faculty pre- and post-self-assessments, calculated change scores improved for all 5 microskills, and 3 of the microskills showed statistically significant improvements (“get a commitment,” “probe for supporting evidence,” and “give positive reinforcement,” Table 3).

Table 3
Participant Faculty Self-Assessment*

Resident Assessment of Faculty

Residents returned a total of 494 faculty evaluations. Only the 444 (90%) evaluations that did not have missing data were included in the analysis. These included 94 (of 107, 88%) preintervention and 58 (of 64, 91%) postintervention evaluations on intervention P faculty and 220 (of 239, 92%) evaluations on control C faculty. Residents reported that P faculty improved in 4 of 5 microskills following training, but the 2-tailed t test did not reach statistical significance for any factor (Table 4). Residents' ratings of control C faculty declined slightly over the time period of the study (data not shown).

Table 4
Resident Assessment of Intervention “P” Faculty*

DISCUSSION

In this study, we designed a questionnaire to assess the major components of our OMP faculty development intervention, and used the questionnaire to obtain faculty self-assessment and resident assessment of faculty before and after the intervention. Our questionnaire showed good internal consistency and reliably distinguished trained and untrained faculty for 2 of the 5 microskills of the OMP. Although our faculty groups were small, limiting generalizability, our analysis supports the conclusion that our questionnaire was measuring teaching behaviors that residents could identify in their preceptors.

Although our intervention group was small, participants were from a university, a VA, and 2 community programs, representing a diverse group of outpatient preceptors. This questionnaire may be useful to others in assessing the results of faculty development programs using the 5 microskills of the OMP. In addition to including faculty self-assessment (rather than just faculty satisfaction), it extends the work of others by including resident evaluation of faculty teaching as the outcome of interest in faculty development.

Faculty who participated in our workshop felt that they increased their use of the OMP teaching skills over the next 6 months. Faculty perception of self-efficacy is critical to continued performance of newly learned skills. Participating faculty self-reported making the biggest improvements in the microskills “get a commitment” and “probe for supporting evidence.” These first 2 microskills are designed to help faculty assess the resident's understanding of the case. Making the transition from diagnosing the patient's problem to diagnosing the learner's educational gap is an important step in improving teaching skills. Our intervention appears to have improved our faculty's self-perceived abilities to diagnose their learners.

It was more difficult for us to show that residents perceived improved teaching by their faculty. Residents assessed their faculty in the outpatient environment and were not told which of their teaching faculty participated in the intervention. They rated their faculty quite high at baseline on all 5 microskills (3.05 of 4 being the lowest preintervention rating for P faculty), and reported nonsignificant increases in behavior after the intervention. It is possible that our faculty were already so skilled in these teaching techniques that they did not need improvement, but our results may represent residents' overall tendency to grade their faculty favorably, resulting in a possible ceiling effect. Further study using larger intervention groups across multiple institutions is needed.

Faculty development has been a critical component of medical education for many years, but assessment of faculty development interventions has rarely proceeded beyond assessing immediate postintervention participant satisfaction.16 Ideally, faculty development programs in teaching skills improve teachers' instructional skills and strategies, resulting in residents' improved clinical skills and competence. Few programs have attempted to use residents to assess clinical teachers' skill improvement following faculty development workshop participation, and we note here some of the challenges we encountered in attempting this “outcomes” assessment. First, even though faculty may truly learn something from the program, it is difficult to measure a specific skill change in the complex clinical teaching environment. Faculty may only partially succeed in incorporating new skills taught during faculty development programs. Because faculty are habituated to a particular teaching practice, they may make early changes after what they consider a successful faculty development intervention, and then fall back into previous patterns of behavior if the new skills are not reinforced. Residents evaluating them at any 1 point in time are probably giving a “gestalt” evaluation, and may not remember a recent improvement in a certain behavior. Second, teachers' improved instructional competence may be difficult to measure. Learners are rarely trained to observe teaching skills, and although faculty may have successfully incorporated new skills learned during faculty development programs, their learners may not recognize differences in their teaching.17 Third, faculty in ambulatory settings have many competing simultaneous demands while teaching,18 including assuring optimal patient care, meeting the needs of multiple residents, and supporting the residents' development as independent practitioners. Even capable teachers find it challenging to teach consistently enough for residents to assess their predominant instructional behaviors. These and other observational challenges may have contributed to our negative results.

In addition to these complexities, feasibility issues compound the problem of outcomes assessment in educational research. We, like many medical educators, had minimal funding for this project and had to make concessions (such as not collecting self-assessments on control faculty) in study design to make the project feasible. We overcame some of our feasibility issues by involving our ambulatory Quality Improvement Team in the project. They understood that improving faculty teaching skills would improve patient care, and provided scannable surveys and completed all the data entry for us free of charge. Also, due to precepting needs in our clinics, we found it impossible to randomize faculty, although we achieved good participation rates by providing precepting coverage whenever possible for faculty who attended the workshops. Lastly, shared working and teaching environments make contamination inevitable. Learning environments in medical education are complex and dynamic, and new teaching skills cannot be tested without contextual bias. When taken together, all of these considerations make the idea of outcomes assessment a daunting task for medical educators committed to faculty development.

There are several limitations to this study. First, we did not use a randomized design, and faculty who self-selected into the intervention group may have naturally had a greater propensity to maintain and improve their teaching skills. We did provide precepting coverage for all faculty to attend the workshops, so all faculty had some incentive to participate. Second, we collected preintervention faculty self-assessment data at the beginning of each workshop. As the study was not randomized, we did not know who our control group was until after all the workshops were completed, and had no easy way to collect self-assessment data on this group. This limits the usefulness of our faculty self-assessment data. Third, we do not know how many of our study subjects had prior training in the OMP or other ambulatory teaching skills. Because the workshop was available to all faculty preceptors in resident continuity clinics, some of our study subjects were generalists and some subspecialists, so there may have been variability in the amount of prior training that we did not capture. However, only 7% of resident questionnaires assessed subspecialty faculty. Fourth, self-assessment data of volunteer participants have inherent bias, as participants might rate their skills more favorably in follow-up to validate the time spent in the faculty development program. Delaying collection of our follow-up data for 6 months may have minimized this bias. Fifth, residents were untrained observers of faculty teaching skills and may not have had the necessary observational skills to accurately assess them. Future work should include training of learners to improve their assessment of teaching skills. Sixth, although we believe that residents were unaware whether the faculty they evaluated had participated in the workshop, we cannot be completely certain they were blinded to faculty participation status when they evaluated them. Also, resident assessments were anonymous, and it was not feasible for us to track resident completion of their assessments, so we do not know the percentage response rate for resident surveys.

We developed an instrument that appears to distinguish ambulatory preceptors trained in the OMP microskills from faculty who are untrained in these skills, and tested it in a mixed university-community sample. Although our intervention failed to significantly improve teaching skills in our faculty as rated by residents, we believe that this study is an important step in extending assessment of faculty development to include resident evaluation of participating faculty. The instruments and techniques that we developed should be tested further using broad faculty development audiences in diverse settings.

Acknowledgments

Financial support: This work was supported in part by funding from HRSA contract 240-97-0044 “Faculty Development for General Internal Medicine: Generalist Faculty Teaching in Community-based Ambulatory Settings” made to the Association of Professors of Medicine.

Supplementary Material

The following supplementary material is available for this article online at www.blackwell-synergy.com

Appendix 1

Faculty Self-Assessment Survey

REFERENCES

1. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73:387–96. [PubMed]
2. Wilkerson L, Armstrong E, Lesky L. Faculty development for ambulatory teaching. J Gen Intern Med. 1990;5:S44–S53. [PubMed]
3. Salerno SM, Jackson JL, O'Malley PG. Interactive faculty development seminars improve the quality of written feedback in ambulatory teaching. J Gen Intern Med. 2003;18:831–34. [PMC free article] [PubMed]
4. Green ML, Gross CP, Kernan WN, Wong JG, Holmboe ES. Integrating teaching skills and clinical content in a faculty development workshop. J Gen Intern Med. 2003;18:468–74. [PMC free article] [PubMed]
5. Hewson MG, Copeland HL, Fishleder AJ. What's the use of faculty development? Program evaluation using retrospective self-assessments and independent performance ratings. Teach Learn Med. 2001;13:153–60. [PubMed]
6. Litzelman DK, Westmoreland GR, Skeff KM, Stratos GA. Student and resident evaluations of faculty—how dependable are they? Acad Med. 1999;74:S25–S7. [PubMed]
7. Neher JO, Gordon KC, Meyer B, Stevens N. A five-step “microskills” model of clinical teaching. J Am Board Fam Pract. 1992;5:419–24. [PubMed]
8. Irby DM, Aagaard E, Teherani A. Teaching points identified by preceptors observing one-minute preceptor and traditional preceptor encounters. Acad Med. 2004;79:50–55. [PubMed]
9. Aagaard E, Teherani A, Irby DM. Effectiveness of the one-minute preceptor model for diagnosing the patient and the learner: proof of concept. Acad Med. 2004;79:42–9. [PubMed]
10. Salerno SM, O'Malley PG, Pangaro LN, Wheeler GA, Moores LK, Jackson JL. Faculty development seminars based on the one-minute preceptor improve feedback in the ambulatory setting. J Gen Intern Med. 2002;17:779–87. [PMC free article] [PubMed]
11. Furney SL, Orsini AN, Orsetti KE, Stern DT, Gruppen LD, Irby DM. Teaching the one-minute preceptor. A randomized controlled trial. J Gen Intern Med. 2001;16:620–4. [PMC free article] [PubMed]
12. Bowen JL, Eckstrom E, Muller M, Haney E. Enhancing the effectiveness of one-minute preceptor faculty development workshops. Teach Learn Med. 2006;18:35–41. [PubMed]
13. Gelula MH, Yudkowsky R. Using standardized students in faculty development workshops to improve clinical teaching skills. Med Educ. 2003;37:621–9. [PubMed]
14. Lesky LG, Wilkerson L. Using standardized students to teach a learner-centered approach to ambulatory precepting. Acad Med. 1994;69:955–7. [PubMed]
15. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73:688–95. [PubMed]
16. Reid A, Stritter FT, Arndt JE. Assessment of faculty development program outcomes. Fam Med. 1997;29:242–7. [PubMed]
17. Skeff KM. Evaluation of a method for improving the teaching performance of attending physicians. Am J Med. 1983;75:465–70. [PubMed]
18. Irby DM. Teaching and learning in ambulatory care settings: a thematic review of the literature. Acad Med. 1995;70:898–931. [PubMed]
