In this study, we designed a questionnaire to assess the major components of our OMP faculty development intervention and used it to obtain faculty self-assessments and resident assessments of faculty before and after the intervention. The questionnaire showed good internal consistency and reliably distinguished trained from untrained faculty for 2 of the 5 microskills of the OMP. Although our faculty groups were small, limiting generalizability, our analysis supports the conclusion that the questionnaire measured teaching behaviors that residents could identify in their preceptors.
Although our intervention group was small, participants were from a university program, a VA, and 2 community programs, representing a diverse group of outpatient preceptors. This questionnaire may be useful to others in assessing the results of faculty development programs based on the 5 microskills of the OMP. It extends the work of others by including faculty self-assessment (rather than faculty satisfaction alone) and by using resident evaluation of faculty teaching as the outcome of interest in faculty development.
Faculty who participated in our workshop felt that they increased their use of the OMP teaching skills over the 6 months that followed. Faculty perception of self-efficacy is critical to continued performance of newly learned skills. Participating faculty self-reported the greatest improvements in the microskills “get a commitment” and “probe for supporting evidence.” These first 2 microskills are designed to help faculty assess the resident's understanding of the case. Making the transition from diagnosing the patient's problem to diagnosing the learner's educational gap is an important step in improving teaching skills. Our intervention appears to have improved our faculty's self-perceived ability to diagnose their learners.
It was more difficult to show that residents perceived improved teaching by their faculty. Residents assessed their faculty in the outpatient environment and were not told which of their teaching faculty had participated in the intervention. They rated their faculty quite highly at baseline on all 5 microskills (3.05 of 4 being the lowest preintervention rating for P faculty) and reported nonsignificant increases in behavior after the intervention. It is possible that our faculty were already so skilled in these teaching techniques that they had little room to improve, but our results may instead reflect residents' overall tendency to grade their faculty favorably, producing a possible ceiling effect. Further study using larger intervention groups across multiple institutions is needed.
Faculty development has been a critical component of medical education for many years, but assessment of faculty development interventions has rarely proceeded beyond measuring immediate postintervention participant satisfaction.16
Ideally, faculty development programs in teaching skills improve teachers' instructional skills and strategies, resulting in residents' improved clinical skills and competence. Few programs have attempted to use residents to assess clinical teachers' skill improvement following participation in a faculty development workshop, and we note here some of the challenges we encountered in attempting this “outcomes” assessment. First, even when faculty truly learn something from a program, it is difficult to measure a specific skill change in the complex clinical teaching environment. Faculty may only partially succeed in incorporating new skills taught during faculty development programs. Because faculty are habituated to a particular teaching practice, they may make early changes after what they consider a successful faculty development intervention and then fall back into previous patterns of behavior if the new skills are not reinforced. Residents evaluating them at any 1 point in time are probably giving a “gestalt” evaluation and may not remember a recent improvement in a specific behavior. Second, teachers' improved instructional competence may be difficult for learners to detect. Learners are rarely trained to observe teaching skills, and even when faculty have successfully incorporated new skills learned during faculty development programs, their learners may not recognize differences in their teaching.17
Third, faculty in ambulatory settings have many competing simultaneous demands while teaching,18
including assuring optimal patient care, meeting the needs of multiple residents, and supporting residents' development as independent practitioners. Even capable teachers may find it difficult to teach consistently enough for residents to assess their predominant instructional behaviors. These and other obstacles to observation may have contributed to our negative results.
In addition to these complexities, feasibility issues compound the problem of outcomes assessment in educational research. Like many medical educators, we had minimal funding for this project and had to make concessions in study design (such as not collecting self-assessments from control faculty) to make the project feasible. We overcame some of these feasibility issues by involving our ambulatory Quality Improvement Team, which understood that improving faculty teaching skills would improve patient care and therefore provided scannable surveys and completed all data entry free of charge. Also, because of precepting needs in our clinics, we found it impossible to randomize faculty, although we achieved good participation rates by providing precepting coverage whenever possible for faculty who attended the workshops. Lastly, shared working and teaching environments make contamination inevitable: learning environments in medical education are complex and dynamic, and new teaching skills cannot be tested without contextual bias. Taken together, these considerations make outcomes assessment a daunting task for medical educators committed to faculty development.
There are several limitations to this study. First, we did not use a randomized design, and faculty who self-selected into the intervention group may have had a naturally greater propensity to maintain and improve their teaching skills. We did, however, provide coverage for all faculty to attend the conference, which may have offered some incentive to participate in the workshops beyond a preexisting commitment to improving their teaching. Second, we collected preintervention faculty self-assessment data at the beginning of each workshop. Because the study was not randomized, we did not know who constituted our control group until after all the workshops were completed and had no easy way to collect self-assessment data from this group, which limits the usefulness of our faculty self-assessment data. Third, we do not know how many of our study subjects had prior training in the OMP or other ambulatory teaching skills. Because the workshop was available to all faculty preceptors in resident continuity clinics, some of our study subjects were generalists and some were subspecialists, so there may have been variability in prior training that we did not capture; however, only 7% of resident questionnaires assessed subspecialty faculty. Fourth, self-assessment data from volunteer participants carry inherent bias, as participants might rate their skills more favorably at follow-up to validate the time spent in the faculty development program. Delaying collection of our follow-up data for 6 months may have minimized this bias. Fifth, residents were untrained observers of faculty teaching and may not have had the observational skills needed to assess it accurately. Future work should include training of learners to improve their assessment of teaching skills. Sixth, although we believe that residents were unaware whether the faculty they evaluated had participated in the workshop, we cannot be completely certain that they were blinded to faculty participation status. Also, resident assessments were anonymous, and it was not feasible to track resident completion of assessments, so we do not know the response rate for resident surveys.
We developed an instrument that appears to distinguish ambulatory preceptors trained in the OMP microskills from faculty untrained in these skills, and we tested it in a mixed university-community sample. Although our intervention did not significantly improve our faculty's teaching skills as rated by residents, we believe that this study is an important step in extending the assessment of faculty development to include resident evaluation of participating faculty. The instruments and techniques that we developed should be tested further with broad faculty development audiences in diverse settings.