As medical schools turn to community physicians for ambulatory care teaching, assessing the preparation of these faculty in principles of evidence-based medicine (EBM) becomes important.
To determine the knowledge and attitudes of community faculty concerning EBM and their use of EBM in patient care and teaching.
Cross-sectional survey conducted from January to March of 2000.
A clinical campus of a state medical school; a midwestern city of a half-million people with demographics close to national means.
Comparisons of community faculty with full-time faculty in perceived importance and understanding of EBM (5-point scale), knowledge of EBM, and use of EBM in patient care and teaching.
Responses were obtained from 63% (177) of eligible community faculty and 71% (22) of full-time faculty. Community faculty considered EBM skills to be less important for daily practice than did full-time faculty (3.1 vs 4.0; P < .01). Primary care community faculty were less confident of their EBM knowledge than were subspecialty community or full-time faculty (2.9 vs 3.3 vs 3.6; P < .01). Objective measures of EBM knowledge showed primary care and subspecialty community faculty about equal and significantly below full-time faculty (P < .01). Thirty-three percent of community faculty versus 5% of full-time faculty do not incorporate EBM principles into their teaching (P < .01).
Community faculty are not as equipped or motivated to incorporate EBM into their clinical teaching as are full-time faculty. Faculty development programs for community faculty should feature how to use and teach basic EBM concepts.
Many authors have offered suggestions on creating faculty development programs for community-based faculty.1–3 The Icicle Creek Conference on Ambulatory Care Education identified “preceptor characteristics” and “faculty development” as its second and third highest research priorities concerning education in the ambulatory setting.4 However, many studies of community-based preceptors have focused on precepting mechanics5 (i.e., “microskills”) and feedback.6 Studies addressing the skills and knowledge of community-based preceptors in applying the principles of evidence-based medicine (EBM) to their teaching have been lacking.
Although we can teach medical students in the classroom the knowledge base and processes used in the “evidence-based” practice of medicine, they will not value these skills unless they see them modeled by the faculty, including the volunteer community-based faculty. We searched the MEDLINE database back to 1979 for articles about community-based faculty and their faculty development needs in EBM, and found none.
A needs assessment of this kind is critical to faculty development for community-based volunteer faculty. We devised a study aimed at answering these questions: 1) What are the prevailing attitudes of volunteer community-based faculty toward EBM concepts? 2) Does their knowledge and skill level with EBM concepts allow the volunteer community-based faculty to a) apply them in daily practice and b) teach them routinely to medical students and residents? 3) What are the attitudes of volunteer community-based faculty regarding EBM concepts compared to those of their full-time faculty colleagues?
This study was a cross-sectional survey study of our full-time and volunteer community-based faculty utilizing a confidential mailed questionnaire.
University of Kansas School of Medicine–Wichita (UKSM-W) is a unique academic-community consortium whose educational mission is heavily supported by volunteer community-based physicians. Within the Departments of Family Medicine, Internal Medicine, and Pediatrics, 283 volunteer community-based physicians work in conjunction with 31 full-time academic physicians to provide the total clinical experience for 100 third- and fourth-year students and 130 residents. For instance, in the Department of Internal Medicine, volunteer community-based physicians account for nearly 24 of the 36 months of supervisory attending work with residents within the department. Because of these factors, we believed that the community-based faculty affiliated with UKSM-W would support a study with appropriate validity, generalizability, and power.
We obtained mailing lists of all full-time and community-based volunteer teaching faculty affiliated with the Departments of Internal Medicine, Pediatrics, and Family Medicine. Faculty who precepted students and/or residents for less than 6 weeks over the preceding 2 years were excluded.
We designed a questionnaire with 4 main sections. The first section asked respondents to describe their personal and professional characteristics, including age, the number of years in practice, medical specialty and subspecialty, any extra training in statistics and epidemiology, research experience, and the number of hours per week spent in different clinical arenas. The next section asked respondents to rate the importance of different EBM concepts and terms (e.g., “knowing and using the sensitivity and specificity of a test”) in their daily practice, employing a 5-point scale ranging from “Not Important” to “Extremely Important.” It also asked respondents to rate their understanding of a list of EBM concepts and terms using a 5-point scale from “Not well” to “Extremely well.” The third section explored the respondents' incorporation of EBM into their continuing medical education (CME) and teaching activities. The final section of the questionnaire was composed of 7 test questions used to evaluate and stratify the baseline knowledge of EBM content areas. This EBM test was developed and used for 2 years to evaluate the performance of medical students in an EBM course at our medical school. Basic item analyses (face validity, pilot-testing, and discrimination indices) established the ability of the test to discriminate knowledge of the EBM concepts that we expect students to learn in the course.
The questionnaire was pilot tested on a group of 7 Family Practice faculty affiliated with our institution, but located in a different city. The questionnaire was modified based on feedback we received from these faculty members.
The survey was administered from January through March of 2000. A letter of support accompanied the questionnaire from the Dean of UKSM-W. Nonrespondents received second and third requests with duplicate surveys at 4-week intervals. The survey was marked confidential, and respondents were tracked by numerical codes. The UKSM-W Institutional Review Board approved the study.
We used SPSS (Statistical Product and Service Solutions) 7.5 Base for Windows 95 (SPSS, Inc., Chicago, Ill) for statistical analysis. All data entries were double checked for errors. Frequencies of all responses were reviewed for outliers and non-normality. Variables with sparse responses were recoded. Continuous variables were described using distributions, means, medians, standard deviations, and ranges. Skewness and kurtosis statistics were reviewed to determine the normality of continuous variables. When reasonable, continuous variables were recoded into low, medium, and high categories to facilitate the presentation of bivariate and multivariate analyses.
To avoid problems with multiple comparisons, we used factor analysis to collapse the attitudinal scales measuring the understanding and importance in daily practice of EBM terms and concepts into 4 factors. Cronbach's α was used to quantify the internal reliability of each of the 4 factors and to assess the contribution of each question to the overall reliability of each factor. Standard validity procedures were used to ensure the validity and reliability of the factors. In developing the 4 factors, each question making up a factor was required to: 1) have high factor loadings in the factor analysis; 2) have face validity; 3) have, as a factor, a high Cronbach's α; and, 4) have no individual question that could decrease the Cronbach's α of the factor.
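For readers unfamiliar with the internal-consistency statistic used here, Cronbach's α can be computed directly from item responses. The sketch below (in Python, with hypothetical Likert-scale data) is illustrative only and is not part of the study's SPSS analysis:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of items assumed to form one factor.

    items: a list of equal-length lists, one per question, each holding
    the respondents' ratings for that question (hypothetical data).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)                      # number of questions in the factor
    n = len(items[0])                   # number of respondents

    def var(xs):                        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

An α in the 0.80 to 0.91 range, as reported for the 4 factors here, indicates that the questions within each factor measure a common underlying construct.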
Both authors graded the test questions independently; we calculated a κ score for our agreement and met afterward to resolve any differences. A total test score was calculated for respondents completing 4 or more of the 7 questions, with unanswered questions scored as incorrect. Because we were trying to describe the knowledge and attitudes of full-time and volunteer community-based preceptors regarding EBM, our main outcome measures were the 4 factors on the understanding and importance of EBM concepts and terms and the total score on the EBM test.
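The agreement statistic referred to above is Cohen's κ, which corrects the observed agreement between two graders for the agreement expected by chance alone. A minimal sketch (the grader labels are hypothetical):

```python
def cohen_kappa(rater1, rater2):
    """Chance-corrected agreement between two graders' labels.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    """
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # Proportion of items on which the two graders agree.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each grader's marginal label rates.
    p_exp = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

A κ of 1.0 indicates perfect agreement; values near 0 indicate agreement no better than chance.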
χ2 tests were used to compare categorical variables. Student's t tests, analysis of variance testing, and correlation coefficients were used for continuous variables.
We used multivariate analyses to identify variables that were independently associated with the EBM test score. We considered all the variables used in the bivariate analyses as potential candidates. To guard against multicollinearity, all possibly related independent variables were examined in a series of bivariate analyses. A series of forward and backward eliminations was used for multiple linear regression modeling of the relationship of independent variables to the total score on the EBM test. Conventional regression diagnostics were employed to identify potentially influential data points and collinearity.
All statistics were performed utilizing a 2-sided test and a significance level of .05. Assuming a 60% response rate from both volunteer and full-time faculty members, we would have 80% power to detect a 14% difference (P < .05) in scores on the EBM knowledge test between these two groups.
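The power statement above can be illustrated with a standard normal-approximation calculation for comparing two proportions. The function below is a generic sketch; it does not reproduce the study's exact power calculation, whose assumptions are not fully specified in the text:

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n1, n2, z_alpha=1.959964):
    """Approximate power of a two-sided .05-level z test for p1 vs p2.

    p1, p2: the two group proportions (e.g., mean test scores as fractions).
    n1, n2: the two group sizes. z_alpha is the two-sided .05 critical value.
    """
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)
```

When the true proportions are equal, the "power" collapses to roughly the type I error rate; it approaches 1 as the difference or the sample sizes grow.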
One hundred seventy-seven (63%) volunteer community-based faculty and 22 (71%, P = .47) full-time faculty responded to the questionnaire. Twenty-four percent of respondents were female. Specialty representation included: 93 (47%) in Family Practice, 16 (8%) in General Internal Medicine, 45 (22%) in an Internal Medicine subspecialty, 14 (7%) in General Pediatrics, 8 (4%) in a pediatric subspecialty, and 23 (12%) from another discipline (e.g., Emergency Medicine physicians or orthopedic specialists who teach learners from these 3 departments). Nonrespondents did not differ significantly from respondents in gender or departmental representation. The mean age of respondents was 45.6 years, and the mean number of years since residency was 15.2.
Table 1 presents the respondents divided into 3 groups: volunteer primary care (PC) faculty, volunteer subspecialty (SS) faculty, and full-time (FT) faculty, both primary care and subspecialists. There were fewer women represented among the volunteer subspecialty faculty. Most (88%) volunteer primary care faculty had minimal or no research background, and only 20% had been an author on a peer-reviewed paper. Very few (11%) volunteer primary care faculty indicated they had received additional training in statistics or epidemiology.
Respondents were asked to indicate the top 3 methods by which they stayed current and up to date. “Go to meetings,” “read journals weekly,” and “read about my patient” were the most frequently indicated responses. Although the majority of the volunteer faculty had not made EBM a focus for CME, 26% of the volunteer primary care faculty had attended a conference about EBM, 21% had read about EBM concepts, and 16% had read EBM reviews (such as the Journal of Family Practice's “Patient-Oriented Evidence that Matters” or the American College of Physicians Journal Club).
Among volunteer primary care faculty, consulting a textbook was the most popular method (99%) of answering patient care questions; 88% ask a colleague, and 68% request a more formal consultation. Only 26% indicated that they perform their own MEDLINE searches. Sixty-four percent of the volunteer primary care faculty read only 0 to 3 original clinical studies relevant to their practice each month.
The majority of the faculty reported that they encourage medical learners to formulate patient care questions and encourage them to perform literature searches. However, less than half of the volunteer faculty discuss the sensitivity and specificity of diagnostic tests and less than 20% of the volunteer faculty discuss the relative risks and numbers needed to treat of therapies with the medical learners; the responses of the full-time faculty were only slightly better.
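For reference, the two therapy measures named above are simple arithmetic on event rates: the relative risk is the ratio of event rates, and the number needed to treat is the reciprocal of the absolute risk reduction. A minimal sketch with hypothetical event rates:

```python
def relative_risk(treated_rate, control_rate):
    """Ratio of the event rate in the treated group to the control group."""
    return treated_rate / control_rate

def number_needed_to_treat(control_rate, treated_rate):
    """Patients who must be treated to prevent one event: 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)
```

For example, if a hypothetical therapy lowers an event rate from 20% to 10%, the relative risk is 0.5 and 10 patients must be treated to prevent one event.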
We gave the faculty a series of multiple-choice and short-answer questions about core EBM concepts—test questions shown to have discriminant validity in our undergraduate medical school course. The faculty performed well on the concepts of a test's sensitivity and a treatment's relative risk. Faculty's understanding of confidence intervals, of medline searching for diagnostic test studies, and of likelihood ratios was poor.
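The likelihood-ratio concept that faculty found difficult is a two-step odds calculation: post-test odds equal pretest odds multiplied by the likelihood ratio. A minimal sketch:

```python
def posttest_probability(pretest_p, lr):
    """Update a pretest disease probability using a likelihood ratio."""
    pretest_odds = pretest_p / (1 - pretest_p)   # probability -> odds
    posttest_odds = pretest_odds * lr            # apply the likelihood ratio
    return posttest_odds / (1 + posttest_odds)   # odds -> probability
```

For example, a positive likelihood ratio of 3 (as in one of our test questions) moves a 25% pretest probability to a 50% post-test probability, a moderate rather than dramatic shift.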
Only 32% of the respondents attempted to complete the final 2 questions, which asked them to list validity criteria for studies about diagnostic tests and therapy. The two authors graded the short-answer responses independently, using the criteria set forth in the Users' Guides to the Medical Literature developed by the Evidence-Based Medicine Working Group.7,8 The κ scores indicated moderate to substantial agreement between the authors (diagnostic test study responses, κ = 0.46; therapy study responses, κ = 0.68). For these short-answer questions, the faculty correctly listed an average of 1.4 of 4 possible diagnostic test study criteria and 1.8 of 5 possible therapy study criteria.
The results suggested that some respondents gave up on the test without any attempt (7%) or after answering only a few questions (2%). For these respondents, we felt that their incomplete tests may not be an accurate gauge of their EBM knowledge base. In order to give respondents the benefit of the doubt, we considered a test “completed” when respondents attempted to answer 4 or more of the 7 questions (n = 181), and assumed for the score that any unanswered questions were incorrect. Using this methodology, we obtained a mean total test score of 34% correct.
We asked the respondents to rate their understanding and the importance in their daily practice of EBM concepts and terms using 5-point scales (Table 2). The faculty believed they understood well “sensitivity and specificity” and “prevalence” (both 4.0 ± 1.0). Terms they believed they did not understand well (<3.0) included likelihood ratios (2.8 ± 1.2) and odds ratios (2.8 ± 1.3). Faculty considered all the EBM concepts listed to be of at least some importance (>3.0) in their daily practice. Of these, “understanding the relative risks of treatments” (4.0 ± 0.9) received the highest importance. “Knowing a medication's number needed to treat” and “using positive/negative likelihood ratios” received the rating of the lowest importance (both 3.1 ± 1.2).
For further analyses, we hoped to decrease the problem of multiple comparisons on these 19 items. A factor analysis of these items indicated 4 potential factors that seemed to have face validity. We calculated the mean and standard deviations of the variables making up the factors, and found that the factors had very good internal consistency reliability by Cronbach's α testing (0.80 to 0.91).
Because we needed a reference standard with which we might measure the faculty development needs of our volunteer primary care and subspecialty faculty, we compared them to our full-time faculty (Table 3). Volunteer primary care faculty scored significantly lower on 2 of the attitudinal scales than did the full-time faculty (Factor 2: Understanding of statistics used in articles [2.9 ± 1.0 vs 3.6 ± 1.2; P < .01], and Factor 3: Importance of core EBM skills in daily practice [2.9 ± 0.9 vs 4.0 ± 0.7; P < .01]). They also scored lower on the EBM Test Score (0.30 ± 0.17 vs 0.55 ± 0.27; P < .01).
The volunteer subspecialist faculty attitudinal scores were more similar to the full-time faculty scores, except for Factor 3: Importance of core EBM skills in daily practice (3.4 vs 4.0; P < .01). However, their EBM test score was significantly lower (0.34 vs 0.55; P < .01) than the full-time faculty score, and not significantly different from the volunteer primary care faculty score.
We hoped to identify independent characteristics within our volunteer community-based faculty that would be predictive of more EBM knowledge. Therefore, we used a linear multivariate regression analysis (Table 4). The variable “full-time faculty” was kept in the analysis in order to compare the strength of each characteristic to a reference standard. Multivariate analysis demonstrated 5 other characteristics, besides being a full-time member of the faculty, that were independent predictors of the EBM test score. Positive predictors included: having a more extensive research background (none, minimal, moderate, a lot), having a higher score on Factor 1 (understanding of study types/sensitivity and specificity), specializing in Family Practice, and encouraging learners to perform literature searches. The number of years since residency was a negative predictor. This model accounted for 39% of the variance in the EBM test scores of respondents.
This paper is the first to document knowledge of and attitudes toward EBM in a large group of community-based volunteer faculty from the departments of Internal Medicine, Family Medicine, and Pediatrics. The study compares their characteristics to those of the full-time faculty members, utilizing a novel approach. We used scales to assess how important EBM was to their everyday practice and to self-evaluate their own understanding of EBM terminology. We also used a short test to assess their knowledge about EBM. Our findings are instructive.
Fifty-two percent of primary care volunteer community-based faculty had essentially no background in research, more than a 5-fold difference compared to their full-time and volunteer community-based subspecialty counterparts. Fewer PC faculty (21%) admitted reading about EBM concepts, fewer still (16%) read EBM reviews (as in the ACP Journal Club or JFP POEMS), and they read fewer original clinical studies per month than FT or SS faculty. Respondents' reliance on textbooks and colleagues to answer patient care questions echoes a prior study of Canadian internists.9 PC faculty say they do not incorporate EBM into their teaching on a regular basis, have a lower confidence in their own understanding of statistics, and assigned a lower importance to core EBM skills compared to FT or SS faculty. Despite these attitudinal differences, there was no significant difference in the knowledge scores of PC and SS respondents, and both were significantly lower than those of the FT faculty. After we accounted for other characteristics in the multivariate analyses, specializing in Family Practice—a large proportion of the PC faculty—contributed significantly to the overall EBM score.
Having a research background was the strongest independent predictor of the EBM score, other than being a full-time faculty member. A higher self-evaluation of EBM understanding was also highly predictive of success on the test. On the whole, our regression model predicted 39% of the variability in the EBM score. Variables not accounted for by our model would include a lack of incentive to perform well on the knowledge test, differences in how the respondents received training in EBM, and differences in the test-taking abilities of the respondents.
Three quarters of medical school graduates in 1999 believed their instruction during medical school in EBM was “appropriate” in quantity.10 Green surveyed Internal Medicine program directors in 1998 to characterize EBM curricula in residency programs. About one third of the programs had a free-standing EBM curriculum, and most tried to integrate EBM instruction into a variety of learning situations, including attending rounds (84%) and resident report (82%).11 Studies at individual institutions have shown that these EBM curricular efforts have increased learners' confidence in their critical appraisal skills,12,13 and improved their cognitive and technical skills in EBM.14 Controlled trials using an EBM curriculum as an intervention have shown an impact in learners' ability to appraise articles,15 but have not resolved whether behavior at the point of care can be influenced.16
As EBM moves to the forefront of medical education, it is extolled as a new way to teach medicine.17 However, residents' behavior in providing care likely will mirror the patterns modeled by their mentors. While our full-time faculty may be getting the EBM message, the enlistment of community-based preceptors has raised new questions. Can they teach effectively? What should they be teaching? How and when should their faculty development occur? Family Medicine residencies have utilized voluntary community physicians since the inception of their programs thirty years ago; General Internal Medicine and Pediatrics have done so only more recently.
The Preceptor Education Project of the Society of Teachers of Family Medicine is a project started in the early 1990s that is devoted to increasing the teaching skills of office-based Family Practice preceptors.18 The preceptor manual and workshop materials, widely distributed through most of the 120 Family Practice residencies in this country, discuss how “evidence-based clinical practice” is a particularly productive application of collaborative learning. Concomitantly, there has been a dramatic increase in the discussion of EBM principles in the most prominent peer-reviewed Family Practice journals. Based on a MEDLINE search of the 6 leading Family Practice journals (Family Medicine, Journal of Family Practice, Journal of the American Board of Family Practice, American Family Physician, Archives of Family Medicine, and Family Practice Management) using only the keywords “evidence-based medicine,” there was almost no mention of EBM in 1995, compared to 4.1% of all abstracts mentioning it in 2000 (a 6-year average of 2.2%). This compares very favorably with the frequency of such discussions in prestigious general medical journals (JAMA and NEJM, 0.5%) using the same keywords in the same time period. It seems that EBM is beginning to soak into the fabric of family medicine. This major change in emphasis may explain why Family Practice as a specialty was an independent predictor of success on the EBM test in our study.
Not surprisingly, physicians who were farther in years from their training performed more poorly on the EBM test and were less likely to incorporate EBM into their teaching. Previous studies have shown that the knowledge base of physicians decreases with time but can be responsive to interventions.19 Particularly important is the fact that most EBM knowledge dissemination has occurred since 1990, and these data help identify a high-impact group for faculty development in EBM. In Green's study, only about 45% of the Internal Medicine residency programs provided any faculty development in this area.12
This study has some limitations. First, it took place in only 1 city; “Family Practice” as a characteristic might not predict EBM knowledge in studies at other institutions. Second, this was a questionnaire study and is subject to response bias, as well as to the limits of respondents' capacity for self-evaluation. Fortunately, we were able to diminish response bias by achieving a reasonable response rate (64%). Our knowledge test allowed us to check for inflation of respondents' self-reported understanding of EBM content areas. Based on the disparity we observed, we suspect such inflation occurred among the volunteer subspecialty faculty, whose self-rated EBM understanding was statistically as high as that of the full-time faculty but whose knowledge test scores were lower. Also, there are EBM skills that we did not assess in our study, such as articulating answerable questions, searching for evidence, and applying the evidence in decision making. Finally, not all respondents completed the entire EBM test (91% completed 4 or more of the 7 questions), and far fewer attempted the final 2 short-answer questions. However, given that the respondents are mainly busy volunteer faculty, we were impressed by the number who attempted it at all.
In conclusion, we found that community-based volunteer faculty are not as equipped or motivated to incorporate EBM into their clinical teaching as are full-time faculty. We identified which EBM concepts were not disseminated well into the collective knowledge base of community faculty. We found a few characteristics that were independently associated with having a higher knowledge of EBM concepts, including at least some research background, specializing in Family Practice, and the number of years since residency (a negative predictor). As a whole, this needs assessment should be helpful for faculty development planners who create curricula for their community volunteer faculty in using and teaching basic EBM concepts.
The authors would like to thank Dr. Garold Minns and Dr. Rick Kellerman for their support and encouragement. This project was partially funded by a grant from the Wesley Medical Foundation in Wichita, KS.
1. A certain cohort study showed that patients in group 1 had a crude Relative Risk of death of 1.5 compared to patients in group 2. How do you interpret these results? (circle one)
a. Patients in group 1 had a 50% increase in mortality compared to group 2.
b. Patients in group 1 had 150 times more deaths than group 2.
c. Those who died had 1.5 times the risk of being in group 1 as those who lived.
d. I don't know/not sure.
2. A certain study showed a Relative Risk of 2.5 when the experimental group was exposed. The confidence interval was (0.79-5.21). How do you interpret these results? (circle one)
a. Since the Relative Risk was right in the middle of the confidence interval, these results can give strong inference to the general population.
b. The Relative Risk in the study had a range of values from 0.79 to 5.21.
c. The confidence interval included 1.0, which means the relative risk in the study population is not statistically significant.
d. I don't know/not sure.
3. A certain test has a positive likelihood ratio of 3. What does this tell you about the test's ability to change your pretest probability to post-test probability? (circle one)
a. The test will make very large changes in the probability of disease if positive.
b. Those with the disease have 3 times the chance of having a positive test compared to those without the disease.
c. The test will make no change in the probability of disease if positive.
d. Those with a positive test have 3 times the chance of having the disease as those with a negative test.
e. I don't know/not sure.
4. A study showed that a certain test had a sensitivity of 0.93. Which of the following statements is correct? (circle one)
a. The test found 93% of those who had the disease.
b. In the study, 93% of those without the disease had a positive test.
c. In the study, 7% of those who had a positive test had the disease.
d. I don't know/not sure.
5. Identify a Medical Subject Heading (MeSH) term that would NOT be useful in searching the MEDLINE databases for a study about the sensitivity and specificity of a test for a certain disease.
a. Sensitivity and Specificity.
b. Randomized Controlled Trial.
c. Predictive Value of Tests.
d. Cross-Sectional Study.
e. I don't know/not sure.
6. Please list 4 criteria that you would use to assess the validity of the methodology used in a study to determine the sensitivity and specificity of a diagnostic test. You may use any handy lists that you would normally use to evaluate an article.
7. Please list 5 criteria that you would use to assess the validity of the methodology used in a study to determine the effectiveness of a new therapy or medication. You may use any handy lists that you would normally use to evaluate an article.