BMJ. 2004 October 30; 329(7473): 1029–1032.
PMCID: PMC524561

Evaluating the teaching of evidence based medicine: conceptual framework

Sharon E Straus, associate professor,1 Michael L Green, associate professor,2 Douglas S Bell, assistant professor,3 Robert Badgett, associate professor,4 Dave Davis, professor,5 Martha Gerrity, associate professor,6 Eduardo Ortiz, associate chief of staff,7 Terrence M Shaneyfelt, assistant professor,8 Chad Whelan, assistant professor,9 Rajesh Mangrulkar, assistant professor,10 and the Society of General Internal Medicine Evidence-Based Medicine Task Force

Short abstract

Although evidence for the effectiveness of evidence based medicine has accumulated, there is still little evidence on what are the most effective methods of teaching it.

Interest in evidence based medicine (EBM) has grown exponentially, and professional organisations and training programmes have shifted their agenda from whether to teach EBM to how to teach it. However, there is little evidence about the effectiveness of different methods,1 and this may be related to the lack of a conceptual framework within which to structure evaluation strategies. In this article we propose a potential framework for evaluating methods of teaching EBM. Showing the effectiveness of such teaching methods relies on both psychometrically strong measurements and methodologically rigorous, appropriate study designs; our framework addresses the former.

This effort was initiated by the Society of General Internal Medicine Evidence-Based Medicine Task Force.2 In an attempt to tackle the challenges in designing and evaluating a series of teaching workshops on EBM for busy practising clinicians, the task force created a conceptual framework for evaluating teaching methods. This was done by a working group of clinicians interested in the subject. They completed a literature review of instruments used for evaluating teaching of EBM (manuscript in preparation), and two members of the task force used the information to draft a conceptual framework. This framework and relevant background materials were discussed and revised at a consensus conference including 10 physicians interested in EBM, evaluation of education methods, or programme development. We then sent a revised framework to all members of the task force and six other international colleagues interested in the subject. We incorporated their suggestions into the framework presented in this article.

When formulating clinical questions, advocates of EBM suggest using the “PICO” approach—defining the patient, intervention, comparison intervention, and outcome.3 We used this approach to provide a framework for the evaluation matrix, specifically:

  • Who is the learner?
  • What is the intervention?
  • What is the outcome?

The answers to these three questions form the structure of our conceptual model.

Who is the learner?

Learners can be doctors, patients, policy makers, or managers. This article focuses on doctors, but our evaluation framework could be applied to other audiences.

Figure 1



Not all doctors want or need to learn how to practise all five steps of EBM (asking, acquiring, appraising, applying, assessing).4,5 Indeed, most doctors consider themselves users of EBM, and surveys of clinicians show that only about 5% believe that learning all these five steps is the most appropriate way of moving from opinion based to evidence based medicine.4

Doctors can incorporate evidence into their practice in three ways.3,6 In a clinical situation, the extent to which each step of EBM is performed depends on the nature of the encountered condition, time constraints, and level of expertise with each of the steps. For frequently encountered conditions (such as unstable angina) and with minimal time constraints, we operate in the “doing” mode, in which at least the first four steps are completed. For less common conditions (such as aspirin overdose) or for more rushed clinical situations, we eliminate the critical appraisal step and operate in the “using” mode, conserving our time by restricting our search to rigorously preappraised resources (such as Clinical Evidence). Finally, in the “replicating” mode we trust and directly follow the recommendations of respected EBM leaders (abandoning at least the search for evidence and its detailed appraisal). Doctors may practise in any of these modes at various times, but their activity will probably fall predominantly into one category.

The various methods of teaching EBM must therefore address the needs of these different learners. One size cannot fit all. Similarly, if a formal evaluation of the educational activity is required, the evaluation method should reflect the different learners' goals. Although several questionnaires have been shown to be useful in assessing the knowledge and skills needed for EBM,7,8 we must remember that learners' knowledge and skills targeted by these tools may not be similar to our own. The careful identification of our learners (their needs and learning styles) forms the first dimension of the evaluation framework that we are proposing.

What is the intervention?

The five steps of practising EBM form the second dimension of our evaluation framework. But what is the appropriate dose and formulation? If our learners are interested in practising in the “using” mode, our teaching should focus on formulating questions, searching for evidence already appraised, and applying that evidence. Evaluation of the effectiveness of the teaching should exclusively assess these steps. In contrast, doctors interested in practising in the “doing” mode would receive training in all five steps of practising EBM, and the evaluation of the training should reflect this.

Published evaluation studies of teaching EBM show the diversity of existing teaching methods. Some evaluated the teaching of EBM as an approach to clinical practice, whereas others evaluated training in one of the component skills of EBM, such as searching Medline9 or critical appraisal.10 Indeed, one review of 18 reports of graduate medical education in EBM found that the courses most commonly focused on critical appraisal skills, in many cases to the exclusion of other necessary skills.11 Some studies looked at 90 minute workshops, whereas others included courses held over several weeks to months, thereby increasing the “dose” of teaching. Evaluation instruments should be tailored to the dose and delivery method, thereby assessing outcomes and behaviours that are congruent with the intended objectives.

What are the outcomes?

Effective teaching of EBM will produce a wide range of outcomes. Various levels of educational outcomes could be considered, including attitudes, knowledge, skills, behaviours, and clinical outcomes. The outcome level (the third dimension of the conceptual framework) reflects Miller's pyramid for evaluating clinical competence12 and builds on the competency grid for evidence based health care proposed by Greenhalgh.13 Changes in doctors' knowledge and skills are relatively easy to detect, and several instruments have been evaluated for this purpose.7,8 However, many of these instruments primarily evaluate critical appraisal skills, focusing on the role of “doer” rather than “user.” A Cochrane review of the teaching of critical appraisal found only one study that met the authors' inclusion criteria; it reported that the course studied increased knowledge of critical appraisal.10 With our proposed framework, evaluation of this teaching course falls into the learner domain of “doing,” the intervention domain of “appraisal,” and the outcome domain of “knowledge.”

Changes in behaviours and clinical outcomes are more difficult to measure because they require assessment in the practice setting. For example, in a study evaluating a family medicine training programme, doctor-patient interactions were videotaped and analysed for EBM content.14 A recent before and after study has shown that a multi-component intervention including teaching EBM skills and providing electronic resources to consultants and house officers significantly improved their evidence based practice (Straus SE et al, unpublished data). With our proposed framework, evaluation of this latter teaching intervention would be categorised into the learner domain of “doing.” The intervention domains include all five steps of EBM, and the outcome domain would be “doctor behaviour.”

Implementing the evaluation framework

The EBM task force developed teaching workshops for practising doctors that focused on formulating questions and searching for and applying preappraised evidence. Because these workshops were unlike traditional workshops that focused on the five steps of practising EBM,15 we concluded that evaluation of these workshops must be different. We created an evaluation instrument to detect an effect on learners' EBM knowledge, attitudes, and skills.

When we applied the evaluation framework to our evaluation instrument we found that our learners' goals were different from what we were assessing (table 1). We found that we placed greater emphasis on the skills necessary for practising in the “doing” mode than those required in the “using” mode, whereas the intervention was targeted to improve “user” behaviour. Moreover, the assessment mirrored traditional evaluation methods, focusing on appraisal skills, with little attention paid to question formulation. Finally, we saw that our evaluation predominantly measured skills rather than behaviour. This reflection led us to redesign our evaluation instrument to more closely reflect the learning objectives. We also attempted to show how the evaluation framework could be used—how to move from a concept to actual use (table 2).

Table 1
Application of evaluation framework to SGIM EBM Task Force evaluation tool
Table 2
Application of the conceptual framework for formulating clinical questions

Limitations of this framework

Our model requires that teachers work with learners to understand their goals, to identify in what mode of practice they want to enhance their expertise, and to determine their preferred learning style. This simple model could be expanded to include other dimensions, including the role of the teacher and the “dose” and “formulation” of what is taught. However, our primary goal was to develop a matrix that was easy to use. Although we have applied this framework to several of the published evaluation instruments and have found it to be useful, others may find that it does not meet all of their requirements.

What's next?

While EBM teachers struggle with developing innovative course materials and evaluation tools, we propose a coordinated sharing of these materials in order to minimise duplication of effort. Using the proposed framework as a categorisation scheme, the task force is establishing an online clearinghouse to serve as a repository for evaluations of methods of teaching EBM including details on their measurement properties.2 Teachers will be able to identify evaluation tools that might be useful in their own setting, using the framework to target their needs.
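To illustrate how the framework might work as a categorisation scheme for such a clearinghouse, the three dimensions (learner mode, EBM steps covered, outcome level) could be expressed as simple tags on each evaluation tool, which teachers could then filter against their own course's objectives. The sketch below is purely illustrative: the class names, field names, and example entries are our own hypothetical assumptions, not the task force's actual schema.

```python
from dataclasses import dataclass

# The article's three evaluation dimensions, used as controlled vocabularies.
MODES = {"doing", "using", "replicating"}
STEPS = {"asking", "acquiring", "appraising", "applying", "assessing"}
OUTCOMES = {"attitudes", "knowledge", "skills", "behaviours", "clinical outcomes"}

@dataclass(frozen=True)
class EvaluationTool:
    name: str
    learner_mode: str         # one of MODES
    steps: frozenset          # subset of STEPS the tool assesses
    outcome: str              # one of OUTCOMES

def matching_tools(tools, mode, needed_steps, outcome):
    """Return tools congruent with a teacher's learners, intervention, and outcome."""
    return [t for t in tools
            if t.learner_mode == mode
            and needed_steps <= t.steps      # tool covers all steps taught
            and t.outcome == outcome]

# Hypothetical entries: one appraisal-focused instrument tagged for "doers",
# one checklist tagged for a "using" mode course.
tools = [
    EvaluationTool("Appraisal-focused questionnaire", "doing",
                   frozenset({"appraising"}), "knowledge"),
    EvaluationTool("User-mode skills checklist", "using",
                   frozenset({"asking", "acquiring", "applying"}), "skills"),
]

# A teacher running a "using" mode workshop on question formulation and
# searching would retrieve only the congruent tool.
print([t.name for t in matching_tools(
    tools, "using", frozenset({"asking", "acquiring"}), "skills")])
```

The point of the sketch is the congruence check the article argues for: a tool is only returned when its tagged learner mode, covered steps, and outcome level all match the course being evaluated.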

There is still little evidence about the effectiveness of different teaching methods,1 and attempting to evaluate such teaching is challenging given the complexity of the learners, the interventions, and the outcomes. One way to help meet these challenges is to develop a collaborative research network to conduct multicentre, randomised trials of educational interventions. We invite interested colleagues to join us in developing this initiative and to create the clearinghouse for evaluation tools.

Summary points

There is little evidence about the effectiveness of different methods of teaching evidence based medicine

Doctors can practise evidence based medicine in one of three modes—as a doer, a user, or a replicator

Instruments for evaluating different methods of teaching evidence based medicine must reflect the different learners (their learning styles and needs), interventions (including the dose and formulation), and outcomes that can be assessed

Our framework provides only one way to conceptualise the evaluation of teaching EBM; many others could be offered. We hope that our model serves as an initial step towards discussion and that others will offer their suggestions so that we may work together towards improved understanding of the evaluation process and promote more rigorous research on the evaluation of teaching EBM.

Supplementary Material

Sample questions from the task force's summative evaluation tool appear on

The members of the SGIM EBM Task Force included: Rob Golub, Northwestern University, Chicago, IL; Michael Green, Yale University, New Haven, CT; Robert Hayward, University of Alberta, Edmonton, AB; Rajesh Mangrulkar, University of Michigan, Ann Arbor, MI; Victor Montori, Mayo Clinic, Rochester, MN; Eduardo Ortiz, DC VA Health Centre, Washington, DC; Linda Pinsky, University of Washington, Seattle, WA; W Scott Richardson, Wright State University, Dayton OH; Sharon E Straus, University of Toronto, Toronto, ON. We thank Paul Glasziou for comments on earlier drafts of this article.

Funding: SES is funded by a Career Scientist Award from the Ministry of Health and Long-term Care and by the Knowledge Translation Program, University of Toronto. DSB is funded in part by the Robert Wood Johnson Foundation Generalist Physician Faculty Scholars Program.

Competing interests: None declared.


1. Hatala R, Guyatt G. Evaluating the teaching of evidence-based medicine. JAMA 2002;288: 1110-2. [PubMed]
2. Society of General Internal Medicine. Evidence based medicine. (accessed 1 Oct 2004).
3. Sackett DL, Straus SE, Richardson WS, Rosenberg WMC, Haynes RB. Evidence-based medicine: how to practice and teach EBM. London: Churchill Livingstone, 2000.
4. McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence-based medicine: a questionnaire survey. BMJ 1998;316: 361-5. [PMC free article] [PubMed]
5. McAlister FA, Graham I, Karr GW, Laupacis A. Evidence-based medicine and the practicing clinician: a survey of Canadian general internists. J Gen Intern Med 1999;14: 236-42. [PMC free article] [PubMed]
6. Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. CMAJ 2000;163: 837-41. [PMC free article] [PubMed]
7. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer H, Kunz R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ 2002;325: 1338-41. [PMC free article] [PubMed]
8. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of competence in evidence based medicine. BMJ 2003;326: 319-21. [PMC free article] [PubMed]
9. Rosenberg WM, Deeks J, Lusher A, Snowball R, Dooley G, Sackett D. Improving searching skills and evidence retrieval. J R Coll Physicians Lond 1998;32: 557-63. [PubMed]
10. Parkes J, Hyde C, Deeks J, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database Syst Rev 2001;(3): CD001270. [PubMed]
11. Green ML. Graduate medical education training in clinical epidemiology, critical appraisal and evidence-based medicine: a critical review of curricula. Acad Med 1999;74: 686-94. [PubMed]
12. Miller GE. The assessment of clinical skills/competency/performance. Acad Med 1990;65(9 suppl): S63-7. [PubMed]
13. Greenhalgh T, Macfarlane F. Towards a competency grid for evidence-based practice. J Eval Clin Pract 1997;3: 161-5. [PubMed]
14. Ross R, Verdieck A. Introducing an evidence-based medicine curriculum into a family practice residency—is it effective? Acad Med 2003;78: 412-7. [PubMed]
15. Kunz R, Fritsche L, Neumayer HH. Development of quality assurance criteria for continuing education in evidence-based medicine. Z Arztl Fortbild Qualitatssich 2001;95: 371-5. [PubMed]
