Evidence-based medicine (EBM) is an indispensable tool in clinical practice, yet the teaching and training of EBM to trainee clinicians is, at best, patchy and fragmented. Clinically integrated teaching of EBM is more likely to bring about changes in skills, attitudes and behaviour. Provision of evidence-based health care is the most ethical way to practise, as it integrates up-to-date, patient-oriented research into the clinical decision-making process, thus improving patients' outcomes. In this article, we aim to dispel the myth that EBM is an academic and statistical exercise removed from practice. We provide practical tips for teaching the minimum skills required to ask questions and to identify and critically appraise the evidence, and we present an approach to teaching EBM within the existing clinical and educational training infrastructure.
Evidence-based medicine (EBM), defined as the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients,1 provides clinicians with a means to integrate continuing education and practice improvement into the day-to-day work of their professional lives. Formally, it involves the process of acquiring, systematically reviewing, appraising and applying research findings to aid the delivery of optimum clinical care to patients. It does not exclude clinical expertise but actively encourages incorporating research evidence into decision making. This is something we have always done as clinicians, except that we now have much better tools (computers, the internet) with which to harness evidence, and EBM is increasingly becoming one of the major driving forces in the NHS. It has an impact on resource allocation, delivery and provision of healthcare, policy making and research.2 This means that education and training, in both postgraduate and continuing education, need to develop methods of teaching EBM to trainees.
Despite various criticisms, EBM provides us with a way of keeping our practice up to date in the uncertain world of medicine.3 Not everybody needs to appraise evidence from scratch, but we all invariably need some skills in EBM to remain up to date with new research.4 The need to develop a curriculum outlining the minimum standard requirements for training health professionals in EBM is well recognized.5 The challenge is to engage the healthcare profession in learning EBM and making it part of routine clinical practice.
To teach EBM to trainees, it is crucial to keep the following principles of adult learning theory in mind – the trainees are responsible adults, and need to be taught as such:6
There is empirical evidence that the outcomes of teaching EBM differ markedly between undergraduates and postgraduates, with smaller gains in knowledge among the postgraduates. Adult learning theory suggests that the determinants of learning in the two groups are different, with postgraduate learning tending to be driven by self motivation and relevance to clinical practice, whereas undergraduate learning is generally driven by external factors such as curriculum and examinations.7
Empirical and theoretical research suggests that there is a hierarchy of teaching and learning activities in terms of their educational effectiveness:
Systematic reviews of teaching in EBM have shown that Level 3 activities are effective in improving only the knowledge base; it is the clinically integrated Level 1 teaching activities that bring about changes in skills, attitudes and behaviour.9 Accordingly, in this two-part series we suggest a step-by-step approach to integrating the teaching of EBM into everyday clinical practice and training (Box 1).
The first step in using EBM is identification of a knowledge gap, followed by formulation of a clinical question. Box 2 provides a series of tips for teaching question framing skills to trainees. The traditional method of bedside teaching involves exploring trainees' knowledge and exposing any deficiencies.
More compassionate teachers provide ready-made answers to fill the knowledge gaps before moving to the next patient. This makes matters worse: passive learning, or spoon-feeding of knowledge, is short term, with the information being lost soon after the ward round. How can we change this cycle of inefficient teaching and learning? Start by identifying knowledge gaps as learning opportunities. To have a lasting impact, senior clinicians should act as facilitators, encouraging trainees to learn actively. They should consciously foster a clinical environment in which exposure of a knowledge gap is a cue for initiating the EBM steps outlined in Box 2. As mentioned in the principles of adult learning theory, this should be done sensitively, so that, as adult learners, the trainees see it as a step towards their own professional development rather than as a judgemental or belittling exercise.
The first step in evidence-based practice is to recognize that identification of a knowledge gap should lead the trainee to actively seek answers. Initiating this process is not easy. The teacher needs to help construct an answerable question, as without a clearly focused question it is extremely difficult to find clinically useful answers to help with patient care. Questions arise regularly in busy clinical practice, but they do not always get followed up because of clinical commitments or time constraints. Teachers need an approach that rapidly documents the questions and allocates them to trainees to work on in their own time.
Figure 1 shows the PICO (Population, Intervention, Comparison, Outcome) structure, a simple tool for formulating a clearly focused clinical question. The same structure can also be used to guide the search strategy, as described in the next section.
PICO structures can be used as educational prescriptions, which can be quickly handed over to the trainees during a ward round for them to follow up when an opportunity arises. For example, post-call rounds are a rich environment for producing clinical questions, but the trainees are usually too exhausted during these rounds for this type of exercise. In those circumstances educational prescriptions can be used to record the question for ‘filling’ at a later date. An educational prescription not only specifies the clinical problem that generated the question but also states the question in all of its key elements. It sets a time frame (taking into account the urgency of the clinical problem) and specifies who is responsible for answering it. A sample educational prescription is available at http://www.cebm.utoronto.ca/doc/edupres.pdf.10
Once trainees learn how to formulate a question, they need to learn what to search for, and how and, importantly, where to search for it. As we have seen in the principles of adult learning theory, trainees may come with varying degrees of prior experience of searching the evidence base. Searching for evidence is an art that trainees need to learn in order to get optimum results from the limited time available. Teaching this art should be individualized, as those with prior experience may need only limited guidance whereas the less experienced need more in-depth teaching. Moreover, more experienced trainees should be encouraged to guide and help their peers in small group activities.
Trainees should be encouraged to find evidence summaries or guidelines issued by respectable professional organizations and prepared after taking into consideration the highest quality evidence available. Where such guidelines are not available, they should be advised to find relevant systematic reviews before turning to primary research. Systematic reviews focus on a single question, and identify, appraise, select and synthesize all randomized controlled trials relevant to the question. These are considered to be the highest level of evidence. Where there are no guidelines, evidence summaries or systematic reviews, individual studies should be searched going down the evidence levels ( Table 1).
It becomes extremely laborious and time consuming to look into available evidence without an effective search strategy. From the PICO structure, a number of terms can be selected for the search under each separate heading. There are two types of terms: natural language terms and terms from a controlled vocabulary. The latter are used in some databases to describe or index articles registered in the database; in the Medline database, for example, these index terms are called Medical Subject Headings (MeSH). The selected terms can then be combined with ‘OR’ or ‘AND’ – in this context, these are called Boolean operators. OR should be used to combine synonyms (terms with similar or related meaning); while AND should be used to combine headings ( Figure 2).
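As a concrete illustration, the way OR and AND combine PICO terms can be sketched in a few lines of code. The terms below are invented for illustration and are not a recommended search strategy:

```python
# Sketch of a Boolean search strategy built from hypothetical PICO terms.
# Synonyms within a concept are joined with OR; the concepts themselves
# are then joined with AND.

def combine(terms, operator):
    """Join search terms with a Boolean operator, wrapped in parentheses."""
    return "(" + f" {operator} ".join(terms) + ")"

population = combine(["pregnancy", "pregnant women"], "OR")
intervention = combine(["magnesium sulphate", "magnesium sulfate"], "OR")
outcome = combine(["eclampsia", "seizures"], "OR")

# The final query ANDs the three concepts together.
query = combine([population, intervention, outcome], "AND")
print(query)
```

The same principle applies whether the terms are natural language or controlled vocabulary (e.g. MeSH) terms.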
Systematic reviews can be found in databases such as the Cochrane Library and Medline. The Cochrane Database of Systematic Reviews (CDSR) is a collection of systematic reviews produced by the Cochrane Collaboration with explicit methodology and editorial procedures.
PubMed is another widely used search engine; however, without the proper skills searching PubMed can be a frustrating experience. Trainees should be encouraged to use the Clinical Queries section for more rewarding and time-efficient results; this is a short-cut built into the database which searches for systematic reviews. Box 4 provides a variety of search tips for trainees.
Critical appraisal is the first step in transferring research knowledge into practice. It involves systematically examining research evidence to assess its validity, results, relevance, impact and applicability before using it to inform a decision. Randomized controlled trials and systematic reviews are the highest levels of evidence but they are not automatically of good quality and should always be appraised critically.
There is a misconception that critical appraisal is too mathematical and remote from clinical practice. One of the main challenges of teaching EBM is the fact that it involves basic principles of epidemiology and statistics, both repellent to many doctors.3 But it need not necessarily be so; the facilitator should always be aware of this attitude, and encourage trainees to overcome their inhibitions. From the adult learning theories we know that adults will commit to learning when the goals and objectives are considered realistic and important to them. Trainees as adult learners need to see that learning and their day-to-day activities are related, and that the learning is relevant. Setting up a journal club with the aim of presenting critically appraised topics or articles could be of great help at this stage. The experience of observing their colleagues critically appraising a report with confidence may act as a great incentive, not only to learn so that they can present a paper themselves, but also to see that critical appraisal is an achievable aim.
Three broad issues need to be considered when appraising research articles:
There are numerous quality checklists available but the following questions are the essential ones to ask of any randomized controlled trial:
The questions to consider for systematic reviews are:
A poorly conducted study that scores badly on these questions loses its validity, and its results may not be worth considering. Trainees should be encouraged to go through a quality assessment checklist first and to look into the results only if the answers are satisfactory.
The chance of a particular outcome occurring in an individual is called the risk of the event (the event may be something good or bad). By comparing this risk between the experimental and the control groups, a measure of relative risk can be calculated. A relative risk of 1 occurs when the incidence is the same in both groups. If the intervention leads to more of the outcome measured (e.g. reduction of symptoms), the relative risk will be more than 1; if it leads to less of the outcome measured (e.g. a complication), the relative risk will be less than 1. The effectiveness of a therapy can therefore be expressed as a relative risk.
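A trainee can check this arithmetic directly. The sketch below uses invented trial counts purely for illustration:

```python
# Sketch: relative risk from a two-by-two table (hypothetical counts).

def relative_risk(exp_events, exp_total, ctl_events, ctl_total):
    """Risk in the experimental group divided by risk in the control group."""
    exp_risk = exp_events / exp_total
    ctl_risk = ctl_events / ctl_total
    return exp_risk / ctl_risk

# Hypothetical trial: 15/100 complications on treatment, 30/100 on control.
rr = relative_risk(15, 100, 30, 100)
print(rr)  # 0.5 - the intervention halves the risk of the complication
```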
There will always be some doubt about the result (or best estimate), as a trial only looks at a sample population. The confidence interval indicates the range of doubt around the best estimate. Results are often presented as P values. These describe the probability that a particular result has occurred by chance. If P<0.05, this is described as statistically significant: the results are unlikely to have happened by chance.
‘Odds’ refers to the ratio of the number of people experiencing an event to the number not experiencing the event. The odds ratio compares the ratio of the odds for people in an experimental group versus those in a control group; values greater than 1 indicate that intervention is better than control. If a 95% confidence interval is calculated, statistical significance is assumed if the interval does not include 1.
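A minimal sketch of these calculations, assuming hypothetical counts and using the standard large-sample approximation for the confidence interval of a log odds ratio (Woolf's method):

```python
import math

# Sketch: odds ratio and an approximate 95% confidence interval from a
# two-by-two table. The counts used below are hypothetical.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: events / non-events (experimental); c, d: events / non-events (control)."""
    or_value = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

or_value, lower, upper = odds_ratio_ci(15, 85, 30, 70)
# Statistically significant only if the interval does not include 1.
significant = not (lower <= 1 <= upper)
```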
Table 2a shows a two-by-two table used to compare experimental and control groups' outcomes, while Table 2b provides details of the statistical calculations which can be performed using the data in Table 2a.
Number needed to treat (NNT) is a popular measure of the effectiveness of interventions, as it is much easier to comprehend than some statistical descriptions. NNT can be calculated from raw data using a formula, from published odds ratios (using tables), or from relative risk reduction and expected prevalence (using a nomogram). Another way of calculating NNT is to use the absolute risk reduction (ARR), which can be derived easily from a two-by-two table, such as that in Table 2a. NNT can be calculated from the ARR using the formula NNT = 1/ARR.
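The ARR route can be sketched in a few lines; the counts below are hypothetical:

```python
# Sketch: absolute risk reduction (ARR) and number needed to treat (NNT)
# from a two-by-two table, using hypothetical counts.

def arr_and_nnt(exp_events, exp_total, ctl_events, ctl_total):
    """Return (ARR, NNT) where NNT = 1 / ARR."""
    arr = ctl_events / ctl_total - exp_events / exp_total
    return arr, 1 / arr

# Hypothetical: 30% risk on control, 15% on treatment, so ARR = 0.15.
arr, nnt = arr_and_nnt(15, 100, 30, 100)
print(arr, round(nnt, 1))  # 0.15 6.7 - treat about 7 patients to prevent one event
```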
The 95% confidence interval of the NNT indicates that 19 times out of 20, the ‘true’ value will lie within the specified range.
In a systematic review, meta-analysis is usually presented as a Forest plot ( Figure 3) that can be readily interpreted with these concepts. The Forest plot allows readers to see the information from the individual studies that went into the meta-analysis at a glance. It provides a simple visual representation of the amount of variation between the results of the studies, as well as an estimate of the overall result of all the studies together.13
The left-most column lists the studies (identified by the name of the first author and the year of publication), while the next two columns present the number of events in the intervention and control groups for each included study. In a typical Forest plot, the results (point estimates) of the included studies are shown as squares. The point estimates are usually presented as a relative risk (RR) or odds ratio (OR). A horizontal line runs through each square to show its confidence interval, most commonly a 95% confidence interval.
The combined estimate from the meta-analysis and its confidence interval are put at the bottom, represented as a diamond. The centre of the diamond represents the combined point estimate, and its horizontal tips represent the confidence interval. The vertical line in the middle is the line of no effect. If a point estimate is on the line of no effect (i.e. relative risk 1), no difference in treatment effect between the two comparison groups has been shown in that study. If the diamond – the combined result – crosses the line of no effect, the result is not statistically significant (i.e. the treatment is no more effective than the control). Significance is achieved if the diamond is clear of the line of no effect. The horizontal line at the bottom is the scale measuring the overall treatment effect.
A study may be of good quality and its results may be significant, but are they really relevant: do they apply to the patient or population under consideration? Could the population sample or the practice covered by the review differ from the local population or practice? The senior clinician should always ask trainees these questions to evaluate the applicability of the research findings to the clinical care of an individual patient or to the local population.
In this article we have suggested some practical points to facilitate teaching of the first three steps of EBM (i.e. formulating a clinical question, searching for evidence and critical appraisal; Box 5). We have shown that it is possible to teach trainees the elements of EBM within the existing training and educational infrastructure by thoughtful application of the principles of adult learning theory. In the second article of this series, we shall explore strategies that may be helpful in teaching steps 4 and 5 of EBM – the clinical integration of evidence into everyday clinical practice.
Competing interests None declared
Ethical approval Not required
Contributorship KD, SM and KSK jointly conceived the project. All contributed to drafting the papers, with KD contributing the majority of Part 1 and SM the majority of Part 2. KSK supervised the project