Nowadays few would argue against the need to base clinical decisions on the best available evidence. In practice, however, clinicians face serious challenges when they seek such evidence.
Research-based evidence is generated at an exponential rate, yet it is not readily available to clinicians, and when it is available, it is applied infrequently. A systematic review1 of studies examining the information-seeking behaviour of physicians found that the information resource physicians consult most often is textbooks, followed by advice from colleagues. The textbooks we consult are frequently out of date,2 and the advice we receive from colleagues is often inaccurate.3 Similarly, nurses and other health care professionals refer to evidence from systematic reviews only infrequently in clinical decision-making.4,5
The sheer volume of research-based evidence is one of the main barriers to better use of knowledge. About 10 years ago, general internists who wanted to keep abreast of the primary clinical literature would have needed to read 17 articles daily.6 Today, with more than 1000 articles indexed daily by MEDLINE, that figure is likely double. The problem is compounded by the fact that clinicians can rarely spare more than a few seconds at a time in their practices to find and assimilate evidence.7 These challenges highlight the need for better infrastructure for managing evidence-based knowledge.
Some experts suggest that clinicians seeking answers to clinical questions should look first for systematic reviews.8 Research synthesized in this way provides a base of evidence for clinical practice guidelines. But there are many barriers to clinicians’ direct use of systematic reviews and primary studies. Clinical practitioners lack ready access to current research-based evidence,9,10 the time needed to search for it, and the skills needed to identify it, appraise it and apply it in clinical decision-making.11,12 Until recently, training in the appraisal of evidence has not been a component of most educational curricula.11,12 In one study of the use of evidence, clinicians took more than 2 minutes to identify a Cochrane review and its clinical bottom line; this resource was therefore frequently abandoned in “real-time” clinical searches.7 In another study, Sekimoto and colleagues13 found that the physicians they surveyed believed that a lack of evidence for the effectiveness of a treatment was equivalent to the treatment being ineffective.
Often, the content of systematic reviews and primary studies is not sufficient to meet the needs of clinicians. Although criteria have been developed to improve the reporting of systematic reviews,14 their focus has been on the validity of evidence rather than on its applicability. Glenton and colleagues15 described several factors hindering the effective use of systematic reviews for clinical decision-making. They found that reviews often lacked details about interventions and did not provide adequate information on the risks of adverse events, the availability of interventions and the context in which the interventions may or may not work. Glasziou and colleagues16 observed that, of 80 studies (55 single randomized trials and 25 systematic reviews) of therapies published over 1 year in Evidence-Based Medicine (a journal of secondary publication), elements of the intervention were missing in 41. Of the 25 systematic reviews, only 3 contained a description of the intervention that was sufficient for clinical decision-making and implementation.
Those who publish and edit research-based evidence should focus on the “3 Rs” of evidence-based communication: reliability, relevance and readability. Evidence is reliable if it can be shown to be highly valid. The methods used to generate it must be explicit and rigorous, or at least the best available. To be clinically relevant, material should be distilled and indexed from the medical literature so that it consists of content that is specific to the distinct needs of well-defined groups of clinicians (e.g., primary care physicians, hospital practitioners or cardiologists). The tighter the fit between information and the needs of users, the better. To be readable, evidence must be presented by authors and editors in a format that is user-friendly and that goes into sufficient detail to allow implementation at the clinic or bedside.
When the 3 Rs must be balanced against one another, reliability should trump relevance, and both should trump readability.
Ideally, resources become more reliable, relevant and readable as we move up the 5S pyramid. At the bottom of the pyramid are all of the primary studies, such as those indexed in MEDLINE. At the next level are syntheses: systematic reviews of the evidence relevant to a particular clinical question. This level is followed by synopses, which provide brief critical appraisals of original articles and reviews. Examples of synopses appear in evidence-based journals such as ACP Journal Club (www.acpjc.org). Summaries provide comprehensive overviews of the evidence related to a clinical problem (e.g., gout or asthma) by aggregating evidence from relevant synopses, syntheses and studies at the lower levels.
Given the challenges of doing a good MEDLINE search, it is best to start at the top of the pyramid and work down when trying to answer a clinical question. At the top of the pyramid are systems such as electronic health records, in which clinical data are linked electronically with relevant evidence to support evidence-based decision-making. Computerized decision-support systems such as these are still rare, so searches for evidence usually start at the second level from the top of the pyramid. Examples at this level include online summary publications, such as Dynamed (www.ebscohost.com/dynamed) and ClinicalEvidence (http://clinicalevidence.bmj.com/ceweb/index.jsp), which are evidence-based, frequently updated and available for a widening range of clinical topics. Online services such as EvidenceUpdates (http://plus.mcmaster.ca/evidenceupdates), which include studies and syntheses rated for quality and relevance, with links to synopses and summaries, have recently become available with open access.
Evidence-based information resources are not created equal. Users at any of the levels just described must ensure that evidence is reliable by being aware of the methods used to generate, synthesize and summarize it. They should know that a resource is not necessarily evidence-based just because it cites references, and that the use of “evidence-based” in a title does not make it so. One publisher stated that sales can be enhanced by placing the term “evidence-based” in the title of a book (Mary Banks, Senior Publisher, BMJ Books, London, UK: personal communication, 2009). Rating scales that we find useful for evidence summaries and research articles are provided in Box 1 and Table 1.
Box 1: A minimum checklist for assessing evidence-based medical texts and meta-resources (e.g., listings or search engines for other resources).
Promoting specialized search methods and making high-quality, evidence-based information resources available may help clinicians find correct answers more often. In a small study of information retrieval by primary care physicians who were observed using their usual sources for clinical answers (most commonly Google and UpToDate), McKibbon and Fridsma18 found an increase of just 1.9% in correct answers after searching. By contrast, others who supplied information resources to clinicians found that searching increased the rate of correct answers from 29% to 50%.19 Schaafsma and colleagues20 found that when clinicians asked peers for answers to clinical questions, the answers they received were correct only 47% of the time; when the colleague provided supporting evidence, the proportion of correct answers rose to 83%.
Question-answering services provided by librarians may also enhance the search process. When tested in primary care settings, such a service saved time for clinicians, although its impact on decision-making and clinical care was not clear.21,22
Journals must provide enough detail about interventions to allow clinicians to implement them in practice. Glasziou and colleagues16 found that most study authors, when contacted for additional information, were willing to provide it; in some cases, this led to booklets or video clips that could be made available on a journal’s website. This level of information is helpful regardless of the complexity of the intervention. For example, the need to titrate the dose of angiotensin-converting enzyme inhibitors and confusion about monitoring the use of these drugs are considered barriers to their use by primary care physicians, yet such information is frequently lacking in primary studies and systematic reviews.23
Finally, journal editors and researchers should work together to format research in ways that make it more readable for clinicians. There is some evidence that the use of more informative, structured abstracts improves clinicians’ ability to apply evidence24 and that the way in which trial results are presented affects clinicians’ management decisions.25 By contrast, there are no data showing that the presentation of information in systematic reviews has a positive impact on clinicians’ understanding of the evidence or on their ability to apply it to individual patients.
Evidence, whether strong or weak, is never sufficient on its own to make clinical decisions; it must be balanced with the values and preferences of patients for optimal shared decision-making. To support evidence-based decision-making by clinicians, we must call for information resources that are reliable, relevant and readable. We hope that those who publish or fund research will find new and better ways to meet this demand.
This article has been peer reviewed.
Sharon Straus is the Section Editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.
Competing interests: Sharon Straus is an associate editor for ACP Journal Club and Evidence-Based Medicine and is on the advisory board of BMJ Group. Brian Haynes is editor of ACP Journal Club and EvidenceUpdates, coeditor of Evidence-Based Medicine and contributes research-based evidence to ClinicalEvidence.
Contributors: Both of the authors contributed to the development of the concepts in the manuscript, and both drafted, revised and approved the final version submitted for publication.