Evidence based medicine has been defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”1 Few areas of medical practice have felt the effects of this movement more clearly than prescribing. Until recently doctors could prescribe medicines without worrying that their choices might be judged against evidence accumulated in the world's literature. Now, prescribers are increasingly expected to back up their decisions with evidence.2 Enthusiasm for evidence based prescribing is welcome and should lead to safer and more effective use of medicines. But it also poses some real problems for prescribers.
Reliable information to underpin everyday prescribing decisions at the point of prescription is hard to find. One solution is to provide modern information technology systems in the consulting room or at the bedside.3 But even these may deliver too much unfiltered information: some original research, some guidance derived from research, and some unsubstantiated opinion. The modern prescriber has to decide which data are the most reliable, accurate, and representative of true evidence rather than conjecture.
What should the prescriber do, however, if he or she finds several apparently reliable sources giving differing advice about the same clinical problem? In this issue of the BMJ Vidal et al (p 263) compare the advice given in four respected prescribers' guides on adjusting the dosages of 100 commonly used drugs in renal impairment.4 They find that the four texts differ in their recommendations on dose and dosing interval, and even in their definition of renal impairment. They conclude that this variation is “remarkable,” as is the lack of detail about how the advice was reached, and describe the sources as “ill suited for clinical use.” These conclusions seem harsh and deserve further analysis.
Should we be surprised that respected texts vary? Probably not. Even when there is very good evidence—for example for managing hypertension—different experts may synthesise it to produce a variety of conclusions about optimal prescribing.5,6 Vidal et al focus on recommended dose adjustments for a relatively small proportion of patients with a problem that is much rarer than hypertension. In more than half the instances of discrepant advice, the authors acknowledge that they could find no firm evidence despite prolonged searching of Medline.4 Clinicians often have no relevant scientific evidence on which to base a decision.7 Rapid accumulation of research findings and international efforts to sort and rationalise them systematically are closing some of these gaps in evidence, but new gaps will continue to appear. In the absence of unambiguous evidence covering all eventualities, differences of opinion are inevitable, even among the most reliable sources of guidance.
Furthermore, should respected sources such as the British National Formulary (BNF) be expected to provide details about how they reach their advice? Three of the four texts compared in this study provide information relevant to much of the population on the use of several thousand medicines. Vidal et al focused on the prescribing of 100 drugs in circumstances that affect only a small proportion of people. Their call for clarification of the evidence behind the advice that interests them ignores the difficulties of providing similar backing for hundreds of thousands of other similar items of prescribing information. The task would be beyond most editorial groups.
Many items of prescribing information probably cannot yet be matched to primary evidence. Even when such evidence can be found, it is often inconclusive, inconsistent with other studies, irrelevant to clinical realities, or of poor quality. Systematic reviews solve some of these problems, but they too may reach varying recommendations because of differing designs.8 Most users of the BNF probably prefer a text that summarises best practice and does not describe the totality and complexity of evidence that goes into creating it. The BNF is probably better “suited for clinical use” because of its relative simplicity.
These caveats should not lessen our appetite for sound, evidence based recommendations for rational prescribing. Vidal et al are right to remind us that, where possible, such recommendations should be referenced and open to scrutiny. However, these ideals have to be seen in context. Most prescribers are probably willing to accept the advice provided by a trusted source in the knowledge that, if they want to see the existing evidence, they have relatively easy access to it through searches of Medline and other databases and resources such as Clinical Evidence.
Prescribing will always be too complex for all the answers to be evidence based, and “grey zones”7 will always remain. Even when the best course of action seems clear, evidence has to be interpreted in the light of variables such as patients' comorbidities and drug interactions. To cope with these uncertainties, prescribers will still need a combination of clinical experience, common sense, and knowledge based on a firm grounding in the principles of clinical pharmacology.9,10