Delivering outstanding medical care requires that the care be both high quality and safe. However, while the knowledge base regarding effective medical therapies continues to improve, the practice of medicine lags behind, and errors are distressingly frequent.1
Regarding the gaps between evidence and practice, Lomas et al.2 evaluated a series of published guidelines and found that it took an average of approximately five years for these guidelines to be adopted into routine practice. Moreover, evidence exists that many guidelines, even those that are broadly accepted, are often not followed.3,4,5,6,7
For example, approximately 50% of eligible patients do not receive beta blockers after myocardial infarction,8 and a recent study found that only 33% of patients had low-density lipoprotein (LDL) cholesterol levels at or below the National Cholesterol Education Program recommendations.5
Of course, in many instances relevant guidelines are not yet available; even then, practitioners who wish to practice evidence-based medicine should weigh the available evidence, and when guidelines do exist, following them is a core part of that practice.
Although we strive to provide the best possible care, many studies within our own institution have identified gaps between optimal and actual practice. For example, in a study designed to assess the appropriateness of antiepileptic drug monitoring, only 27% of antiepileptic drug levels had an appropriate indication and, among these, half were drawn at an inappropriate time.9
Among digoxin levels, only 16% were appropriate in the inpatient setting, and 52% were appropriate in the outpatient setting.10
Of clinical laboratory tests, 28% were ordered too early after a prior test of the same type to be clinically useful.11
For evaluation of hypothyroidism or hyperthyroidism, the initial thyroid test performed was not the thyroid-stimulating hormone level in 52% of instances.12
Only 17% of diabetics who needed eye examinations had them, even after visiting their primary care provider.13
The Centers for Disease Control and Prevention (CDC) guidelines for vancomycin use were not followed 68% of the time.14
Safety also is an issue: in one study, we identified 6.5 adverse drug events per 100 admissions, and 28% of these were preventable;15 for example, many patients received medications to which they had a known allergy. Clearly, there are many opportunities for improvement.
We believe that decision support delivered through information systems, ideally with the electronic medical record as the platform, will finally give decision makers the tools to achieve large gains in performance, narrow the gaps between knowledge and practice, and improve safety.16,17
Recent reviews have suggested that decision support can improve performance, although it has not always been effective.18,19
These reviews have summarized the evidence that computerized decision support works, organized in part by evidence domain. Although this perspective has been very useful, suggesting, for example, that decision support focusing on preventive reminders and drug doses has been more effective than decision support targeting assistance with diagnosis, it does not indicate how best to deliver decision support.
In all the areas discussed above, we have attempted to intervene with decision support to improve care, with some successes20,21 and many partial or complete failures.22,23,24
For the purposes of this report, we consider decision support to include passive and active referential information as well as reminders, alerts, and guidelines. Many others also have evaluated the impact of decision support,19,20 and we do not attempt to provide a comprehensive summary of how decision support can improve care but rather offer our perspective on what worked and what did not.19
Thus, the goal of this report is to present generic lessons from our experiences that may be useful to others, including informaticians, systems developers, and health care organizations.