In 1999, the Institute of Medicine published a report critical of the unacceptable rate of medical errors in American medicine.19 In that report, computer information systems were proposed as major tools for reducing errors. In 2001, the Institute of Medicine published its first follow-up report on medical errors.1 This report states, “Research on the quality of care reveals a picture of a system that frequently falls short in its ability to translate clinical knowledge and technology into practice.” We rigorously tested the effect of an intensive, computer-based, patient-specific intervention targeting physicians and pharmacists to reduce errors of omission among patients with heart disease. Although opportunities to improve care were common, the intervention had no measurable effect on either adherence to the evidence-based guidelines or any clinical or subjective patient outcome. This disappointing result occurred in an academic primary care practice where prior computer-based decision support interventions have repeatedly improved physicians' compliance with outpatient preventive care,21,55,56 inpatient preventive care,57 and inpatient drug-monitoring guidelines.58
Given physicians' lack of enthusiasm for computer-based chronic care suggestions, we suspect that they rebelled at the notion of the computer telling them how to manage their patients. In another study of a similarly unsuccessful computer-based guideline intervention, the median number of times physicians accessed the decision support system was zero, despite substantial work to make it user-friendly.59
It is possible that physicians and pharmacists found the intervention intrusive and time-consuming. Both were required to look at the intervention messages, but pressing the “escape” key erased the messages from the computer screen. A similar previous inpatient preventive care intervention also had no effect on patient care,60 but simply disabling the “escape” key made the same intervention remarkably effective.57 In both interventions, the workstations delivered similar suggestions; they could not force the physicians to read or act upon them. However, by requiring some action (either complying with a suggestion or explaining why not), the inpatient intervention became quite effective. This is similar to prior findings with our outpatient, paper-based preventive care reminders56 and suggests that the intervention in the current study would have been more powerful had it required physicians either to comply with each suggestion or to document their reasons for declining.
Physicians received feedback in the same fashion based on other guidelines during this study. We estimate that they received feedback from this study once during each half-day they practiced, which ensured that they were familiar with the way the feedback was delivered but were not “overwhelmed” by the frequency of reminders. In addition, the large number of physicians at multiple levels of training who were included in the study ensured that a broad array of physician characteristics and practice styles was represented.
An alternative to forcing physicians to respond is rewarding them for complying with practice guidelines. However, studies of such rewards have had mixed results: Martin et al.61 and Hillman et al.62 found no effect of financial incentives, whereas Kouides et al. showed that a financial incentive significantly (if modestly) increased influenza vaccination rates.63 Since the advent of managed care, in which physician performance is linked to financial rewards and/or punishments, such incentives should be reassessed.
The pharmacist intervention also had no effect on patients' care or their outcomes. However, pharmacists could not write orders for patients; they could only make suggestions to the primary care physicians, either directly or through their patients. Even though we encouraged the pharmacists to make care suggestions directly to the physicians and provided a simple e-mail system for doing so, these interactions rarely took place. If the intervention increased pharmaceutical care discussions between pharmacists and patients (which we could not measure), it had no effect on patients' satisfaction with their pharmacists.
Finally, providers' basic attitudes toward guidelines in general may have to change before any salutary effects on care are realized. One untested approach would be to allow physicians to identify those guidelines with which they would like to comply and to let them design both the thresholds for intervening and the manner in which they would like to receive care suggestions. In such instances, the intervention messages might be seen by the physicians as self-reminders and hence augment adherence.
Despite the negative results of this intensive, interactive intervention, we do not suggest that evidence-based guidelines and computer-based decision support cannot or should not be pursued as tools for quality improvement. The recent emphasis on reducing medical errors19 mostly targets errors of commission (i.e., doing the wrong thing), whereas errors of omission (i.e., not doing the right thing) may be substantially more prevalent.2–6 Focusing interventions on errors of omission may thus provide substantially larger opportunities to improve patients' care and outcomes. However, expensive and sometimes intrusive interventions should be carefully studied before being implemented in busy practices.