Health-care organizations are struggling to find methods that improve the quality of care in a cost-efficient manner. Patient safety issues are primary concerns for health-care institutions and providers, and numerous examples have been published assessing the role of technology in assisting in these efforts [16]. The vast majority of data on technological interventions focuses on the inpatient hospital setting, often at tertiary care institutions, usually with house staff (physicians in training) programs, and rarely examines commercially available applications [17]. Among geriatric patients, studies have shown a rate of 13.8 preventable adverse drug events (ADEs) per 1,000 person-years [23]. At Brigham and Women's Hospital, Boston, Massachusetts, implementation of inpatient computerized provider order entry (CPOE) led to an 81% decline in non-missed-dose medication error rates overall, and an 86% reduction in the intensive care units [24]. In an emergency department setting, computer-assisted prescriptions were roughly one-third as likely to contain errors as handwritten prescriptions [25].
In contrast, this study at Denver Health examined a very specific type of clinical-decision support system: the use of rules technology to prevent drug–laboratory adverse drug events. Both the clinical-decision support application and the rule knowledge base were obtained from commercial vendors, as was the CPOE application. The setting was also unusual in that it was a primary-care outpatient clinic, and faculty physicians, rather than physicians in training, entered the majority of orders.
The clinical-outcome portions of the study focused on assessing the effect of clinical-decision support on changing ordering behavior and, ultimately, on reducing ADEs among patients. The study was not designed with an adequate sample size to detect a statistical difference in ADEs, although there were fewer ADEs during the intervention phase, a difference that did not reach statistical significance. The rules did demonstrate a significant ability to change provider ordering behavior. The effect on halting medication orders was modest and appeared limited to occasions in which the alert presented an abnormal laboratory value, for which order cessation nearly doubled. Still, providers continued with the medication order despite a warning message in the vast majority (91.7%) of orders. This may reflect providers deciding that the benefits of the medication far outweighed the potential adverse effects suggested by the associated laboratory abnormalities. In contrast, across all medication orders and all categories of rules, ordering of the appropriate rule-associated laboratory test increased significantly (a 33% increase) when an alert was presented. The effect was strongest when providers were alerted to "missing" laboratory results (a 42% increase). Galanter et al. [26] found similar results when examining automated safety alerts for the interaction between digoxin and potassium: in their study, checking for unknown potassium values increased from 9% to 57% after implementation of the alerts. Likewise, in a community-based intervention by Hoch et al. [27], computerized alerts for missing potassium values, sent the day after physicians had ordered a diuretic, led to a 9.8% increase in potassium testing. Our study differed from these studies in that we examined numerous medications across different therapeutic categories.
There was less of an effect on ordering behavior when the alert informed the provider of the existence of an abnormal laboratory value (a 23% increase in ordering of the test). This may imply that the cutoff values for the "abnormal" trigger were set too low, and that providers felt that repeating the laboratory test was not warranted given the degree of the abnormality. Further analysis, correlating the severity of the laboratory abnormality with ordering behavior, may provide more insight into this issue.
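The two alert categories discussed above ("missing" and "abnormal" laboratory results) can be illustrated with a minimal sketch of a drug–laboratory rule check. This is a hypothetical illustration only: the drug names, laboratory tests, lookback windows, and threshold values below are assumptions for the example and do not represent the vendor's actual rule base.

```python
from datetime import datetime, timedelta

# Illustrative rule table: drug -> (lab test, lookback days, normal range).
# All entries are hypothetical examples, not the study's actual rules.
RULES = {
    "metformin": ("creatinine", 365, (0.6, 1.3)),
    "atorvastatin": ("alt", 365, (7, 56)),
}


def evaluate_order(drug, lab_results):
    """Return an alert string, or None, for a new medication order.

    lab_results maps a test name to a (value, datetime_drawn) tuple.
    """
    rule = RULES.get(drug)
    if rule is None:
        return None  # no rule applies to this medication
    test, lookback, (low, high) = rule
    result = lab_results.get(test)
    if result is None or datetime.now() - result[1] > timedelta(days=lookback):
        # "Missing" alert: no sufficiently recent result on file,
        # so the provider is prompted to order the associated test.
        return f"MISSING: no {test} result within {lookback} days"
    value, _ = result
    if not (low <= value <= high):
        # "Abnormal" alert: a recent value falls outside the normal range.
        return f"ABNORMAL: {test} = {value} (normal {low}-{high})"
    return None  # recent and normal: order proceeds without an alert
```

In this sketch the rule fires at order entry, mirroring the study's workflow: the provider may heed the alert (cancel the order or order the test) or override it and proceed with the medication.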
There are various limitations to this study. The intervention focused on only a select group of drug–laboratory interactions, so the results may not be generalizable to interventions targeting other patient-care issues. Further, the setting was a single outpatient primary-care clinic within a large public-health integrated health-care delivery system, and results may differ in other settings such as hospitals and private physician offices. The patient population served is primarily lower income, predominantly minority (~80% Hispanic), and medically underserved; different results might be obtained with a more affluent patient population. The study did not consider alert effectiveness by provider role, and further studies would be needed to determine whether provider role (i.e., staff physician, house staff, or nurse practitioner) alters the effect of the alerts. Finally, as an evaluation of an intervention, the intervention was not randomized, so the observed changes may have been occurring in the health-care environment irrespective of the intervention. The investigators are, however, aware of no local or national initiatives to improve the care of these patients for the rule-associated conditions.
We conclude that, with private–public collaboration, rules for drug–laboratory interactions can be encoded into computerized clinical applications in the primary-care clinics of an integrated, safety-net health-care delivery institution. Further, with the use of clinical-decision support, providers more often stop medication orders when alerted to potential drug–laboratory interactions and more often order the appropriate medication-associated laboratory tests. There may also be an effect on ADEs. Larger, more prolonged studies will help determine the full relationship between automated alerts for drug–laboratory interactions and the related clinical outcome of adverse drug events.