Studies that have looked at the effectiveness of computerized decision support systems to prevent drug–drug interactions have reported modest results because of low response by the providers to the automated alerts.
The objective was to evaluate, within an inpatient computerized physician order entry (CPOE) system, the incremental effectiveness of an alert requiring a response from the provider, intended as a stronger intervention to prevent concurrent orders of warfarin and non-steroidal anti-inflammatory drugs (NSAIDs).
We conducted a randomized clinical trial of 1963 clinicians, assigned either to an intervention group receiving a customized electronic alert that required an affirmative response or to a control group receiving a commercially available passive alert as part of the CPOE system. The study ran from 2 August 2006 to 15 December 2007.
Alert adherence was compared between study groups.
The proportion of desired ordering responses (ie, not reordering the alert-triggering drug after the alert fired) was lower in the intervention group (114/464; 25% of customized alerts issued) than in the control group (154/560; 28% of passive alerts). The adjusted OR of inappropriate ordering was 1.22 (95% CI 0.69 to 2.16).
A customized CPOE alert that required a provider response had no effect in reducing concomitant prescribing of NSAIDs and warfarin beyond that of the commercially available passive alert received by the control group. New CPOE alerts cannot be assumed to be effective in improving prescribing, and need evaluation.
Medication errors are a leading cause of death in American hospitals.1 Several studies have shown significant effects on medication errors with the use of computer-based reminders in an order entry system.2–4 However, the effect of computer-based advice in these studies is highly variable,5–14 and little is understood about the determinants of this variability. Recent studies have shown that computer-based recommendations for drug–drug interactions are overridden as much as 90% of the time.15–17 Several characteristics of the alert itself may affect the likelihood that it will be followed. Such factors as the nature of the alerted condition (eg, the perception by ordering clinicians that allergy alerts are more important than drug interaction alerts),18 19 how the alert integrates into clinical workflow (eg, an excessive volume of alerts interferes with workflow),20 and whether or not the alert is associated with a user's action (eg, a drug interaction alert may not be relevant if the patient had previously received the alerted medication without complication)19 have all shown some impact on the likelihood that the advice will be followed.21 22 But many questions remain about how to create the most effective tools for giving advice to practicing clinicians.
Commercial computerized provider order entry (CPOE) systems are commonly distributed with built-in programs that provide simple alerts to the providers. These simple alerts take the form of message boxes with a warning. The fact that many alerts may fire simultaneously reduces the likelihood that any individual alert will be followed. Since most alerts do not require an affirmative response by the prescriber, and this approach has been shown to be ineffective,6 the intervention in our study was a customized electronic alert that specifically tested the effectiveness of requiring clinicians to acknowledge the alert.
Warfarin management is complex because of the need for tight control of the international normalized ratio (INR) and the many potential interactions between warfarin and other drugs. There are so many potential interactions, in fact, that it is nearly impossible for physicians to remember all of them at any given time. Different drug interactions may have different propensities for causing adverse drug events of varying severity, but there is clear evidence that non-steroidal anti-inflammatory drugs (NSAIDs) given in conjunction with warfarin can increase bleeding risk by as much as fivefold compared with either drug alone.23 Most authors recommend generally avoiding NSAIDs when a patient is also receiving warfarin.24
Randomized clinical trials in the inpatient setting that specifically required the clinician's response to an alert have found such alerts to be effective. However, these studies mostly focused on compliance with reminder guidelines for medical care25–28 rather than on alerts to reduce drug–drug interactions. Studies in the outpatient setting have focused on alerts to reduce inappropriate prescribing, primarily in the elderly.29–32
A common attribute of alert designs that require physician response is that the computer window cannot be closed without forcing physicians to choose among several decision options. One study27 provided intervention physicians with a pop-up window with automated guideline reminders to order tests or treatments and required them to accept, reject, or modify the suggested guidelines. This study found improved adherence to recommended guidelines, although adherence declined over time. Another trial26 compared providers receiving computer reminders to perform screening tests and required to select a response (‘done/order today,’ or ‘not applicable to this patient,’ or ‘patient refused,’ or ‘next visit’) to controls receiving similar reminders but without a required response. Again, more frequent compliance with reminders was seen among the intervention physicians.26 Another study28 activated and de-activated over a period of several weeks an automated decision support system delivering more conservative recommendations for psychotropic medications for hospitalized elderly patients. The intervention screen similarly required an explicit decision choice (‘change order to recommended medication’, ‘proceed with this order’, or ‘cancel this order’). The authors reported that prescriptions for every class of psychotropic drugs agreed with the system recommendations more frequently during the intervention periods.
An interesting study by Paterno et al5 used inpatient alert log data to rank drug–drug interactions by severity and used several different displays to present this information to the clinicians. Serious alerts required clinicians either to cancel the order or discontinue a pre-existing drug order (‘hard stop alert’); less serious alerts also required action by the clinician, either to discontinue one of the drugs or to select an override reason; the least serious alerts were presented as information only, requiring no action of any kind from the clinician. Tiering the presentation of drug–drug interaction alerts by severity level was associated with a higher rate of compliance, leading the authors to conclude that how alerts are prioritized and presented to the user may be as important as which alerts to deliver.5
In contrast, other studies did not find alerts to be effective—even those requiring the clinician's response.9 12 In their study of the characteristics of drug allergy overrides in a hospital CPOE system that required the clinician to enter a free-text reason for the override, Hsieh et al33 found that overrides were common. However, based on subsequent chart reviews, these overrides were mostly justified because of risk-benefit calculations for the particular patients. Similarly, in several Veterans Affairs medical centers, where entering a reason for the override is requested for critical drug–drug interactions, Grizzle et al12 found that more than half of the responses provided no clinical justification for the override.
Thus, as noted earlier, alerts are effective in some domains25–28 and ineffective in others. A recent review by Eslami et al34 and other reviews35 36 also show that the impact of CPOE systems is mixed. However, a majority of studies were not randomized, there were only a few studies specifically focused on alerts to prevent drug–drug interactions in the hospital setting, and the follow-up periods were too short to conclude that the effect was lasting.
This study was designed to test the effectiveness of a customized CPOE alert specifically requesting a response by the provider, using the NSAID–warfarin interaction as a model. This is a common, well-accepted, potentially serious drug interaction and, indeed, one of the few that are so well established. It was also occurring in our hospital, despite the presence of the conventional alert, suggesting the need to try something new.
The study was conducted at the Hospital of the University of Pennsylvania (HUP) and the Penn Presbyterian Medical Center (PMC) where all inpatient orders are entered using the Sunrise Clinical Manager (SCM, Eclipsys Corporation, Atlanta, Georgia, USA) CPOE system.
The study was a randomized controlled trial of all 1865 resident physicians (RPs) and 98 nurse practitioners (NPs) involved in inpatient care. Attendings were not included as a unit of randomization because attending physicians at our hospitals do not normally enter orders themselves; that task is delegated to others.
The trial was active from 2 August 2006 to 15 December 2007. The study was approved by the University of Pennsylvania Committee on Studies Involving Human Beings, which waived the requirement for informed consent and HIPAA authorization.
The alert was activated whenever an RP or NP placed an order for an NSAID while a warfarin order was already active, when warfarin was started for a patient already on an NSAID, or when both were ordered simultaneously. The intervention in this study was a newly formatted alert that prompted the provider to discontinue the NSAID and order acetaminophen as an alternative. The specific text of the customized alert was:
‘Prescription of warfarin and NSAID together is contra-indicated.
Please select an alternative drug, such as acetaminophen, by clicking on View Actions button.’
The provider could then either click the ‘Acknowledge’ button to continue with the order for an NSAID, or click the ‘View Actions’ button to discontinue the current NSAID order and start a new order for acetaminophen.
The control group received the standard passive alert in the form of a message box warning the provider not to prescribe the drug combination. No response was required for the combination to be prescribed.
The primary outcome was a new concurrent prescription order for NSAIDs and warfarin accepted through the electronic ordering system. In this context, not reordering the alert-triggering drug within 10 minutes after the alert fired represented a desired clinical response.
The unit of analysis for this study was the alert that fired, or would have fired, whenever a prescription order for concurrent NSAID and warfarin was encountered during the inpatient stay. Alerts were attributed to the study group of the provider who ordered the second prescription—that is, the prescription that triggered the alert.
A 5-minute rule was applied when counting the total number of alerts, because some alerts fired multiple times in immediate succession: providers, despite the customized alert, re-entered the order for the triggering medication to ensure that the order was processed. Accordingly, alerts firing within 5 minutes of each other were counted as a single episode, whereas alerts separated by more than 5 minutes were counted as separate episodes.
If the alert-triggering drug was not reordered within an interval of 10 minutes after the alert fired, it was counted as a ‘desired clinical response’ (ie, not having active concurrent orders for warfarin and NSAID after the alert had fired). As a secondary analysis, we also investigated a 24-hour interval after alert firing.
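The two counting rules above can be sketched in code. This is an illustrative sketch only, not the study's actual analysis program: the function names (`collapse_episodes`, `desired_response`) are hypothetical, timestamps are assumed to be in minutes, and alerts are chained into one episode whenever each fires within 5 minutes of the previous one.

```python
from typing import List


def collapse_episodes(alert_times: List[float], gap: float = 5.0) -> List[float]:
    """Collapse alerts firing within `gap` minutes of the previous alert
    into a single episode; return the start time of each episode."""
    episodes: List[List[float]] = []
    for t in sorted(alert_times):
        if not episodes or t - episodes[-1][-1] > gap:
            episodes.append([t])          # start a new episode
        else:
            episodes[-1].append(t)        # same episode, repeated firing
    return [ep[0] for ep in episodes]


def desired_response(episode_start: float, reorder_times: List[float],
                     window: float = 10.0) -> bool:
    """An episode counts as a desired clinical response if the
    alert-triggering drug was NOT reordered within `window` minutes."""
    return not any(episode_start < t <= episode_start + window
                   for t in reorder_times)
```

For example, alerts at minutes 0, 2, 4, and 12 collapse to two episodes (starting at 0 and 12), and the first episode is a desired response only if no reorder occurs by minute 10.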
Figure 1 shows the subject randomization flow, including the 1963 providers in the analysis (960 intervention providers receiving the customized alerts and 1003 control providers receiving the standard-of-care alerts). Overall there were 1218 alerts. After applying the 5-minute rule (namely, counting as a single event alerts that fired within ≤5 minutes of each other), there were 1024 alerts: 464 in the intervention group and 560 in the control group.
The proportion of desired ordering responses (ie, not reordering the alert-triggering drug within 10 minutes of firing) was lower in the intervention group (114/464; 25% of customized alerts issued) than in the control group (154/560; 28% of passive alerts). The adjusted OR of inappropriate ordering by intervention providers was 1.22 (95% CI 0.69 to 2.16; p=0.48), indicating that the modified alert had failed to prevent the concomitant ordering of warfarin and NSAIDs (see table 1).
These results were adjusted for provider type (RP or NP) and for hospital (HUP or PMC) as potential confounders and accounting for clustering by provider (ie, adjusting for the non-independence of outcomes in patients within the same provider). It is of interest to note the high mean number of alerts per provider (3.5 vs 4.5 in the intervention and control groups, respectively) even after counting alerts triggered by repeated orders within ≤5 minutes as a single alert event. The 25th, 50th, and 75th percentiles of the number of alerts triggered by offending orders per provider were 1, 2, and 4, respectively, in each group.
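As a rough check, the unadjusted odds ratio can be recomputed directly from the reported 2×2 counts. This sketch is illustrative only and the helper function is hypothetical; the study's reported 1.22 additionally adjusts for provider type, hospital, and clustering by provider, so the unadjusted figure differs.

```python
def odds_ratio(bad_a: int, n_a: int, bad_b: int, n_b: int) -> float:
    """Unadjusted odds ratio of the undesired outcome in group A vs group B."""
    odds_a = bad_a / (n_a - bad_a)
    odds_b = bad_b / (n_b - bad_b)
    return odds_a / odds_b


# Intervention: 464 alerts, 114 desired responses -> 350 inappropriate reorders
# Control:      560 alerts, 154 desired responses -> 406 inappropriate reorders
or_unadjusted = odds_ratio(464 - 114, 464, 560 - 154, 560)  # ~1.16
```

The unadjusted value of about 1.16 is close to, but not equal to, the adjusted OR of 1.22 in table 1.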
The results of the secondary analysis, based on behavior within 24 hours after an alert had fired, were not substantively different and are not presented.
There was a trend over the 17-month study period of a decreasing proportion of desired ordering responses over time (OR 1.09 per month; p=0.007). There was also a significant month-by-study-group interaction (p=0.04), the result of multiple crossovers in the desired ordering rates between the two groups over time. Overall, there was no group difference.
We observed in a randomized clinical trial that a customized CPOE alert that required a provider response had no effect in reducing concomitant prescribing of NSAIDs and warfarin beyond that of the commercially available passive alert received by the control group.
To date, most studies that looked at the effectiveness of computerized decision support systems designed in particular to reduce medication errors of drug–drug interactions have reported only modest results.8 9 18 40 The principal reason for this was the low response by the providers to the automated alerts.33 41 Providers tend to override the alerts because they are perceived to be non-specific, lacking the providers' additional knowledge of the clinical situation for the specific patient context.18
One area for further research is the best way to display alerts to providers to make drug alerts more effective and reduce overriding. If alerts are ineffective because they are easy to ignore, then attention should be given to finding ways to amplify the effect of the alert. We focused on one feature of such displays—namely, the requirement that providers acknowledge the alert as compared to not requiring any active response. We found that this feature in itself as part of a computerized order entry system was not sufficient. The customized alert advising the clinician to order acetaminophen instead of NSAID in patients requiring warfarin and to acknowledge the alert was no more effective than the standard of care passive alert in reducing the undesired prescribing.
On the other hand, requiring more interaction from clinicians when alerts fire, in order to increase compliance, may be disruptive to the workflow and may have other unintended consequences. A recent trial by Strom et al42 evaluating a nearly ‘hard stop’ CPOE prescribing alert intended to reduce concomitant orders of warfarin and trimethoprim/sulfamethoxazole found it to be extremely effective in changing prescribing. However, this intervention precipitated treatment delays among intervention-group patients needing immediate drug treatment, leading to early termination of the study. Of course, given that our results do not indicate that this additional burden on physicians yields additional effectiveness, that too argues against implementing such alerts.
The strength of this study was its randomized design. The decision to randomize clinicians rather than patients was motivated by several considerations. First, since the order is written by these individuals, it is appropriate to consider each order by them as an opportunity for error. Second, each medical practitioner has a unique access code to use the electronic ordering system and the order system menu can be varied by individual user. In addition, we wanted to keep each practitioner in the same study group for the duration of the study to reduce contamination between the two groups. An additional strength is the large number of clinicians included in the study (approximately 2000).
A limitation of the study, perhaps resulting from the long duration of the intervention (15 months), is the possibility that any difference in effect between the intervention and control groups for concurrent orders of NSAIDs and warfarin may have diminished over time, because residents and nurse practitioners usually work in teams and may work alongside clinicians assigned to the other study group. It is common for RPs and NPs to discuss the care plans of patients, which includes medication ordering. Moreover, over time, the RPs and NPs in both groups may have learned not to order NSAIDs alongside an active warfarin order because of awareness of the alert. However, our analysis of the time trend was not consistent with this being a problem.
Another limitation is that, because the study took place over 15 months, not all residents participated for the same amount of time. Fluctuations in the effectiveness of the alert over time could perhaps be explained by new house-staff joining the study population and old house-staff leaving the training program. Nevertheless, the comparison between the study groups was concurrent throughout, so the overall comparison should still be valid.
An important limitation is the absence of information on potential confounders that may have influenced the occurrence of alert overrides. For example, information on the providers' years of experience was not collected, and providers were not surveyed about their reasons for overriding the alerts. Information on the clinical conditions triggering the alerts and their severity was also not collected. This was considered unnecessary because the study was randomized. Further, collecting some of this information (eg, debriefing the subjects) could have introduced a substantial Hawthorne effect. However, it also means this information was not available to characterize subsets of subjects or to help explain why the alerts were not followed.
Ultimately, it is not possible for us to determine without further studies why the customized alert was not more effective. The customized alert may have been too similar to the alert in the control group, although we do not think so, since our alert provided an alternative strategy and forced a response. The customized alert may have been flawed if it led to misunderstanding the meaning of the ‘acknowledge’ button. On the other hand, there are data suggesting that clinicians make risk-benefit calculations in deciding whether to accept alerts.27 33 Thus, it may be that the clinicians considered drug–drug interactions to be unimportant despite the stronger intervention, or considered the particular interaction that we selected to study unimportant given the ability to monitor INRs closely in hospitals. There is no way for us to determine this.
However, regardless of the mechanism, it is clear that the more invasive alert added nothing to the traditional one.
In conclusion, a customized CPOE alert that required a provider response had no effect in reducing concomitant prescribing of NSAIDs and warfarin beyond that of usual standard of care. CPOE alerts cannot be assumed to be effective in improving prescribing and need to be formally evaluated.
The authors wish to thank the members of the University of Pennsylvania IRB Oversight Committee, Todd Barton, Jeffrey Staab, and Robert Gross for their valuable suggestions and guidance. In addition, the authors thank Jesse Chittams, Asaf Hanish, Qing Liu, and Catherine T Williams-Smith from the CCEB Biostatistics Analysis Center for creating the analytic data set. We thank also Maggie Massary, from the University of Pennsylvania Health System, for her work in developing the electronic alert and for guiding us through the elements of the electronic intervention and Sunrise data collection. We would like to thank Anne Saint John for her excellent assistance in preparing this manuscript. Finally, this paper has greatly benefited from comments from the JAMIA peer reviewers, especially regarding alternative explanations for why drug alerts may not be effective in our and other studies.
Funding: This study was funded partly by the University of Pennsylvania Health System, and partly by grant number U18HS016946 from the Agency for Healthcare Research and Quality.
Competing interests: None.
Provenance and peer review: Not commissioned; externally peer reviewed.