We observed in a randomized clinical trial that a customized CPOE alert requiring a provider response did not reduce concomitant prescribing of NSAIDs and warfarin beyond the reduction achieved by the commercially available passive alert received by the control group.
To date, most studies of the effectiveness of computerized decision support systems designed specifically to reduce drug–drug interaction errors have reported only modest results.8
The principal reason for this was the low rate of provider response to the automated alerts.33
Providers tend to override alerts because the alerts are perceived as non-specific, lacking the provider's additional knowledge of the clinical situation and the specific patient context.18
One area for further research is how best to display alerts to providers so that drug alerts become more effective and are overridden less often. If alerts are ineffective because they are easy to ignore, then attention should be given to finding ways to amplify their effect. We focused on one feature of such displays: the requirement that providers acknowledge the alert, as compared with requiring no active response. We found that this feature by itself, as part of a computerized order entry system, was not sufficient. The customized alert, which advised the clinician to order acetaminophen instead of an NSAID in patients requiring warfarin and required acknowledgment, was no more effective than the standard-of-care passive alert in reducing the undesired prescribing.
On the other hand, requiring more interaction from clinicians when alerts fire, in order to increase compliance, may disrupt workflow and have other unintended consequences. A recent trial by Strom et al42 evaluating a nearly ‘hard stop’ CPOE prescribing alert intended to reduce concomitant orders of warfarin and trimethoprim/sulfamethoxazole found it to be extremely effective in changing prescribing. However, the intervention precipitated treatment delays among patients in the intervention group who needed immediate drug treatment, leading to early termination of the study. Moreover, given that our current results do not indicate that this additional burden on physicians leads to additional effectiveness, our findings also argue against implementing such alerts.
A strength of this study was its randomized design. The decision to randomize clinicians rather than patients was motivated by several considerations. First, since the order is written by these individuals, it is appropriate to consider each order they write as an opportunity for error. Second, each medical practitioner has a unique access code for the electronic ordering system, and the order system menu can be varied by individual user. In addition, we wanted to keep each practitioner in the same study group for the duration of the study to reduce contamination between the two groups. An additional strength was the large number of clinicians included in the study (approximately 2000).
A limitation of the study, perhaps resulting from the long duration of the intervention (15 months), is that the difference in effect between the intervention and control groups for concurrent orders of NSAIDs and warfarin may have diminished over time: residents and nurse practitioners usually work in teams and may work alongside clinicians assigned to the other study group. It is common for residents and nurse practitioners to discuss patients' care plans, including decisions about ordering medications. Moreover, over time, clinicians in both the intervention and control groups may have learned not to order NSAIDs in the presence of an active warfarin order because of their awareness of the alert. However, our analysis of time trends was not consistent with this being a problem.
Another limitation is that, because the study took place over 15 months, not all residents participated for the same amount of time. Fluctuations in the effectiveness of the alert over time could perhaps be explained by new house-staff joining the study population and old house-staff leaving the training program. Nevertheless, the comparison between the study groups was concurrent throughout, so the overall comparison should still be valid.
An important limitation is the absence of information on potential confounders that may have influenced alert overrides. For example, information on the providers' varying years of experience was not collected, and the providers were not surveyed about why they overrode the alerts. Information on the clinical conditions and severity triggering the alert was also not collected. This was felt to be unnecessary because the study was randomized; further, collecting some of this information (eg, debriefing the subjects) could have produced a substantial Hawthorne effect. However, it means that this information was not available to shed light on subsets of subjects or to help explain why the alerts were not followed.
Ultimately, we cannot determine without further studies why the customized alert was not more effective. The customized alert may have been too similar to the alert in the control group, although we do not think so, since our alert provided an alternative strategy and forced a response. The customized alert may have been flawed if clinicians misunderstood the meaning of the ‘acknowledge’ button. On the other hand, there are data suggesting that clinicians make risk–benefit calculations in deciding whether to accept alerts.27
Thus, it may be that the clinicians considered drug–drug interactions unimportant despite the stronger intervention, or considered the particular interaction we selected for study unimportant given the ability to monitor INRs closely in hospitals. There is no way for us to determine this.
However, regardless of the mechanism, it is clear that the more intrusive alert added nothing beyond the traditional one.