Clinical decision support (CDS) can assist providers in effectively managing patient care. Kuperman et al describe medication-related decision support as a type of CDS that offers clinicians guidance for appropriate prescribing in several areas, including drug–allergy checking; dosing guidance; formulary decision support; duplicate therapy checking; drug–drug interaction (DDI) checking; medication-associated laboratory testing; checking of drug–disease interactions and contraindications; and advanced drug–pregnancy alerting.1
Such guidance can be provided to users through either interruptive alerts or non-interruptive reminders.
When implemented appropriately and accepted by users for prescribing guidance, medication-related decision support can reduce adverse-drug-event (ADE) rates.2–5
Bates et al found that the rate of preventable ADEs decreased from 2.9/1000 patient-days in the absence of medication-related decision support and computerized physician order entry (CPOE) to 1.1/1000 patient-days when medication-related decision support was used in conjunction with CPOE.6
In a study by Weingart et al, an expert panel was convened to evaluate the extent to which patient harm was avoided through the use of medication-related decision support and DDI alerts. Of the 402 avoided ADEs, 12.2% were serious, 31.1% were significant, and 56.7% were minor.5
While these results support the positive effects of medication-related decision support on patient safety, such support has yet to realize its full potential, mainly because of poor acceptance of warnings among users: 50% to 95% of drug safety warnings are over-ridden.7
Several approaches have been applied to increase alert acceptance. As preliminary steps, van der Sijs et al suggest that the number of inappropriate alerts should be reduced and that alerts should be delivered selectively to the appropriate provider. Subsequent steps should address the sensitivity, usefulness, and usability of alerts.7
Shah et al modified a commercial knowledge base to retain only the most clinically relevant and up-to-date knowledge responsible for triggering DDI alerts, decreasing the number of interruptive alerts by 71%. Excluding the highest-severity alerts, which were implemented as hard stops, the remaining interruptive alerts resulted in the cancelation (19%) or modification (48%) of orders, a 67% alert acceptance rate.8
Modifying or canceling the intended order is defined as acceptance of the alert logic, while continuing with an order unchanged after receiving an alert is defined as over-riding the alert. In addition to these methods for decreasing alert fatigue, alerts should incorporate usability and human-factors concepts that present visual and text-based information in a way that supports efficient cognitive processing by the user.9
While human factors have been well studied in the general safety literature, few studies have empirically addressed human factors in the design of clinical information systems. In a viewpoint paper, Karsh et al stressed the importance of drawing on human-factors engineering to improve the usability of health information technology, calling for such research in the domain of patient safety.9
Since information presentation affects user behavior and decision-making, information architecture and graphic interface design demand careful consideration. Design details such as navigation from one window to the next, information placement, font size, information similarity, and perceived credibility can significantly affect the user's actions. Saleem et al affirm that the user interface stands between clinical information systems and the expected outcome of enhanced patient safety.10
Improving the design of the human–computer interface through the application of human-factors engineering has the potential to enable users to optimally utilize these systems.
The current literature provides few tools for assessing the compliance of medication alert interfaces with commonly employed human-factors principles. In a recent study, Phansalkar et al identified 11 key human-factors principles that should be considered when designing medication-related decision support alerts.11
In the current study, we combined these principles to build the Instrument for Evaluating Human-Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA), which assesses the extent to which a given interface design incorporates these human-factors principles. We then gathered preliminary evidence for the validity of the instrument by testing it on three medication-related decision-support systems. This paper reports the development and preliminary validation of I-MeDeSA, an instrument created to help system developers improve alert design and to inform an organization's purchase of a usable electronic medical record (EMR) system.