Research has demonstrated that, in the absence of computerized support, DDI knowledge is poor among health professionals, including both physicians and pharmacists.2
Drug-interaction screening, a CDS tool, has been a mainstay of pharmacy software packages for many years and is now available in many computerized prescriber order-entry systems. These provider- and pharmacist-based DDI CDS systems share many similarities, including their source of drug information: the knowledge bases are created, maintained, and sold by a small number of firms (eg, First DataBank, Wolters Kluwer Health, Cerner, Thomson Reuters Healthcare). However, each firm typically relies on its own classification system for interactions.
Recently, much attention has focused on the features that constitute a useful CDS tool, due, in part, to the Office of the National Coordinator for Health Information Technology's mandate to advance the adoption of health information technology.31
The general consensus from a recent CDS workshop is that information presented by CDS must be pertinent and beneficial and must not overburden the user with irrelevant alerts.31
For instance, drug-interaction screening was specifically mentioned as an area in which the sensitivity and specificity of software settings may have a significant impact on the quality and number of alerts presented. In the current study, the researchers observed a high level of variability in the performance of pharmacy software systems and, more importantly, the failure of some systems to detect well-documented DDIs.
This CDS software evaluation study provides insight into the poor performance of pharmacy systems in alerting pharmacists of clinically significant drug interactions. Additionally, the current results may address broader public safety concerns associated with the manner in which potential DDIs are detected within CDS systems.
Metzger et al evaluated the performance of electronic alerts arising from computerized prescriber order entry in hospitals, further exemplifying potential safety issues associated with the use of CDS systems.32 They also utilized fictitious patient profiles to examine the ability of software systems to detect various medication safety issues. However, Metzger et al examined a more comprehensive set of medication alerts, including but not limited to drug-allergy, drug-diagnosis, and therapeutic-duplication alerts. Despite differences in settings and types of alerts examined, both studies demonstrated significant variability in CDS system performance between and within vendors, as well as the failure of some systems to detect clinically significant medication safety issues.
Suboptimal performance of the pharmacy DDI software systems in this study was confirmed, in part, by the failure of these systems to detect approximately one in seven clinically significant DDIs. The most poorly performing software system had a sensitivity of 0.23, meaning that approximately 77% of the DDIs evaluated would go undetected. Community pharmacies failed to detect approximately one in 12 clinically significant DDIs, while hospital pharmacy systems failed to detect approximately one in four DDIs. In addition, systems in other settings incorrectly categorized approximately one in seven of the DDIs evaluated. Based on the current study, it is evident that additional efforts are needed to improve the ability of pharmacy software systems to detect clinically significant DDIs.
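The sensitivity and specificity figures reported above follow directly from the raw alert counts. The sketch below, using hypothetical counts (the actual numbers of interacting and non-interacting test pairs are not stated in this passage), illustrates the arithmetic: a system that alerts on only 3 of 13 interacting pairs has a sensitivity of about 0.23, leaving roughly 77% of DDIs undetected.

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Compute performance of a DDI screening system.

    Sensitivity = interacting pairs alerted / all interacting pairs.
    Specificity = non-interacting pairs correctly passed without an
    alert / all non-interacting pairs.
    """
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical counts: 3 of 13 clinically significant DDI pairs triggered
# an alert; all 6 non-interacting pairs were correctly left unalerted.
sens, spec = sensitivity_specificity(true_pos=3, false_neg=10,
                                     true_neg=6, false_pos=0)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"undetected={1 - sens:.0%}")
```

A system with high sensitivity but low specificity would catch most real interactions at the cost of many irrelevant alerts, which is the trade-off underlying the alert-fatigue discussion below.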
Prior research on the accuracy and reliability of pharmacy software programs likewise documents the failure of these systems to warn pharmacists of potentially clinically significant DDIs.6
Cavuto et al determined the likelihood that pharmacists would fill prescriptions for two medications whose concurrent use was contraindicated (terfenadine and ketoconazole).33 Of the 48 pharmacies with computerized DDI screening, approximately one-third filled both prescriptions. Hazlet et al evaluated nine community-pharmacy software programs and found that one-third of DDIs went undetected.7
Another study conducted by this research group in Tucson, Arizona, evaluated pharmacy information systems in eight community pharmacies and five hospital pharmacies.6
For community pharmacies, the median sensitivity and specificity were 0.88 and 0.91, respectively; hospital pharmacies had a median sensitivity and specificity of 0.38 and 0.95, respectively.
The current study confirms continued variability in system performance across and within pharmacy organizations. This variability in CDS software system performance may be due, in part, to software customization by its users at the pharmacist, pharmacy, or corporate level. Specifically, clinicians may ‘customize’ the software by suppressing certain categories or tiers of drug-interaction warnings in an attempt to minimize alert fatigue, a phenomenon caused by excessive warnings, including irrelevant, non-significant, or repetitious alerts.11
Alert fatigue may compromise patient safety, especially if the CDS program presents excessive warnings (ie, a low signal-to-noise ratio), causing clinicians to become desensitized to warnings and to over-ride even clinically significant ones.9
The literature contains many studies documenting widespread dissatisfaction with alerts perceived as inappropriate, inconsequential, disruptive, or redundant, as well as high rates (up to 89%) of ‘over-ridden’ DDI alerts.2
Despite well-documented research on issues with CDS program alerts, including the current study results, clinicians continue to face challenges when using this type of software.
The drug knowledge database may also be a source of variability in software performance; the database is integrated into the software and serves as the basis of its drug information. Currently, no universal standard exists for classifying the severity of drug interactions.40
Furthermore, many drug combinations have not been thoroughly studied; case reports, in vitro studies, and retrospective reviews are common in the drug-interaction literature.41
Consequently, research has demonstrated substantial variation among the major drug compendia regarding inclusion of drug interactions and assignment of severity levels to known interactions.40
In a comparison study of four drug-interaction compendia, only 2% of the major DDIs were included in all four compendia.40
The DDI knowledge bases tend to be highly inclusive with respect to drug-interaction alerts, focusing on the scope of drug-interaction alert coverage rather than the clinical significance and estimated rate of occurrence.8
The tendency for DDI knowledge bases to be more inclusive may be due, in part, to perceived potential for legal liability.
Compared with previous relevant research, the methodology employed in the current study offered several advantages. Because the unit of analysis was the individual pharmacy, no central corporate locations were included in the analysis. This design feature enabled the researchers to examine more closely the variability within pharmacy chains. The relatively large number and variety of participating sites improved the generalizability of the results. Furthermore, researchers recorded all data on site, thereby mitigating opportunities for participants to misrepresent results.
There are limitations to this study that need to be considered when interpreting the results. The fictitious patient profile reflected a limited set of available medications; results are likely to vary based on the set of interactions evaluated. In addition, using a single patient profile may have caused many pharmacy software systems to generate additional DDI alerts (eg, amiodarone and clarithromycin) as well as alerts for therapeutic duplication (eg, pravastatin and simvastatin). However, these alerts for non-targeted drug combinations were not documented or analyzed. Some variability in the systems' performance may be due, in part, to software updates, reflecting new clinical evidence of interaction potential, that occurred during the data-collection period (almost 1 year). For example, pharmacies whose site visits occurred earlier in the data-collection period may have been using an older software version than those pharmacies visited in the latter part of the study. Updated software may account for some of the differences in the number of DDIs detected by the various software systems over the study period and, in particular, between the early and later site visits.
The generalizability of the results may be limited because a non-random process was used to recruit pharmacies. All participating pharmacies were affiliated with the university and located within the state of Arizona. Computer software systems installed in other pharmacies may not be comparable; however, many of the pharmacies evaluated were national retail chain pharmacies. In addition, future studies should include objective verification of the pharmacy software vendor, ascertainment of the knowledge base vendor used by the pharmacy software, and the date of its last update.