J Am Med Inform Assoc. Dec 2011; 18(Suppl 1): i62–i72.
Published online Sep 21, 2011. doi: 10.1136/amiajnl-2011-000362
Focus on clinical care and patient safety
Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems—I-MeDeSA
Marianne Zachariah,1 Shobha Phansalkar,1,2,3 Hanna M Seidling,4,5 Pamela M Neri,1 Kathrin M Cresswell,6 Jon Duke,7 Meryl Bloomrosen,8 Lynn A Volk,1 and David W Bates1,2,3
1Partners HealthCare System, Wellesley, Massachusetts, USA
2Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
3Harvard Medical School, Boston, Massachusetts, USA
4Department of Clinical Pharmacology and Pharmacoepidemiology, University of Heidelberg, Heidelberg, Germany
5Division of Medical Information Sciences, University Hospitals of Geneva, Geneva, Switzerland
6eHealth Research Group, Centre for Population Health Sciences, University of Edinburgh, Edinburgh, UK
7Regenstrief Institute, Indianapolis, Indiana, USA
8The American Medical Informatics Association, Bethesda, Maryland, USA
Correspondence to Dr Shobha Phansalkar, Clinical Informatics Research and Development (CIRD), Partners HealthCare System, 93 Worcester Street, 2nd Floor, Wellesley, MA 02481, USA; sphansalkar@partners.org
Received May 9, 2011; Accepted August 22, 2011.
Background
Medication-related decision support can reduce the frequency of preventable adverse drug events. However, the design of current medication alerts often results in alert fatigue and high over-ride rates, thus reducing any potential benefits.
Methods
The authors previously reviewed human-factors principles for relevance to medication-related decision support alerts. In this study, instrument items were developed for assessing the appropriate implementation of these human-factors principles in drug–drug interaction (DDI) alerts. User feedback regarding nine electronic medical records was considered during the development process. Content validity, construct validity through correlation analysis, and inter-rater reliability were assessed.
Results
The final version of the instrument included 26 items associated with nine human-factors principles. Content validation on three systems resulted in the addition of one principle (Corrective Actions) to the instrument and the elimination of eight items. Additionally, the wording of eight items was altered. Correlation analysis suggests a relationship between system age and the performance of DDI alerts on I-MeDeSA, with the newer system scoring better than the older one (p=0.0016). Inter-rater reliability indicated substantial agreement between raters (κ=0.764).
Conclusion
The authors developed and gathered preliminary evidence for the validity of an instrument that measures the appropriate use of human-factors principles in the design and display of DDI alerts. Designers of DDI alerts may use the instrument to improve usability and increase user acceptance of medication alerts, and organizations selecting an electronic medical record may find the instrument helpful in meeting their clinicians' usability needs.
Keywords: Human factors, medication-related decision support, clinical decision support, medication alerts, name, visualization of data and knowledge, designing usable (responsive) resources and systems, developing/using clinical decision support (other than diagnostic) and guideline systems, human–computer interaction and human-centered computing, policy, health IT, innovation, patient safety, decision support, data exchange
Clinical decision support (CDS) can assist providers in effectively managing patient care. Kuperman et al describe medication-related decision support as a type of CDS which offers clinicians guidance for appropriate prescribing in several areas including: drug–allergy checking; dosing guidance; formulary decision support; duplicate therapy checking; drug–drug interaction (DDI) checking; medication-associated laboratory testing; checking of drug–disease interactions and contraindications; and advanced drug–pregnancy alerting.1 Such guidance can be provided to users through either interruptive alerts or non-interruptive reminders.
When implemented appropriately and accepted by users for prescribing guidance, medication-related decision support can reduce adverse-drug-event (ADE) rates.2–5 Bates et al found that the rate of preventable ADEs decreased from 2.9/1000 patient-days in the absence of medication-related decision support and computerized physician order entry to 1.1/1000 patient-days in the presence of medication-related decision support used in conjunction with computerized physician order entry.6 In a study by Weingart et al, an expert panel was convened to evaluate the extent to which patient harm was avoided through the use of medication-related decision support and DDI alerts. Of the 402 avoided ADEs, 12.2% were serious, 31.1% were significant, and 56.7% were minor.5 While these results support the positive effects of medication-related decision support on patient safety, it has yet to realize its full potential. This is mainly due to the poor acceptance of warnings among users, as 50% to 95% of drug safety warnings are over-ridden.7
Several approaches have been applied to increase alert acceptance. As preliminary steps, van der Sijs et al suggest that the number of inappropriate alerts should be reduced and that alerts should be selectively delivered to the appropriate provider. Subsequent steps should address the sensitivity, usefulness, and usability of alerts.7 Shah et al modified a commercial knowledge base to reflect only the most clinically relevant and up-to-date knowledge responsible for triggering DDI alerts, and they decreased the number of interruptive alerts by 71%. Excluding alerts of the highest severity that were implemented as hard-stops, the remaining interruptive alerts resulted in the cancelation (19%) and modification (48%) of orders, indicating a 67% alert acceptance rate.8 Modifying or canceling the intended order is defined as an acceptance of alert logic, while continuing with an order unchanged after receiving an alert is defined as an over-riding of the alert. In addition to the above-mentioned methods for decreasing alert fatigue, usability and human-factors concepts that facilitate the appropriate use of visual and text-based information within alerts should be utilized for efficient cognitive processing by the user.9
While human factors have been well studied in the general safety literature, few studies have addressed human factors and the design of clinical information systems empirically. In a viewpoint paper, Karsh et al stressed the importance of drawing on human-factors engineering to improve the usability of health information technology, calling for such research in the domain of patient safety.9 Since information presentation affects user behavior and decision-making, information architecture and graphic interface design demand careful consideration. Design details such as navigation from one window to the next, information placement, font size, information similarity, and perceived credibility can significantly affect the user's actions. Saleem et al affirm that the user interface stands between clinical information systems and the expected outcome of enhanced patient safety.10 Improving the design of the human–computer interface through the application of human-factors engineering has the potential to enable users to optimally utilize these systems.
The current literature provides little in the way of tools for assessing the compliance of medication alert design interfaces with commonly employed human-factors principles. In a recent study, Phansalkar et al identified 11 key human-factors principles that should be considered when designing medication-related decision support alerts.11 In the current study, we combined these to build the Instrument for Evaluating Human-Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA) in order to assess the extent to which a given interface design incorporates these human-factors principles. We then gathered preliminary evidence for the validity of the instrument by testing it on three medication-related decision-support systems. This paper discusses the results of the development and preliminary validation of I-MeDeSA, an instrument created for system developers to improve alert design and to inform an organization's purchase of a usable electronic medical record (EMR) system.
Phansalkar et al identified key human-factors principles for the design and implementation of medication-related decision support alerts.11 Eleven principles were determined to be important: Alarm Philosophy, False Alarms, Placement, Visibility, Prioritization, Color, Learnability and Confusability, Textual Information, Habituation, Mental Models, and Proximity of Task Components Being Displayed (table 1).
Table 1
Summary of the 11 key human-factors principles for use in medication-related decision-support alerts, compiled by Phansalkar et al11
Literature review
A systematic review of the medical informatics literature was conducted to identify any instruments available for quantitatively measuring compliance of clinical information systems with human-factors principles. The PubMed/Medline database was searched for articles published from July 2000 to July 2010. The query consisted of the following criteria: terms related to clinical information systems (medical records system, medical order entry systems, computerized physician order entry) along with terms related to human-factors principles (human-factors engineering, human–interface interactions, usability), and terms related to instruments (survey, survey method, questionnaire). The search was restricted to Meta-Analyses, Practice Guidelines, Randomized-Controlled Trials, or Reviews that were published with abstracts and in English. The search strategy is outlined in figure 1.
Figure 1
Search criteria and results.
The query combining all three search criteria described above returned 159 articles. All abstracts were reviewed by one reviewer (MZ), and 29 articles were selected based on their relevance to the employment of human-factors principles in the design of clinical information systems. One article identified through a bibliographical search of the selected 29 articles was added. Two reviewers (MZ, SP) independently evaluated the resulting 30 articles to assess whether any of the studies employed an instrument for measuring compliance with human-factors principles in clinical information system design. No such instrument was identified in the existing literature.
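For readers who wish to reproduce a date-restricted PubMed search of this kind, the sketch below shows one way to issue it programmatically with Biopython's Entrez module. This is a minimal sketch: the Boolean query is abbreviated (the full term list appears in figure 1), and the email address is a placeholder.

```python
from Bio import Entrez

# NCBI requires an email address for E-utilities requests (placeholder here).
Entrez.email = "your.name@example.org"

# Abbreviated version of the query in figure 1: clinical information system
# terms AND human-factors terms AND instrument terms.
query = (
    '("medical order entry systems" OR "medical records systems, computerized") '
    'AND ("human engineering" OR usability) '
    'AND (questionnaire OR survey)'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",     # restrict by publication date
    mindate="2000/07",
    maxdate="2010/07",
    retmax=200,
)
record = Entrez.read(handle)
print(record["Count"], "articles found")  # the authors' full query returned 159
```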
Instrument development
In this study, we developed an instrument to measure the implementation of human-factors principles in the design of medication-related decision support relating to DDI alerts. DDI alerts were chosen as the focus, since they are among the most commonly triggered alerts in medication-related decision-support systems. For this purpose, the quantifiable human-factors principles outlined in the previous study by Phansalkar et al were compiled, and several items were created for assessing these principles. This preliminary version of the instrument was improved and adapted iteratively. First, the instrument was distributed for content validation to three reviewers (KC, PN, and HS) with expertise in the use of human-factors principles for the evaluation of clinical information systems. These experts were asked to review the items for relevance and clarity. Items were eliminated or modified based on their feedback.
To assess the generalizability of the items to DDI alerts of various designs across different EMR systems, three reviewers (JD, HS, MZ) tested the instrument on DDI alerts generated in EMRs used at their own organizations (ie, (i) Gopher 5.25, an ambulatory EMR used at Regenstrief Institute, Indianapolis, Indiana; (ii) AiDKlinik 1.9.2, a prescribing platform integrated into an ambulatory EMR used at Heidelberg University Hospital, Heidelberg, Germany; and (iii) the Longitudinal Medical Record (LMR) 8.2, an ambulatory EMR used at Partners Healthcare, Boston, Massachusetts). While I-MeDeSA was tested exclusively on outpatient EMRs, the instrument is also applicable to inpatient EMRs, as indicated by a related study that uses the instrument to evaluate the usability of DDI alerts from various inpatient and outpatient EMRs used across the USA.16 From this study, the authors have observed that the design of inpatient versus outpatient DDI alerts for vendor or home-grown products is similar, and the inclusion of DDI alerts from inpatient decision-support systems would likely result in similar findings.
Reviewers evaluated each unique DDI alert design displayed in their organizations' systems. To facilitate the evaluation, reviewers were given exemplar DDIs expected to trigger alerts of various severity levels; these were selected from the drug knowledge database at Partners HealthCare in fall 2010. The specific DDIs provided to the reviewers to examine in their respective systems are presented in table 2.
Table 2
Suggested drug interactions provided to reviewers
If these DDIs failed to trigger an alert, reviewers were instructed to use alternate DDIs relevant to their systems in order to capture all severity levels of alerts. Both AiDKlinik and Gopher required alternate and unique DDIs to trigger alerts of various levels available in each system. Feedback from this assessment was used to modify the items so they would be applicable to various DDI alert designs across EMRs.
Next, we assessed inter-rater reliability and construct validity through correlation analysis. Inter-rater reliability was assessed using Cohen's κ. Two reviewers (JD, HS) independently evaluated the same DDI alerts from Gopher 5.25. A preliminary evaluation for construct validity was performed by correlating the performance of DDI alerts on I-MeDeSA to the age of the medication decision-support system from which the alerts came. Given that the usability of medication decision support alerts has gained attention only in recent years, we hypothesized that older systems would perform more poorly than newer systems when DDI alerts are evaluated by I-MeDeSA. DDI alerts from LMR (2010) and Gopher (1993) were compared in order to establish a correlation between system age and performance on I-MeDeSA. The scores used for this analysis were the final scores assigned to these systems, as agreed upon by two reviewers (HS, MZ).
The final version of I-MeDeSA incorporated changes made as a result of the above methods for analysis, as well as results from a related study that collected user feedback about various EMRs employed by multiple institutions across the USA.17 User feedback was used to provide recommendations for the preferred design of DDI alerts and medication-related decision-support systems more generally.
Employing the I-MeDeSA instrument
The instrument has been created for assessing the design of medication-related decision support alerts. Some medication-related decision-support systems stratify medication alerts based on the severity of potential patient harm.18 If a system presents alerts of various severities and the designs of these stratified alerts are unique, each design requires evaluation, and scores within each item are averaged. If the design of stratified alerts is the same, or alerts are not stratified, only a single design needs to be assessed. Each item is scored on a binary basis: a score of 0 is equivalent to answering ‘No’ for the absence of the characteristic, and a score of 1 is equivalent to answering ‘Yes’ for its presence. Pilot testing showed that the instrument required approximately 20 min to complete per system.
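As a concrete illustration of this scoring-and-averaging rule, the following sketch applies it to hypothetical binary item scores for a system with three severity-stratified alert designs; the real instrument has 26 items, not four, and the scores shown are invented for illustration.

```python
from statistics import mean

# Hypothetical binary item scores (1 = 'Yes', 0 = 'No') for each unique
# alert design in a system that stratifies alerts into three severity tiers.
scores_by_design = {
    "high":   [1, 1, 0, 1],
    "medium": [1, 0, 0, 1],
    "low":    [0, 0, 0, 1],
}

# Evaluate each unique design, then average the scores within each item.
n_items = 4
item_scores = [mean(scores[i] for scores in scores_by_design.values())
               for i in range(n_items)]
system_score = sum(item_scores)   # overall score for this hypothetical system
print(item_scores, system_score)  # item averages across the three designs
```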
Eight of the 11 human-factors principles outlined in the Phansalkar et al review were selected for incorporation into the instrument because they were quantifiable. Three principles (False Alarms, Habituation, and Mental Models) were excluded because they are difficult to quantify given the data typically available, even though they may be important for determining user acceptance. The initial instrument therefore contained eight principles and 34 items. During the development of I-MeDeSA, the instrument was assessed for content validity and inter-rater reliability, and a preliminary evaluation of construct validity was performed. In testing I-MeDeSA on DDI alerts from three unique EMRs, one new human-factors principle was created (Corrective Actions), and eight items across multiple principles were eliminated, resulting in nine principles and 26 items. The formal instrument is available upon request. Detailed information on validation is provided below.
Validation
Upon completion of the original draft of I-MeDeSA, the instrument was distributed to reviewers (KC, PN, and HS) with expertise in the use of human factors to assess its content validity. After receiving their feedback, changes were made to each principle. Content validation is summarized in table 3. We also provide a detailed explanation of the Corrective Actions principle, since it was not described in the earlier Phansalkar et al study.11
Table 3
Summary of content validation and the final list of items contained within Instrument for Evaluating Human-Factors Principles in Medication-Related Decision Support Alerts
Corrective actions
Use of this principle serves to streamline the user's workflow in responding to alerts. The goal is to minimize the number of steps the user must take in dealing with an alert, as well as to efficiently capture the user's response to the alert, or intended action. Typically, a user will receive a DDI alert after attempting to place an order for a drug that interacts with an existing drug. As a next step, the user processes the nature of the interaction and weighs the risks and benefits to the patient of continuing with the order or canceling it. At this point, the user usually acknowledges having gone through the assessment process, indicates an acceptance or rejection of the alert, and proceeds with the intended action. The actions may be: to continue with the order unadjusted; to continue with the order but adjust the dose of one of the drugs; to continue with the order but switch to a different drug within the same class; to discontinue an existing drug and place the order for the new drug; or to cancel the order for the new drug and keep the existing drug. Systems that consider human-factors principles and usability will provide the user with an efficient means of carrying out any of these steps. The items within the Corrective Actions principle capture the use of such efficient methods. The first item assesses the availability of a simple corrective action: through one click located within the alert, the user is able to simultaneously acknowledge having seen the alert and convey an acceptance or rejection of it with respect to their patient. Given the volume of alerts received by clinicians each day and feedback received from users in a related study,17 such a streamlined method of responding will increase the usability and efficiency of alerts.
More sophisticated systems will provide intelligent corrective actions within the alert. The second item within this principle relates to the interruptive nature of alerts and the fallibility of humans. Intelligent corrective actions address these issues by assisting the user in efficiently accepting or rejecting an alert and carrying out recommended actions. For example, an alert may read, ‘Reduce the warfarin dose by 33–50% and follow patient closely.’ An alert using intelligent corrective actions would contain a response such as, ‘Continue with warfarin order AND reduce dose by 33–50%.’ Selecting this option would simultaneously accept the alert and direct the user back to the medication ordering window, where the dose can be adjusted appropriately. In addition to automatically linking the user to the ordering screen for adjusting a dose, the system would be capable of automatically discontinuing an existing drug directly from the alert, as well as replacing an order with a different drug within the same class (with medication decision support re-evaluating interactions against the newly selected drug). Given that clinicians are already juggling the care of multiple patients, and given that alerts by nature interrupt the user's workflow, such intelligent corrective actions are important: the user does not have to remember to return to the system, after dealing with all alerts, to follow through with intended actions. Finally, there would ideally be a fail-safe mechanism, the third item in this principle, in which the system monitors whether or not the user followed through with the intended action and notifies the user if not.
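To make the workflow above concrete, here is a minimal sketch of how the response options and single-step acknowledgment might be modeled in code. The class and function names are illustrative assumptions, not taken from the instrument or from any of the evaluated systems.

```python
from enum import Enum, auto

class AlertResponse(Enum):
    """One-click responses to a DDI alert (names are hypothetical)."""
    CONTINUE_UNCHANGED   = auto()  # over-ride: proceed with the order as-is
    CONTINUE_ADJUST_DOSE = auto()  # accept: return to the ordering screen
    SWITCH_WITHIN_CLASS  = auto()  # accept: re-run interaction checking
    DISCONTINUE_EXISTING = auto()  # accept: stop the interacting drug
    CANCEL_NEW_ORDER     = auto()  # accept: keep the existing drug

def handle_response(response: AlertResponse) -> tuple[bool, bool]:
    """Record acknowledgment and acceptance/rejection in a single step.

    Returns (accepted, needs_follow_up); a fail-safe system would later
    verify that any needed follow-up action was actually completed.
    """
    accepted = response is not AlertResponse.CONTINUE_UNCHANGED
    needs_follow_up = response in (AlertResponse.CONTINUE_ADJUST_DOSE,
                                   AlertResponse.SWITCH_WITHIN_CLASS)
    return accepted, needs_follow_up

accepted, follow_up = handle_response(AlertResponse.CONTINUE_ADJUST_DOSE)
```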
Instrument applicability
To ensure the applicability of the instrument to a variety of DDI alert designs, the modified instrument was distributed to three reviewers (JD, HS, and MZ): one with expertise in the field of medical informatics, another in the use of human-factors principles, and the third in the usability of CDS systems. Reviewers tested the instrument on the DDI alerts available in the EMRs used at their own institutions. The design of DDI alerts was different for each system; see table 4 for a brief description of the DDI alerts from each system.
Table 4
Description of drug–drug interaction alerts from three electronic medical records
After reviewing the evaluations and associated feedback from reviewers, the following guidelines for scoring the instrument items were established to maximize applicability to different systems (two of these rules are translated into a short code sketch after the list):
  • Critical Information Layout (Item 2iv): The primary focus of this item is the layout of the information presented within the alert. Critical information provided in a DDI alert must include: (1) the interacting drugs, (2) the risk to the patient, and (3) the recommended action. Each component should be labeled appropriately and occupy only a few lines, at the first point of alerting; the information should not be presented together in one paragraph. If any of the critical information components require the user to click through additional screens or to sort through a drug monograph, this item is scored as 0. Systems score higher if they use active alerts that force users to see alert information, rather than passive alerts in which the user must seek out alert information through multiple clicks, effectively allowing the user to avoid the information entirely.
  • Visibility (Item 3iii): Any mixture of upper and lower case shall be scored as 1; exclusive use of either upper or lower case shall be scored as 0.
  • Distinguishing Alert Features (Item 6i): To achieve a score of 1, alerts must possess unique visual characteristics, such as color, shapes, and font. The use of a signal word to identify the severity of the alert is not considered to be a visual characteristic.
  • Text-Based Information (Items 7i–7iv): These items evaluate the textual information presented at the first point of alerting. Any information contained within expanded details or a monograph shall be disregarded. If the user is initially presented with only one component of an alert, for example, only the type or severity, and they must click through additional windows or read through a monograph to acquire the required information, the corresponding items shall be scored as 0.
  • Corrective Actions (Item 9ia): If there is at least one corrective action presented in the alert, such as ‘discontinue existing drug,’ this item shall receive a score of 1.
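Two of these guidelines are mechanical enough to express directly as code. The sketch below implements the Visibility and Corrective Actions rules as binary scoring functions; it is an illustration of the scoring logic only, not part of the instrument itself.

```python
def score_visibility_case(alert_text: str) -> int:
    """Item 3iii: any mixture of upper and lower case scores 1;
    exclusive use of either upper or lower case scores 0."""
    has_upper = any(c.isupper() for c in alert_text)
    has_lower = any(c.islower() for c in alert_text)
    return int(has_upper and has_lower)

def score_corrective_actions(actions: list[str]) -> int:
    """Item 9ia: at least one corrective action presented in the alert scores 1."""
    return int(len(actions) >= 1)

assert score_visibility_case("Warning: serious interaction") == 1
assert score_visibility_case("WARNING: SERIOUS INTERACTION") == 0
assert score_corrective_actions(["discontinue existing drug"]) == 1
```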
The patient safety literature has indicated the importance of providing users with information on how alert severities are assessed and on the general ‘alert philosophy’ of the system. Such information was absent across the systems evaluated, leaving users without a transparent means of assessing severity ratings, the number of alert levels, and other general information. Users have expressed the desire for such a catalog in order to increase DDI alert usability.17 The authors therefore felt the presence of item 1i was an important criterion to include.
Inter-rater reliability
Two reviewers (JD, HS) independently applied I-MeDeSA to DDI alerts of Gopher 5.25. Their ratings were compared, and Cohen's κ was calculated (κ=0.764, 95% CI 0.519 to 0.999). The calculated κ value indicates substantial agreement between raters.
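For reference, Cohen's κ on two raters' binary item scores can be computed as shown below; the ratings here are hypothetical stand-ins over a subset of items, not the reviewers' actual scores.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary scores from two raters over a subset of items
# (the study reports kappa = 0.764, 95% CI 0.519 to 0.999 over all items).
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1]

print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.3f}")
```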
Correlation analysis
Preliminary evidence for construct validity was gathered through a correlation analysis. Given that the usability of medication-related decision support alerts has gained attention only in recent years, we hypothesized that older systems would perform more poorly than newer systems when their DDI alerts are evaluated by I-MeDeSA. DDI alerts from LMR (2010) and Gopher (1993) were compared in order to examine the relationship between system age and performance on I-MeDeSA. The analysis supports the hypothesized relationship: the newer LMR (2010) performed better than the older Gopher (1993) (p=0.0016). However, this is only preliminary evidence of a known-groups difference; a larger sample of systems is necessary to establish a true correlation between system age and the performance of DDI alerts on I-MeDeSA.
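The paper does not state which statistical test produced this p value. For paired binary scores on the same 26 items, one standard choice would be McNemar's exact test on the discordant item counts; the sketch below uses hypothetical counts purely for illustration.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical 2x2 table of item-level outcomes over the 26 items:
# rows = LMR pass/fail, columns = Gopher pass/fail.
table = np.array([[5, 15],   # LMR pass:  Gopher pass / Gopher fail
                  [1,  5]])  # LMR fail:  Gopher pass / Gopher fail

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"p = {result.pvalue:.4f}")
```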
Discussion
Although there has been discussion of the importance of human factors in medication-related decision support, relatively little empirical research has been done. Human-factors studies in this domain have largely been qualitative, and instruments for assessing how human-factors principles have been incorporated have not been widely available. We therefore developed an instrument that evaluates the usability of DDI alerts within medication-related decision-support systems, combining previous research on human-factors principles with current user feedback; this makes I-MeDeSA a novel tool.
Eight of the nine human-factors principles (Alarm Philosophy, Placement, Visibility, Prioritization, Color, Learnability and Confusability, Text-Based Information, and Proximity of Task Components Being Displayed) selected for use in I-MeDeSA were originally reported on in a study by Phansalkar et al,11 which compiled information on human-factors principles and alert design and display from a wide variety of robust sources. A ninth principle was created from a combination of other items and user feedback collected in a related study.17 Twenty-six items were created from the nine selected human-factors principles, and preliminary evidence for the validity of the instrument was gathered.
While the assessments of content validity and inter-rater reliability were relatively straightforward, construct validation proved challenging, given the limited number of decision-support systems we could assess. Ideally, a larger sample of EMRs would allow analyses of construct validity (eg, factor analysis) and internal-consistency reliability. However, factor analysis requires that I-MeDeSA be tested on two to 10 times as many systems as there are items, that is, 52–260 decision-support systems. Additionally, Cronbach's α requires at least 20 observations per item to produce a reliable assessment of internal consistency. As part of a larger study, we have assessed the use of human-factors principles in 14 medication decision-support systems (both inpatient and outpatient) used throughout the USA.16 We felt it would be inappropriate to test I-MeDeSA on all 14 systems; rather, we selected approximately 20% of them as representative examples on which to test the instrument.
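Had the sample size permitted, internal consistency could have been assessed with Cronbach's α, defined for k items as α = k/(k−1) · (1 − Σσᵢ²/σₜ²), where σᵢ² is the variance of item i and σₜ² the variance of the total scores. A minimal sketch, assuming a systems × items matrix of binary scores with invented values:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (systems x items) matrix of binary item scores."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for 5 systems on 4 items (far below the ~20
# observations per item the text notes are needed for a reliable estimate).
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 0, 0, 1],
                   [1, 1, 1, 1],
                   [0, 0, 1, 0]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```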
Additional support for the validity of I-MeDeSA comes from Seidling et al, who confirmed that the display of an alert and the textual information within it are related to the user's acceptance of the alert.19 Again, alert acceptance is defined as the modification or cancelation of an order upon receiving an alert. Alert display had an OR of 4.23 (95% CI 1.78 to 10.08, p=0.0011) for canceled orders versus over-ridden alerts, and an OR of 4.78 (95% CI 4.63 to 4.93, p<0.0001) for modified orders versus over-ridden alerts; textual information had an OR of 1.22 (95% CI 1.15 to 1.30, p<0.0001) for modified orders versus over-ridden alerts. The content of all nine items associated with alert display can be found within I-MeDeSA.
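For readers unfamiliar with these statistics, an odds ratio and its Wald confidence interval can be derived from a 2×2 table as in the sketch below; the counts shown are hypothetical and do not come from the Seidling et al study.

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR and 95% CI for a 2x2 table:
    a, b = outcome present/absent with the design feature;
    c, d = outcome present/absent without it."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: canceled vs over-ridden orders, with/without the feature.
print(odds_ratio_with_ci(40, 60, 15, 85))
```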
Our hope is that designers with little or no knowledge of human-factors principles may use this instrument during early stages of development to help construct highly usable DDI alerts. We believe the instrument will be especially useful to designers with little access to users who can test the product for usability. Additionally, I-MeDeSA may assist institutions and individual clinical practices in selecting a suitable medication-related decision support vendor product.
I-MeDeSA has been tested only on DDI alerts, not on other types of medication alerts, such as drug–allergy and duplicate-therapy alerts. To continue increasing the usability of medication-related decision-support systems and enhancing patient safety, similar work should be carried out in these other areas.
I-MeDeSA assesses the usability of DDI alerts used by outpatient EMRs with medication-related decision-support functionality. While only three systems were used to test the generalizability of I-MeDeSA, these three systems presented a variety of DDI alert designs.
The scope of the study limited our evaluation to outpatient EMRs, but we believe these findings are relevant to EMRs used in inpatient care, as well as to other types of medication-related decision support beyond DDIs. Future research will focus on validating the instrument on EMRs used in different settings, including inpatient systems and personal health records, and on other types of medication decision-support alerts. Expanding the number of EMRs evaluated will also allow us to perform more stringent analyses, such as factor analysis, and to obtain a reliable assessment of the internal consistency of the instrument.
Conclusions
We developed I-MeDeSA to assess the use of human-factors principles in the design of DDI alerts. In the future, further validation should be performed and I-MeDeSA's use should be evaluated for other types of decision support. We hope that implementation of I-MeDeSA will assist in the system-development process and increase the usability and effectiveness of DDI alerts.
Acknowledgments
The authors would like to thank D Seger and J Warvel, for their assistance in supporting the evaluation of selected EMRs, and S Lipsitz and EF Cook, for their invaluable advice and support on this study.
Footnotes
Funding: This study was sponsored by the Center for Education and Research on Therapeutics on Health Information Technology grant (Principal Investigator: DWB), Grant # U18HS016970 from the Agency for Healthcare Research and Quality.
Competing interests: None.
Provenance and peer review: Not commissioned; externally peer reviewed.
1. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14:29–40.
2. Kuperman GJ, Teich JM, Gandhi TK, et al. Patient safety and computerized medication ordering at Brigham and Women's Hospital. Jt Comm J Qual Improv 2001;27:509–21.
3. McCoy AB, Waitman LR, Gadd CS, et al. A computerized provider order entry intervention for medication safety during acute kidney injury: a quality improvement report. Am J Kidney Dis 2010;56:832–41.
4. Rommers MK, Zegers MH, De Clercq PA, et al. Development of a computerised alert system, ADEAS, to identify patients at risk for an adverse drug event. Qual Saf Health Care 2010;19:1–6.
5. Weingart SN, Simchowitz B, Padolsky H, et al. An empirical model to estimate the potential impact of medication safety alerts on patient safety, health care utilization, and cost in ambulatory care. Arch Intern Med 2009;169:1465–73.
6. Bates DW, Teich JM, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6:313–21.
7. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13:138–47.
8. Shah NR, Seger AC, Seger DL, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc 2006;13:5–11.
9. Karsh BT, Weinger MB, Abbott PA, et al. Health information technology: fallacies and sober realities. J Am Med Inform Assoc 2010;17:617–23.
10. Saleem JJ, Russ AL, Sanderson P, et al. Current challenges and opportunities for better integration of human factors research with development of clinical information systems. Yearb Med Inform 2009:48–58.
11. Phansalkar S, Edworthy J, Hellier E, et al. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc 2010;17:493–501.
12. Bliss JP, Gilson RD, Deaton JE. Human probability matching behaviour in response to alarms of varying reliability. Ergonomics 1995;38:2300–12.
13. Bliss JP, Dunn MC. Behavioural implications of alarm mistrust as a function of task workload. Ergonomics 2000;43:1283–300.
14. Wickens C, Carswell C. The proximity compatibility principle: its psychological foundation and relevance to display design. Hum Factors 1995;37:473–94.
15. Riley M, Cochran D, Ballard J. Investigation into the preferred shapes for warning labels. Hum Factors 1982;24:737–42.
16. Zachariah M, Phansalkar S, Seidling HM, et al. Evaluation of medication alerts for compliance with human factors principles: a multi-center study. AMIA Annu Symp Proc 2011.
17. Phansalkar S, Zachariah M, Cresswell K, et al. A qualitative approach to assessing medication-related alerting in electronic medical records. Society of General Internal Medicine Annual Meeting, 2011.
18. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug–drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009;16:40–6.
19. Seidling HM, Phansalkar S, Seger DL, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011;18:479–84.