

JAMIA - The Journal of the American Medical Informatics Association
J Am Med Inform Assoc. 2011 May-Jun; 18(3): 232–242.
Published online 2011 March 17. doi: 10.1136/amiajnl-2011-000113
PMCID: PMC3078666

Development and evaluation of a comprehensive clinical decision support taxonomy: comparison of front-end tools in commercial and internally developed electronic health record systems


Background

Clinical decision support (CDS) is a valuable tool for improving healthcare quality and lowering costs. However, there is no comprehensive taxonomy of types of CDS and there has been limited research on the availability of various CDS tools across current electronic health record (EHR) systems.

Objective

To develop and validate a taxonomy of front-end CDS tools and to assess support for these tools in major commercial and internally developed EHRs.

Study design and methods

We used a modified Delphi approach with a panel of 11 decision support experts to develop a taxonomy of 53 front-end CDS tools. Based on this taxonomy, a survey on CDS tools was sent to a purposive sample of commercial EHR vendors (n=9) and leading healthcare institutions with internally developed state-of-the-art EHRs (n=4).

Results

Responses were received from all healthcare institutions and 7 of 9 EHR vendors (response rate: 85%). All 53 types of CDS tools identified in the taxonomy were found in at least one surveyed EHR system, but only 8 were present in all EHRs. Medication dosing support and order facilitators were the most commonly available classes of decision support, while expert systems (eg, diagnostic decision support, ventilator management suggestions) were the least common.

Conclusions

We developed and validated a comprehensive taxonomy of front-end CDS tools. A subsequent survey of commercial EHR vendors and leading healthcare institutions revealed a small core set of common CDS tools, but identified significant variability in the remainder of clinical decision support content.

Keywords: Developing/using computerized provider order entry, knowledge representations, classical experimental and quasi-experimental study methods (lab and field), designing usable (responsive) resources and systems, statistical analysis of large datasets, discovery, text and data mining methods, automated learning, human-computer interaction and human-centered computing, qualitative/ethnographic field study, clinical decision support, decision support, biomedical informatics, developing and refining EHR data standards (including image standards), controlled terminologies and vocabularies, measuring/improving patient safety and reducing medical errors, machine learning, electronic health records, meaningful use

Introduction

Much of the potential value of electronic health record (EHR) systems comes from clinical decision support (CDS) tools that can help make care safer, more efficient, and more cost effective.1 2 CDS systems are designed to improve clinician decision-making at the point of care. Examples include health maintenance reminders,3 drug–drug interaction checking,4 dose adjustment,5 and order sets.6 When well designed and implemented, these interventions can help improve care quality and reduce medical errors.1 2 7–10

Although extensive research on ‘internally developed’ CDS has demonstrated the power of CDS to accomplish these goals, much of this research comes from four sites with internally developed EHRs.11 For the most part, the decision support in commercial EHR systems remains understudied. In addition, commercial EHRs have previously been found to be variable in their clinical decision support capabilities.12 This is concerning given that most hospitals and physician practices are likely to purchase a commercial EHR rather than invest the substantial time and resources required to develop a custom EHR system.

Federal meaningful use requirements mandate that hospitals and eligible providers utilize certified EHRs that implement clinical decision support in order to qualify for federal incentives.13 Specifically, the stage 1 objective for achieving meaningful use, as defined by the Centers for Medicare and Medicaid Services, is to “implement one clinical decision support rule relevant to specialty or high clinical priority along with the ability to track compliance with the rule.”14 This benchmark is expected to expand dramatically in stage 2 (2013) and stage 3 (2015) requirements as EHR use becomes more widespread.

Given the limited availability of CDS in routine clinical use,15 the impending deadlines for increased CDS use outlined in ‘meaningful use’ guidelines, and the fact that many institutions will likely purchase commercially developed CDS systems, it is imperative to develop a nuanced understanding of existing CDS tools and to determine the extent to which they have been incorporated into currently available commercially developed EHR systems. The goal of this project was to develop a comprehensive taxonomy of front-end CDS tools. We used this taxonomy to create a survey to study the availability, as designed, of these CDS tools at a purposive sample of leading healthcare institutions with internally developed EHRs and in commercially available EHR products.


Front-end tools versus back-end system capabilities

The front-end CDS tools available to EHR users depend on the EHR's underlying back-end system capabilities. We define back-end system capabilities as discrete system capabilities such as alert triggers, available data input elements, and end-user notification methods,16 while front-end CDS tools are the intervention types available to end-users, created using specific clinical knowledge bases and application logic. Consider, for example, the domain of medication-related decision support. Front-end CDS tools might include drug–drug interaction checking, weight-based dosing, or renal dose adjustment. Back-end system capabilities that would support such tools might include a trigger in the information system that fires when a new medication is ordered; the ability to access the medication being ordered, the patient's current medications, weight, and glomerular filtration rate; the ability to perform mathematical calculations; and the ability to display an alert with actionable choices to the end-user.

As a specific example, consider the case of weight-based dosing, a type of front-end CDS tool, as defined above, which allows providers to calculate appropriate drug dosages based on patient weight. In order to implement this front-end tool, several back-end system capabilities must be present, including triggers, input data elements, interventions, and offered choices.16 First, a trigger (in this case, the ordering of a medication) is necessary. After the tool is initiated by the trigger, the information system retrieves necessary input data elements including patient weight, medication, and weight-based dosage guideline information. An intervention is then displayed in the form of text guidelines, a weight-based dosage calculator, or an automated dose recommendation. Finally, depending on the system, the user may be offered the choice to adjust the dose as needed and place the order or may be limited to certain default dose choices. Thus, a wide range of back-end system capabilities may act to support a unique front-end tool.
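The trigger, input data, intervention, and offered-choices chain described above can be sketched in a few lines of Python. This is a hypothetical illustration only, not code from any surveyed system: the function name, data structures, drug name, and dose-per-kg figure are all invented for the example and carry no clinical meaning.

```python
# Hypothetical sketch of back-end capabilities combining into a weight-based
# dosing front-end tool. All names and the dosing rule are illustrative only.

def on_medication_ordered(patient, order, guidelines):
    """Trigger handler: fires when a new medication is ordered."""
    rule = guidelines.get(order["drug"])      # input data: dosage guideline
    if rule is None:
        return None                           # no weight-based rule applies
    # Input data elements: patient weight and the guideline's mg/kg figure.
    dose_mg = round(rule["mg_per_kg"] * patient["weight_kg"], 1)
    # Intervention: a displayed recommendation with actionable choices.
    return {
        "message": (f"Suggested dose for {order['drug']}: {dose_mg} mg "
                    f"({rule['mg_per_kg']} mg/kg x {patient['weight_kg']} kg)"),
        "suggested_dose_mg": dose_mg,
        "choices": ["accept", "adjust", "override"],  # offered choices
    }

# Usage with made-up data: a 70 kg patient and a fictional 5 mg/kg drug.
guidelines = {"examplamycin": {"mg_per_kg": 5.0}}
patient = {"weight_kg": 70.0}
alert = on_medication_ordered(patient, {"drug": "examplamycin"}, guidelines)
```

Each piece of the function maps to one back-end capability: the call itself is the trigger, the dictionary lookups are the input data elements, the returned message is the intervention, and the `choices` list represents the offered choices.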

Review of taxonomies

A number of taxonomies have been proposed to describe CDS systems; these classification systems are summarized in table 1.1 2 16–21 Most, with the exception of those of Wang et al and Garg et al, describe the back-end system capabilities of CDS systems (eg, triggers, data input elements) rather than front-end tools.

Table 1
Clinical decision support (CDS) taxonomies

Previously, we developed a taxonomy of clinical decision support that could be used to categorize discrete back-end system capabilities of clinical information systems and CDS systems.16 In a separate study, we examined the availability of these capabilities within several major commercial EHR systems.12 This study was limited to the back-end system capabilities present in the information system and explicitly excluded the front-end tools available for use by providers. We found that the back-end system capabilities of nine commercial systems were highly variable: the most comprehensive system had 41 of 42 back-end system capabilities, while the least comprehensive had only 24 of 42.

Although we believe this characterization was useful, we have found that, in practice, many healthcare organizations do not directly work with the back-end system capabilities of their EHR to implement CDS de novo, but rather use front-end CDS tools and content which they purchase ‘off-the-shelf’ from their EHR vendor or a clinical decision support content vendor. Therefore, we expanded upon our previous research on back-end system capabilities with the goal of fully characterizing available front-end decision support tools across a wide range of clinical information systems, including both commercially available and internally developed EHR systems.

Design versus implementation

In addition to assessing both back-end CDS system capabilities and front-end CDS tools, it is also valuable to differentiate between EHR system features as designed and the available tools as implemented or used. Although a particular type of clinical decision support may be possible in a given system, whether it is actually available to end-users can vary widely depending on how the system is implemented. Organizations may decline to purchase optional CDS modules from their EHR vendor, or may turn off features that do come with their system. In addition, research has shown that the same commercial systems can be used with variable results. For example, the Leapfrog Group conducted a test of computerized physician order entry (CPOE) systems, as implemented, and found that each commercial system evaluated failed the test as implemented in at least one institution and passed in at least one other, a testament to the variability of the configuration and implementation process.22

A robust understanding of CDS systems on both the back-end/front-end and design/implementation dimensions is thus valuable for future research and development (table 2).

Table 2
Taxonomic assessment of decision support content and function as designed and as implemented

Current systems have yet to be fully characterized along both of these dimensions. We first assessed back-end capabilities as implemented within one internally developed EHR to develop the taxonomy of back-end capabilities required to create useful front-end tools.16 A subsequent study on back-end system capabilities as designed assessed their availability across multiple commercially available EHR systems.12 In addition, Classen et al investigated front-end tools as implemented at various sites.23 The area that remains uninvestigated is front-end CDS tools as designed. Thus, the goal of the current study is to characterize this fourth and final quadrant: front-end tools as designed.

As reflected in table 1, although a variety of CDS taxonomies exist, rigorous taxonomies of front-end tools are lacking. Therefore, we began our project by developing a taxonomy of front-end CDS tools using a Delphi method, with a large expert panel. Our goal in developing the taxonomy was to assess the CDS tools available in various systems as designed. We then developed and administered a survey to two groups: commercial EHR vendors and ‘internal’ EHR developers. For the purposes of this paper, EHRs are referred to as either ‘commercial,’ created by a vendor and sold to a hospital or other healthcare organization, or ‘internally developed,’ built by a hospital or other healthcare organization for their own use.

Methods

Clinical decision support taxonomy

A preliminary list of 46 CDS tools was developed by the authors based on examination of systematic literature reviews of clinical decision support, extensive experience in the field of CDS, and previously conducted qualitative research.12 16 24 25 The authors, through their research group, then organized and facilitated an in-person conference which included a group of 11 national experts in healthcare IT and clinical decision support in addition to the researchers themselves (supplementary online appendix A includes a complete list of participants and organizing members of the multidisciplinary Provider Order Entry Team—POET).

The meeting took place over 2 days outside of Portland, Oregon in the spring of 2009. The complete list of 46 CDS tools was debated among all participants with meeting facilitation provided by POET team members. On the basis of this debate, several types of clinical decision support were added and some were modified or removed. In addition, the CDS types were divided into six categories based on this discussion (and in part on the taxa laid out in Osheroff et al8 and other clinical decision support taxonomies): medication dosing support, order facilitators, point-of-care alerts/reminders, relevant information display, expert systems, and workflow support. Although based on the assessment of experts at the conference and modifications of existing CDS taxonomies, the six over-arching categories were created primarily for the purpose of organizing and analyzing the CDS survey responses. The final taxonomy contains a list of 53 CDS tools meant to provide a comprehensive framework for describing all front-end tools currently in use. The complete taxonomy, including CDS types and sub-categories, descriptions and examples, is shown on the left-hand side of tables 3–8.


Once the clinical decision support taxonomy had been reviewed and revised by the expert panel, following IRB approval, surveys were sent to a purposive sample of nine major CCHIT-certified commercial EHR vendors providing a broad array of ambulatory and inpatient EHR systems: Eclipsys (recently merged with Allscripts, Chicago, Illinois, USA); NextGen, Horsham, Pennsylvania, USA; e-MDs, Austin, Texas, USA; Epic Systems, Verona, Wisconsin, USA; Cerner, Kansas City, Missouri, USA; GE, Fairfield, Connecticut, USA; Greenway Medical Technologies, Carrollton, Georgia, USA; and SpringCharts, Houston, Texas, USA; and four healthcare institutions: Partners HealthCare, Boston, Massachusetts, USA; the Regenstrief Institute, Indianapolis, Indiana, USA; Intermountain Healthcare, Salt Lake City, Utah, USA; and the national Veterans Health Administration, Washington, DC, USA (see table 9 for locations and other information).

Table 9
Vendors and institutions surveyed

Commercial vendors were selected on the basis of (1) CCHIT certification and (2) EHR products in widespread use at multiple sites. The internally developed EHRs surveyed were in use at healthcare institutions identified by Chaudhry et al as having the largest number of high quality, peer-reviewed articles describing their research and development activities.11 All surveys were conducted via email and were sent to knowledgeable leaders and/or informatics staff within each organization (eg, CMIO, CEO, CMO).

For each type of clinical decision support, respondents were provided with a brief definition and a representative example (identical to the types listed in tables 3–8) and were asked to indicate whether each tool was present (‘Y’) or absent (‘N’) as the system was designed. Respondents were asked whether the current release of their EMR “supports this type of CDS.” Respondents were asked to answer according to the capabilities of the current version of their EHR system only, not any planned capabilities or theoretical extensions, and were also asked to focus on the capabilities of their systems as designed, rather than as typically implemented (appreciating that some features may be used more than others). Respondents were also given the opportunity to provide comments to clarify each response, and were encouraged to contact the investigators with any questions—several vendors requested meetings to discuss their capabilities or ask questions, and these requests were accommodated.

Data analysis

Results were compiled in Microsoft Excel and analyzed using Excel and SAS. We computed descriptive statistics, including counts and proportions of available CDS tools by system and by category. Given our small sample and purposive sampling strategy, it was not possible to infer broad quantitative characteristics of the CDS developer community at large.
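The analysis amounts to tallying ‘Y’/‘N’ responses along two axes. A minimal sketch of that summary in plain Python (the survey was actually analyzed in Excel and SAS; the system names, categories, and answers below are invented toy data, not study results):

```python
# Toy sketch of the descriptive statistics: proportion of 'Y' responses per
# system and per CDS category. All data here are invented, not survey results.
from collections import defaultdict

# (system, category, tool) -> 'Y'/'N' survey response (hypothetical)
responses = {
    ("System A", "dosing support", "weight-based dosing"): "Y",
    ("System A", "expert systems", "ventilator support"): "N",
    ("System B", "dosing support", "weight-based dosing"): "Y",
    ("System B", "expert systems", "ventilator support"): "Y",
}

def availability_by(position):
    """Proportion of tools present, grouped by one key field (0=system, 1=category)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [present, total]
    for key, answer in responses.items():
        bucket = counts[key[position]]
        bucket[0] += answer == "Y"        # bool counts as 0/1
        bucket[1] += 1
    return {k: present / total for k, (present, total) in counts.items()}

by_system = availability_by(0)    # availability per surveyed system
by_category = availability_by(1)  # availability per CDS category
```

The per-system grouping corresponds to the system-level ranges reported below, and the per-category grouping to the category summaries in table 10.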

Results

Surveys were sent to nine commercial EHR vendors and four healthcare institutions. We received responses from seven of nine vendors (78%) and four of four institutions (100%), for an overall response rate of 85%. Details about the systems surveyed, including vendor/institution name, location, system name, system version, and CCHIT certification year are presented in table 9. From this point forward, we present anonymized results in accordance with the preference of surveyed vendors and institutions.

The complete results of the survey for each of the 53 types of front-end CDS tools are shown on the right-hand side of tables 3–8 and summarized by category in table 10.

Table 10
Summary of capabilities of commercial and internally developed systems by category

The proportion of surveyed CDS tools available in each EHR ranged from 28.3% to 96.2% (median 60.4%). Eight of the 53 types (15%) of clinical decision support were found to be present in all surveyed systems: default doses/pick lists, medication order sentences, condition-specific and procedure-specific order sets, drug–drug and drug–allergy interaction checking, health maintenance reminders, and clinical documentation (charting) aids. Twelve of the 53 types (23%) of clinical decision support were present in all commercial EHRs and 16 (30%) were present in all internally developed EHRs. All 53 types of decision support were present in at least one of the 11 systems surveyed. Although no single system was capable of all surveyed types of clinical decision support, two commercial systems and one internally developed system had more than 90% of all surveyed CDS tools.

Overall, certain classes of decision support features, including order facilitators (81.8% availability) and dosing support (80.5%), were more common, with most of these types of decision support present in the majority of systems. Workflow support (68.8%), point-of-care alerts/reminders (65.6%), and relevant information displays (63.6%) were less common but still prevalent in the majority of systems. Finally, expert systems (41.3%), which include tools such as diagnostic decision support, treatment planning, laboratory data interpretation, and ventilator support, were the least common class of CDS tools available.

Discussion

Among both internally developed and commercial systems, there was significant variability in the available front-end CDS tools as designed. While more than one system had over 90% of the surveyed CDS tools, others had less than 60%, and one commercial system had only 28.3%. Several tools were present in all 11 systems, while others (including polypharmacy alerts, treatment planning, look-alike/sound-alike medication alerts, diagnostic support, prognostic tools, ventilator support, and free-text order parsing) were present in as few as three of the systems surveyed. Not surprisingly, the most common CDS tools were generally the simplest, such as drug–drug interaction checking, while the least common were advanced expert systems such as treatment planning and diagnostic support. In general, ambulatory EHRs had a lower proportion of surveyed CDS functions than inpatient EHRs.

Our findings also show that certain classes of CDS tools are more commonly available. Dosing support (eg, default doses/pick lists) and order facilitators (eg, condition-specific order sets) were the most common classes of CDS tools available, while expert systems (eg, ventilator support) were the least common class. The variation in availability of different CDS categories is not surprising given that each requires a different knowledge base and varying expertise. While all forms necessitate significant investments (both financial and otherwise), vendors and healthcare institutions may preferentially avoid incorporating the most resource-intensive content into their systems.

Overall, the results of our survey indicate that although a diverse range of CDS tools exists in both vendor and internally developed EHR systems, there remains significant room for improvement in making these tools more widely and consistently available. Given that our sample of commercial and internally developed systems represents some of the most advanced and most widely used systems and assesses their optimum CDS capabilities, our results indicate that the general availability of decision support tools remains limited even in the best of cases.

It is important to consider that these results are based on each system as it is designed, not as it is actually implemented and used at real-world sites. The gap between the tools available as a system is designed and how that system is actually implemented and used in clinical practice can be substantial, particularly in the case of commercially developed EHR systems. While vendors may incorporate a certain CDS tool into their system, whether that tool is ultimately available to the end-user is highly dependent on institutional priorities, governance practices, and implementation procedures.71 In this project, we examined the off-the-shelf CDS tools as designed in a purposive sample of leading EHRs. In evaluating a commercial EHR for possible adoption, it is important to consider both the tools that are available as designed, or ‘out-of-the-box,’ and the tools that will actually be implemented based on the priorities and needs of the institution. Each institution, whether developing a ‘home-grown’ system or purchasing one from an outside vendor, needs to consider the specific decision support tools that are right for it and prioritize different types of CDS based on institutional needs.

Consideration of both back-end system capabilities and front-end tools is vitally important for the evaluation and development of EHR systems. Off-the-shelf systems may offer ready-to-use tools but may limit the ability to customize these tools through different combinations of CDS system capabilities. In contrast, a home-grown system with robust CDS system capabilities may offer a great deal of flexibility but may also require a greater investment of time, resources, and expertise to create front-end tools. In general, as long as a system includes enough basic system capabilities, the end-user can create any type of CDS tool. Realistically, however, the end-user may lack the time, resources, expertise, or creativity to create tools by combining available system capabilities.

There are a variety of ways to promote broader availability of CDS tools for the system end-user. One solution is simply for vendors and institutional developers to expand the variety of CDS tools available in their systems, which we hope they will continue to do in light of these results. However, given that this might not be feasible in all cases, additional means are necessary for increasing the availability of a range of CDS tools. One such solution is the use of external CDS tools (including web or software-based tools) that can add third-party content by ‘talking’ to the EHR via an application programming interface. Another option is the use of general purpose rule engines, which allow end-users to more easily customize tools based on available system capabilities. Service-oriented architectures such as SANDS also provide a means of making more CDS tools available.72 73 In general, it will be important to better understand end-user preferences and workflow habits in order to optimally improve these systems.
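The general-purpose rule-engine approach mentioned above can be sketched as declarative rules evaluated against patient context. Everything here is a hypothetical illustration under our own assumptions (the rule format, field names, and thresholds are invented; this is not SANDS, Arden Syntax, or any vendor's API):

```python
# Minimal sketch of a general-purpose CDS rule engine: end-users supply
# declarative rules; the engine matches them against available patient data.
# Rule format, field names, and thresholds are hypothetical.

rules = [
    {
        "name": "renal dose check",
        "when": lambda ctx: ctx.get("egfr", 999) < 30
                            and bool(ctx.get("ordering_drug")),
        "advise": "Reduce dose: eGFR < 30 mL/min",
    },
    {
        "name": "weight missing",
        "when": lambda ctx: ctx.get("weight_kg") is None,
        "advise": "Record patient weight before weight-based dosing",
    },
]

def evaluate(rules, ctx):
    """Return the advice text of every rule whose condition matches the context."""
    return [r["advise"] for r in rules if r["when"](ctx)]

# Usage with made-up context: low eGFR, a fictional drug order, no weight recorded.
alerts = evaluate(rules, {"egfr": 25,
                          "ordering_drug": "examplamycin",
                          "weight_kg": None})
```

The appeal of this design is that new front-end tools become data (new entries in `rules`) rather than new application code, which is precisely what lets end-users customize tools on top of existing back-end capabilities.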

The taxonomy of front-end CDS tools described in this paper provides a novel means of assessing currently available decision support tools, and it is our hope that this comprehensive taxonomy will also serve as a roadmap for vendors and institutional developers working to expand both the back-end CDS system capabilities and front-end tools in their systems. In addition, our taxonomy may be of value for informing future certification criteria and stages 2 and 3 meaningful use requirements. Together, this taxonomy and the results of our survey provide healthcare institutions with a framework for evaluating the capabilities of clinical information systems, which may be useful as they consider the purchase or development of such systems. As meaningful use requirements continue to expand, more decision support tools will be necessary, and it is imperative that healthcare institutions and commercial vendors continue to extend the range of CDS tools available to increase the quality and efficiency of care.

Our method of analyzing commercial and internally developed EHR systems has several potential limitations. First, we surveyed a small sample of the commercial and home-grown systems currently in use. We employed a purposive sampling strategy in order to capture information about leading vendor-based and internally developed EHRs. However, this strategy limits the conclusions that can be drawn from survey results and their generalizability. Second, the use of a survey to evaluate these systems is a potential source of error due to the possibility that respondents may have inadvertently (or optimistically) misrepresented features of their system. One particular concern is highly extensible systems that support add-ons by customers (eg, via medical logic modules or an application programming interface). When vendors asked, we instructed them to answer based on decision support types that are made available to customers and not to include types that could conceivably be developed through extension or additional programming. However, it is possible that some vendors still answered affirmatively for decision support types that could theoretically be implemented in their systems, but which have not actually been developed. Third, the survey analyzed systems and their front-end CDS tools as they were designed, rather than how they might be implemented and used in a real-world setting. For vendor systems, there may be a significant gap between the tools that are possible in a given system and those that are actually implemented at a given site. Finally, this project assesses only the presence or absence of each type of CDS tool delineated in the taxonomy, and does not attempt to measure or weight the importance of the tools. Indeed, some tools might be significantly more important than others, so it is not necessarily the case that the system with the highest proportion of CDS types offers the ‘best’ CDS.
A system for prioritizing and weighting CDS types would be a useful future research direction. It would also be valuable to repeat the survey of decision support content at customer sites using our taxonomy in order to gauge the validity of vendor responses and to assess the potential gap between systems as they are designed and as they are implemented in the clinical setting.

Conclusion

To assess the clinical decision support capabilities of leading commercial and internally developed EHRs, we developed a comprehensive taxonomy and survey of the types of front-end CDS tools currently in use. We found wide variability in the decision support tools available in commercial and internally developed EHRs. As pressure to perform more advanced CDS increases, EHR developers will need to incorporate a broader range of CDS tools into their systems.

Acknowledgments

This project was made possible by the hard work and dedication of the Provider Order Entry Team (POET) at Oregon Health & Science University. We would also like to thank all participants at the Menucha Conference who were instrumental in shaping the final list of clinical decision support types: DW Bates, B Churchill, J Dulcey, R Gibson, N Greengold, R Jenders, T Payne, E Poon, and SL Pestotnik.


Funding: This project was supported by NLM Grant R56-LM006942.

Competing interests: None.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223–38 [PubMed]
2. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330:765. [PMC free article] [PubMed]
3. Saleem JJ, Patterson ES, Militello L, et al. Exploring barriers and facilitators to the use of computerized clinical reminders. J Am Med Inform Assoc 2005;12:438–47 [PMC free article] [PubMed]
4. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug-drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009;16:40–6 [PMC free article] [PubMed]
5. Roberts GW, Farmer CJ, Cheney PC, et al. Clinical decision support implemented with academic detailing improves prescribing of key renally cleared drugs in the hospital setting. J Am Med Inform Assoc 2010;17:308–12 [PMC free article] [PubMed]
6. Payne TH, Hoey PJ, Nichol P, et al. Preparation and use of preconstructed orders, order sets, and order menus in a computerized provider order entry system. J Am Med Inform Assoc 2003;10:322–9 [PMC free article] [PubMed]
7. Asch SM, McGlynn EA, Hogan MM, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med 2004;141:938–45 [PubMed]
8. Osheroff JA, Teich JM, Middleton B, et al. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007;14:141–5
9. Bates DW, Cohen M, Leape LL, et al. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001;8:299–308
10. McDonald CJ. Protocol-based computer reminders, the quality of care and the non-perfectability of man. N Engl J Med 1976;295:1351–5
11. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742–52
12. Wright A, Sittig DF, Ash JS, et al. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc 2009;16:637–44
13. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med 2010;363:501–4
14. Comparison of Meaningful Use Objectives Between the Proposed Rule to the Final Rule. Washington, DC: Centers for Medicare & Medicaid Services, 2010
15. Simon SR, Kaushal R, Cleary PD, et al. Physicians and electronic health records: a statewide survey. Arch Intern Med 2007;167:507–12
16. Wright A, Goldberg H, Hongsermeier T, et al. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc 2007;14:489–96
17. Wang JK, Shabot MM, Duncan RG, et al. A clinical rules taxonomy for the implementation of a computerized physician order entry (CPOE) system. Proc AMIA Symp 2002:860–3
18. Miller RA, Waitman LR, Chen S, et al. The anatomy of decision support during inpatient care provider order entry (CPOE): empirical observations from a decade of CPOE experience at Vanderbilt. J Biomed Inform 2005;38:469–85
19. Osheroff J, Pifer E, Teich J, et al. Improving Outcomes with Clinical Decision Support. Chicago, IL: HIMSS, 2005
20. Berlin A, Sorani M, Sim I. A taxonomic description of computer-based clinical decision support systems. J Biomed Inform 2006;39:656–67
21. Driving Quality and Performance Measurement: A Foundation for Clinical Decision Support. Washington, DC: National Quality Forum, 2010
22. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010;29:655–63
23. Classen DC, Avery AJ, Bates DW. Evaluation and certification of computerized provider order entry systems. J Am Med Inform Assoc 2007;14:48–55
24. Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform 2008;41:387–92
25. Wright A, Sittig DF. A framework and model for evaluating clinical decision support architectures. J Biomed Inform 2008;41:982–90
26. Chertow GM, Lee J, Kuperman GJ, et al. Guided medication dosing for inpatients with renal insufficiency. JAMA 2001;286:2839–44
27. Teich JM, Merchia PR, Schmiz JL, et al. Effects of computerized physician order entry on prescribing practices. Arch Intern Med 2000;160:2741–7
28. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14:29–40
29. Teich JM, Schmiz JL, O'Connell EM, et al. An information system to improve the safety and efficiency of chemotherapy ordering. Proc AMIA Annu Fall Symp 1996:498–502
30. Overhage JM, Tierney WM, Zhou XH, et al. A randomized trial of “corollary orders” to prevent errors of omission. J Am Med Inform Assoc 1997;4:364–75
31. Lee J, Clay B, Zelazny Z, et al. Indication-based ordering: a new paradigm for glycemic control in hospitalized inpatients. J Diabetes Sci Technol 2008;2:349–56
32. Morris AH, Wallace CJ, Menlove RL, et al. Randomized clinical trial of pressure-controlled inverse ratio ventilation and extracorporeal CO2 removal for adult respiratory distress syndrome. Am J Respir Crit Care Med 1994;149:295–305
33. Tamblyn R, Huang A, Taylor L, et al. A randomized trial of the effectiveness of on-demand versus computer-triggered drug decision support in primary care. J Am Med Inform Assoc 2008;15:430–8
34. Hulse RK, Clark SJ, Jackson JC, et al. Computerized medication monitoring system. Am J Hosp Pharm 1976;33:1061–4
35. Isaac T, Weissman JS, Davis RB, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009;169:305–11
36. Haug PJ, Gardner RM, Tate KE, et al. Decision support in medicine: examples from the HELP system. Comput Biomed Res 1994;27:396–418
37. Bradshaw KE, Gardner RM, Pryor TA. Development of a computerized laboratory alerting system. Comput Biomed Res 1989;22:575–87
38. van der Sijs H, Mulder A, van Gelder T, et al. Drug safety alert generation and overriding in a large Dutch university medical centre. Pharmacoepidemiol Drug Saf 2009;18:941–7
39. Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc 1996;3:399–409
40. Schulmeister L. Look-alike, sound-alike oncology medications. Clin J Oncol Nurs 2006;10:35–41
41. Félix L, Gebremariam C. New Care Management Event Tracking Module in iCare. Rockville, MD: IHS OIT Newsletter, 2010:7–8
42. Wright A, Chen ES, Maloney FL. An automated technique for identifying associations between medications, laboratory results and problems. J Biomed Inform 2010;43:891–901
43. Harpole LH, Khorasani R, Fiskio J, et al. Automated evidence-based critiquing of orders for abdominal radiographs: impact on utilization and appropriateness. J Am Med Inform Assoc 1997;4:511–21
44. Teich JM, Petronzio AM, Gerner JR, et al. An information system to promote intravenous-to-oral medication conversion. Proc AMIA Symp 1999:415–19
45. Evans RS, Wallace CJ, Lloyd JF, et al.; CDC Prevention Epicenter Program. Rapid identification of hospitalized patients at high risk for MRSA carriage. J Am Med Inform Assoc 2008;15:506–12
46. Trygstad TK, Christensen D, Garmise J, et al. Pharmacist response to alerts generated from Medicaid pharmacy claims in a long-term care setting: results from the North Carolina polypharmacy initiative. J Manag Care Pharm 2005;11:575–83
47. Del Fiol G, Haug PJ, Cimino JJ, et al. Effectiveness of topic-specific infobuttons: a randomized controlled trial. J Am Med Inform Assoc 2008;15:752–9
48. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003;10:523–30
49. Bates DW, Kuperman GJ, Jha A, et al. Does the computerized display of charges affect inpatient ancillary test utilization? Arch Intern Med 1997;157:2501–8
50. Filik R, Purdy K, Gale A, et al. Labeling of medicines and patient safety: evaluating methods of reducing drug name confusion. Hum Factors 2006;48:39–47
51. Linder JA, Schnipper JL, Tsurikova R, et al. Documentation-based clinical decision support to improve antibiotic prescribing for acute respiratory infections in primary care: a cluster randomised controlled trial. Inform Prim Care 2009;17:231–40
52. Evans RS, Larsen RA, Burke JP, et al. Computer surveillance of hospital-acquired infections and antibiotic use. JAMA 1986;256:1007–11
53. Sittig DF, Gardner RM, Morris AH, et al. Clinical evaluation of computer-based respiratory care algorithms. Int J Clin Monit Comput 1990;7:177–85
54. Barnett GO, Cimino JJ, Hupp JA, et al. DXplain. An evolving diagnostic decision-support system. JAMA 1987;258:67–74
55. Miller RA, Pople HE, Jr, Myers JD. Internist-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med 1982;307:468–76
56. Peiris DP, Joshi R, Webster RJ, et al. An electronic clinical decision support tool to assist primary care providers in cardiovascular disease risk management: development and mixed methods evaluation. J Med Internet Res 2009;11:e51
57. Ravdin PM, Siminoff LA, Davis GJ, et al. Computer program to assist in making decisions about adjuvant therapy for women with early breast cancer. J Clin Oncol 2001;19:980–91
58. Rothschild JM, McGurk S, Honour M, et al. Assessment of education and computerized decision support interventions for improving transfusion practice. Transfusion 2007;47:228–39
59. Lehmann CU, Conner KG, Cox JM. Preventing provider errors: online total parenteral nutrition calculator. Pediatrics 2004;113:748–53
60. Bleich HL. Computer evaluation of acid-base disorders. J Clin Invest 1969;48:1689–96
61. Mohan R, Barest G, Brewster LJ, et al. A comprehensive three-dimensional radiation treatment planning system. Int J Radiat Oncol Biol Phys 1988;15:481–95
62. North F, Varkey P. Use of the prioritization matrix to enhance triage algorithms in clinical decision support software. Am J Med Qual 2010;25:468–73
63. Mandl KD, Overhage JM, Wagner MM, et al. Implementing syndromic surveillance: a practical guide informed by the early experience. J Am Med Inform Assoc 2004;11:141–50
64. Jacobs B, Crotty E, Conway E, et al. Computerized provider order entry with pager notification improves efficiency in STAT radiographic studies and respiratory treatments. Appl Clin Inform 2010;1:19–31
65. McGlinchey EA, Wright A, Poon EG, et al. Ability to perform registry functions among practices with and without electronic health records. AMIA Annu Symp Proc 2008:1052
66. Hamann C, Poon E, Smith S, et al. Designing an electronic medication reconciliation system. AMIA Annu Symp Proc 2005:976
67. Topal J, Conklin S, Camp K, et al. Prevention of nosocomial catheter-associated urinary tract infections through computerized feedback to physicians and a nurse-directed protocol. Am J Med Qual 2005;20:121–6
68. Buising KL, Thursky KA, Robertson MB, et al. Electronic antibiotic stewardship: reduced consumption of broad-spectrum antibiotics using a computerized antimicrobial approval system in a hospital setting. J Antimicrob Chemother 2008;62:608–16
69. Levin MA, Krol M, Doshi AM, et al. Extraction and mapping of drug names from free text to a standardized nomenclature. AMIA Annu Symp Proc 2007:438–42
70. Rosenbloom ST, Denny JC, Xu H, et al. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc 2011;18:181–6
71. Wright A, Sittig DF, Ash JS, et al. Governance for clinical decision support: case studies and recommended practices from leading institutions. J Am Med Inform Assoc 2011;18:187–94
72. Wright A, Sittig DF. SANDS: an architecture for clinical decision support in a National Health Information Network. AMIA Annu Symp Proc 2007:816–20
73. CDS Consortium (accessed 2 Jan 2011).
