Among both internally developed and commercial systems, there was significant variability in the available front-end CDS tools as designed. While more than one system had over 90% of the surveyed CDS tools, others had fewer than 60%, and one commercial system had only 28.3%. Several tools were present in all 11 systems, while others (including polypharmacy alerts, treatment planning, look-alike/sound-alike medication alerts, diagnostic support, prognostic tools, ventilator support, and free text order parsing) were present in as few as three of the systems surveyed. Not surprisingly, the most common CDS tools were generally the simplest, such as drug–drug interaction checking, while the least common were advanced expert systems such as treatment planning and diagnostic support. In general, ambulatory EHRs had a lower proportion of surveyed CDS functions than inpatient EHRs.
Our findings also show that certain classes of CDS tools are more commonly available. Dosing support (eg, default doses/pick lists) and order facilitators (eg, condition-specific order sets) were the most common classes of CDS tools, while expert systems (eg, ventilator support) were the least common. This variation in availability across CDS categories is not surprising given that each requires a different knowledge base and different expertise. While all forms necessitate significant investments (both financial and otherwise), vendors and healthcare institutions may preferentially avoid incorporating the most resource-intensive content into their systems.
Overall, the results of our survey indicate that although a diverse range of CDS tools exists in both vendor and internally developed EHR systems, there remains significant room for improvement in making these tools more widely and consistently available. Given that our sample of commercial and internally developed systems includes some of the most advanced and most widely used systems, and that we assessed their optimum CDS capabilities, our results indicate that the general availability of decision support tools remains limited even in the best of cases.
It is important to consider that these results are based on each system as it is designed, not as it is actually implemented and used at real-world sites. The gap between the tools available in a system as designed and how that system is actually implemented and used in clinical practice can be substantial, particularly in the case of commercially developed EHR systems. While vendors may incorporate a certain CDS tool into their system, whether that tool is ultimately available to the end-user is highly dependent on institutional priorities, governance practices, and implementation procedures.71
In this project, we examined the off-the-shelf CDS tools as designed in a purposive sample of leading EHRs. In evaluating a commercial EHR for possible adoption, it is important to consider both the tools that are available as designed, or ‘out-of-the-box’, and the tools that will actually be implemented based on the priorities and needs of the institution. Each institution, whether developing a ‘home-grown’ system or purchasing one from an outside vendor, needs to consider the specific decision support tools that are right for it and prioritize different types of CDS based on institutional needs.
Consideration of both back-end system capabilities and front-end tools is vitally important for the evaluation and development of EHR systems. Off-the-shelf systems may offer ready-to-use tools but may limit the ability to customize these tools through different combinations of CDS system capabilities. In contrast, a home-grown system with robust CDS system capabilities may offer a great deal of flexibility but may also require a greater investment of time, resources, and expertise to create front-end tools. In principle, as long as a system includes enough basic system capabilities, the end-user can create any type of CDS tool. Realistically, however, the end-user may lack the time, resources, expertise, or creativity to create tools by combining available system capabilities.
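As a concrete illustration of how front-end tools can be composed from basic back-end capabilities, consider the following minimal sketch. The capability names, data fields, and the renal-dosing rule are purely hypothetical and do not correspond to any surveyed system:

```python
from dataclasses import dataclass

# Three hypothetical back-end capabilities: structured data access,
# a rule-evaluation primitive, and a notification channel.

@dataclass
class Patient:
    age: int
    creatinine: float  # serum creatinine, mg/dL
    medications: list

def get_patient_data(record: dict) -> Patient:
    """Capability 1: structured access to the patient record."""
    return Patient(record["age"], record["creatinine"], record["medications"])

def evaluate_rule(patient: Patient) -> bool:
    """Capability 2: a logic primitive (here, an illustrative renal-dosing check)."""
    return patient.creatinine > 1.5 and "metformin" in patient.medications

def notify(message: str, inbox: list) -> None:
    """Capability 3: a notification channel (here, an in-memory inbox)."""
    inbox.append(message)

def renal_dosing_alert(record: dict, inbox: list) -> None:
    """A front-end CDS tool built by composing the three capabilities above."""
    patient = get_patient_data(record)
    if evaluate_rule(patient):
        notify("Review metformin dose: elevated creatinine", inbox)

inbox: list = []
renal_dosing_alert(
    {"age": 70, "creatinine": 2.1, "medications": ["metformin", "lisinopril"]},
    inbox,
)
```

The point of the sketch is that the alert itself contains no new machinery; it only wires together primitives the system already exposes, which is precisely the combinatorial work an end-user may lack the time or expertise to perform.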
There are a variety of ways to promote broader availability of CDS tools for the system end-user. One solution is simply for vendors and institutional developers to expand the variety of CDS tools available in their systems, which we hope they will continue to do in light of these results. However, given that this might not be feasible in all cases, additional means of increasing the availability of a range of CDS tools are necessary. One such approach is the use of external CDS tools (including web- or software-based tools) that can add third-party content by ‘talking’ to the EHR via an application programming interface. Another option is the use of general-purpose rule engines, which allow end-users to more easily customize tools based on available system capabilities. Service-oriented architectures such as SANDS also provide a means of making more CDS tools available.72
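To make the rule-engine idea concrete, the following toy sketch shows how end-users might register condition/action rules against patient data without modifying the underlying system. The rule names, predicates, and data fields are invented for illustration and are not drawn from SANDS or any surveyed system:

```python
# A toy general-purpose rule engine: end-users register
# condition/action pairs, and the engine evaluates them all
# against the same patient data.

class RuleEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, name, condition, action):
        """Register a rule: a predicate over the data plus an action to fire."""
        self.rules.append((name, condition, action))

    def run(self, data):
        """Evaluate every rule in order; return the results of fired actions."""
        fired = []
        for name, condition, action in self.rules:
            if condition(data):
                fired.append(action(data))
        return fired

engine = RuleEngine()
engine.add_rule(
    "ddi-warfarin-aspirin",
    lambda d: {"warfarin", "aspirin"} <= set(d["medications"]),
    lambda d: "Interaction alert: warfarin + aspirin",
)
engine.add_rule(
    "overdue-a1c",
    lambda d: d["days_since_a1c"] > 180,
    lambda d: "Reminder: HbA1c overdue",
)

alerts = engine.run({"medications": ["warfarin", "aspirin"], "days_since_a1c": 200})
```

The appeal of this pattern for CDS is that new rules are data rather than code changes: an institution can add, remove, or tune alerts without waiting on the vendor to ship new front-end tools.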
In general, it will be important to better understand end-user preferences and workflow habits in order to optimally improve these systems.
The taxonomy of front-end CDS tools described in this paper provides a novel means of assessing currently available decision support tools, and it is our hope that this comprehensive taxonomy will also serve as a roadmap for vendors and institutional developers working to expand both the back-end CDS system capabilities and the front-end tools in their systems. In addition, our taxonomy may be of value for informing future certification criteria and stages 2 and 3 meaningful use requirements. Together, this taxonomy and the results of our survey provide healthcare institutions with a framework for evaluating the capabilities of clinical information systems, which may be useful as they consider the purchase or development of such systems. As meaningful use requirements continue to expand, more decision support tools will be necessary, and it is imperative that healthcare institutions and commercial vendors continue to extend the range of CDS tools available in order to increase the quality and efficiency of care.
Our method of analyzing commercial and internally developed EHR systems has several potential limitations. First, we surveyed a very small sample of the commercial and home-grown systems currently in use. We employed a purposive sampling strategy in order to capture information about leading vendor-based and internally developed EHRs. However, this strategy limits the conclusions that can be drawn from the survey results and their generalizability. Second, the use of a survey to evaluate these systems is a potential source of error, as respondents may have inadvertently (or optimistically) misrepresented features of their system. One particular concern is highly extensible systems that support add-ons by customers (eg, via medical logic modules or an application programming interface). We instructed vendors to answer based on decision support types that are made available to customers and not to include types that could conceivably be developed through extension or additional programming. However, it is possible that some vendors still answered affirmatively for decision support types that could theoretically be implemented in their systems, but which have not actually been developed. Third, the survey analyzed systems and their front-end CDS tools as they were designed, rather than how they might be implemented and used in a real-world setting. For vendor systems, there may be a significant gap between the tools that are possible in a given system and those that are actually implemented at a given site. Finally, this project assesses only the presence or absence of each type of CDS tool delineated in the taxonomy, but does not attempt to measure or weight the importance of the tools. Indeed, some tools might be significantly more important than others, so it is not necessarily the case that the system with the highest proportion of CDS types offers the ‘best’ CDS.
A system for prioritizing and weighting CDS types would be a useful future research direction. It would also be valuable to repeat the survey of decision support content at customer sites using our taxonomy in order to gauge the validity of vendor responses and to assess the potential gap between systems as they are designed and as they are implemented in the clinical setting.