This systematic review outlines the existing research for the design of clinical technology across its outcomes of effectiveness, efficiency and satisfaction. The majority of current studies evaluated effectiveness aspects of clinical technology interfaces. Studies about interface efficiency were fewest in number. Of course, a blend of these goals would be optimal to assure efficient and effective clinical technology design.
Current Research on Technology Design
Current research ranges across a myriad of technology interface types. The types of interfaces examined to date show no apparent pattern, nor have they been assessed with any obvious rationale, such as their frequency of use in clinical settings.
Although usability studies have not yet penetrated health care widely, researchers have identified elements of design worth attention. For example, dense screens allow nurses to detect information faster than, and as accurately as, less dense screens. Thus, designers will want to include dense screens in systems so that clinicians avoid unnecessary movement between screens to search for information. The caveat is that dense screens need to include pertinent information, which means that designers will need to understand how clinicians make decisions and with what information. More careful attention should also be paid to attention-grabbing methods for data located outside nurses’ field of view, as such data can easily be missed even when nurses are trained on an application.
Graphical designs facilitate both efficiency and effectiveness measures. These designs improve time to treatment, detection of physiologic parameter deviations, and time to complete a wide variety of tasks (e.g., orders, lab procedures, searching for clinical data). A graphical design is especially important for tasks requiring navigation across applications or screens in a system and can improve performance as much as two-fold.36
Researchers overall found improvements in redesigns of older interfaces and with iterative designs created in combination with user testing. Initially, readers might ascribe this finding to a publication bias; however, its prevalence across so many studies also lends support to the validity of the axioms of user-centered design and the value of usability testing.
Device evaluations and the sole assessment of an active EHR uncovered serious usability issues, such as barriers to safe programming of PCA and IV pumps and designs that interfered with critical processes such as documenting an admission history. Serious usability issues can be alarming: in one study, nurses were able to program a pump to deliver an inadvertent overdose without an alarm or warning. The Food and Drug Administration (FDA) currently requires usability testing for devices; however, the seriousness of the findings in the handful of studies here suggests that the FDA expand usability testing, that facilities assess the usability of devices as part of their purchasing processes, and that a department such as quality improvement evaluate devices, especially older ones, for their safety in their institutions.
Future Research Directions
Recommendations for future research are made in these areas: a) Expand the types, settings and participants for usability testing, b) Develop integrated displays, c) Expand outcome variables in usability studies.
Expand the Types of Evaluations, Settings and Participants
Types of Evaluations
The types of evaluated devices are limited to date. The interfaces of only a handful of devices have been formally evaluated, including two IV and two PCA pumps. A systematic method for evaluation is needed, such as assessing devices based upon their prevalence and use in clinical settings. Obviously, many more devices exist in the clinical setting than have been examined to date. In an ICU setting alone, numerous physiological monitors and devices (invasive and noninvasive) have an array of alarms with distinctive tones and blinking lights of different colors and shapes, all demanding attention.
Common tools such as IV pumps and the one evaluated EHR had serious usability violations. To ensure safe practice, usability evaluations of clinical technology tools need to be greatly expanded to alleviate potential hazards. Even more important, usability studies are critically needed to examine the cognitive burden, errors and workflow issues that may exist across devices in clinical settings. How nurses learn, remember and use the myriad of devices is worthy of more investigation, as is how to design technology to work symbiotically across tools. A national database of device assessments is needed, particularly for older models with known safety issues.
The Institute of Medicine42 (IOM) encourages the adoption of health information technology as one solution to medical errors. Yet, only one set of authors evaluated an active EHR. HIMSS Analytics reported that over 1,300 US hospitals have at least computerized clinical documentation in place.43
With the impetus to increase EHR implementations, increased health information technology funding in 2009, and the increasing infiltration of EHRs into diverse sites, usability assessments of commercial EHRs are needed to better understand the impacts of these products. Although some vendors incorporate prototyping and usability testing into their development cycles, this practice is not yet widespread. EHR components should be rigorously and iteratively tested using human factors principles by vendors, representative end users and HF experts to assure adequate design before widespread implementation.
The majority of tested technologies are those in clinical practice. The findings from these studies are striking, illustrating sources of potential error. Technology used in educational and administrative functions is under-represented, and expanding usability testing into these arenas would be welcomed. HF evaluations of curricular software, especially commercially available products, are needed. Usability evaluations would provide important details about successes and failures for others as they plan to implement new models of learning. Optimal interfaces for nurse executives and administrators are another promising area for research.
The majority of current research settings are laboratories or simulated clinical settings. In the future, studies in naturalistic settings are highly encouraged. These kinds of settings would allow researchers to examine the role of interruptions, competing demands and other typical work issues within the context of their particular technology design. Naturalistic settings would provide researchers with new knowledge and understanding about how technologies are actually used in clinical practice versus artificial settings. Understanding work-arounds nurses create and competing demands would be illuminating.
Interdisciplinary teams participated in 2 device studies; interface assessments included 11 interdisciplinary teams. The IV pump studies and two graphical interface studies used participants from psychology studies. Actual clinical users should be included in the future, across types of nurses, including nurse anesthetists, who are seemingly absent from usability studies to date.
More studies are needed to emulate the kinds of teamwork that occurs with clinical technology in sites. For instance, nurses and pharmacists are under-represented in evaluations of the impact of computerized provider order entry despite the fact that they are both integral to the orders management process and safe execution of orders.44
Develop Integrated Displays
Computerized support is needed to help nurses integrate information across devices and EHR applications. These integrated data summaries would display pertinent patient data, such as at change of shift. Currently, nurses must integrate data and information from devices and EHRs themselves, typically by remembering data.45
Nurses gather data from various sources, organize the information and apply knowledge to recognize untoward trends or symptoms. Clinicians currently complain that the “big picture” of the patient is difficult to obtain within the sea of data in contemporary EHRs. A recent report from the Academies Press46 recognized the urgent need for better cognitive support from EHRs, including help integrating data.
Expand Outcome Variables in Usability Studies
The most commonly examined outcome variables were user satisfaction, heuristic violations, time and errors. User satisfaction was an outcome variable in 16 studies. Yet, user satisfaction provides only partial insight into technology design. A better assessment would allow investigators to understand why a design improves satisfaction. In addition, nearly all researchers report high user satisfaction, although this finding may be due to a publication bias. Other variables such as performance measures (time, accuracy) and aspects of decision-making (correct treatment, detecting adverse events, and patient safety errors) may be more telling aspects of usability evaluations. An expanded list of variables is available elsewhere.2
Thoughtfully chosen outcome variables should be mainstays of future usability research. EHRs in particular should be evaluated from a multi-modal perspective to assess both efficiency and effectiveness aspects.
Last, the gap between research and practice needs to be bridged. Interface evaluations and products developed through research have proved useful and productive. Yet, research products often remain fixed in the research arena. In the future, bridging this gap should be part of the researcher’s agenda.
This review included literature available in refereed journals. Other relevant studies may be available in dissertations, reports and unpublished venues. In the future, other authors may wish to examine studies from conference proceedings and in languages other than English. Synthesizing results across this myriad of studies, variables, devices, methods and participants was particularly challenging, and additional insights may yet be drawn from this body of work.