Human factors (HF) studies are increasingly important as technology infuses into clinical settings, yet no nursing research reviews exist in this area. The authors conducted a systematic review on designs of clinical technology; 34 articles comprising 50 studies met inclusion criteria. Findings were classified into three categories based on HF research goals. The majority of studies evaluated the effectiveness of clinical designs; studies of efficiency were fewest in number. Current research ranges across many interface types, examined with no apparent pattern or obvious rationale. Future research should expand the types, settings, and participants studied; develop integrated displays; and expand outcome variables.
Having usable technology is an imperative for contemporary nurses. Suboptimal technology designs contribute to error generation, reduce productivity, create extreme frustration, and can even result in system de-installation. The design and development of usable technology can be better assured by applying human factors (HF) concepts. HF principles, research methods, and techniques are widely available outside health care to enhance the effectiveness, efficiency, and user satisfaction of nurse-technology interaction. Yet these critical elements only began to trickle into health care in the early 1990s, despite having thoroughly penetrated other industries such as aviation.
The Institute of Medicine ushered HF concepts into the health care consciousness by linking HF to error prevention.1 Research in HF, usability, and human-computer interaction, all related concepts, has expanded greatly over the past 10–15 years. However, no review exists examining the available HF-related research or its diffusion into the nursing arena. Thus, the purposes of this paper are to: 1) systematically review the literature for HF-related research in health care, 2) evaluate its impact on nursing areas of interest, and 3) recommend future research directions.
Human factors is a broad term for a set of related concepts about human interactions with tools in associated environments. Figure 1 depicts these concepts and their relationships.2 All HF-related concepts consider human needs, abilities, and limitations, including cognitive aspects, and assert an axiom of user-centered design.3 4 Human factors encompasses the design, use, and evaluation of a wide variety of tools in a broad sense: for instance, the design and use of an airplane cockpit, the design of a hammer to fit the female human hand, or the incorporation of known concepts about human memory and attention to improve work systems for successful sponge counts in an operating room. Ergonomics emphasizes the physical attributes and designs of tools, such as the size of lettering on an IV pump so that labels are viewable from across the patient's bed, the design of a computer mouse, or the layout of equipment in an intensive care unit to promote optimal workflow. Human-computer interaction focuses on computers and applications for humans, while its closely related concept, usability, stresses the design, interaction, and evaluation of both devices and computer applications by examining specific tasks and interaction outcomes within particular contexts. Examples include the design of an electronic medication administration record for multidisciplinary use, and its subsequent re-design for specific tasks unique to an emergency department setting. Human-computer interaction can also include the design of software to support a group of users working on a shared document, or social sanctions arising from inappropriate blogs among a group of clinicians discussing cardiomyopathy research.
The unique methods available from the HF domain allow researchers to elicit critical thought processes (e.g., cognitive task analysis), work methods (e.g., naturalistic observation) and/or tasks that are crucially important for the design of tools, devices and information systems. Research methods such as ethnographic and qualitative techniques are also useful in defining key user requirements for tools and evaluating existing tools for effectiveness.
Most important, the commonly held goals of human factors are to improve the effectiveness, efficiency, and satisfaction of humans interacting with tools (see Figure 2).5 Effectiveness includes the usefulness of a tool to complete work (tasks) and the safety of the tool. Examples of efficiency include productivity measures such as the time to complete specific tasks, the number of clicks to perform tasks, the costs of the tools, and/or the amount of training time needed for users to learn a software application. Satisfaction can include the perception of any aspect of the tool and typically includes perceptions about workload or the effectiveness of the specific design.
In this review, we focus on the design and evaluation of user interfaces for clinical technology. Optimal technology design is vital to health care because the work and associated tools can be life-critical. For example, in a tragic event, faulty software design for controls in a radiation machine caused a patient to scream in pain during treatment and later die because of a radiation overdose.6 Zhang, et al.7 and Graham, et al.8 both outline serious usability problems with IV pumps, including issues that are likely to cause medical errors. Given the considerable impact of HF in health care, we examined available research about the design of clinical technology organized using the goals of HF: design effectiveness, efficiency and satisfaction.
Formal methods were used to perform a systematic review and assure a thorough search and retrieval process. Procedures included article relevance assessments, data extraction, and data analysis.9 Poor quality studies were not eliminated, as is common in many systematic reviews, because our goal was to describe the available HF research in health care. The years 1980–2009 were included; substantial technology changes for devices and information systems since 1980 would make earlier references not pertinent. Criteria for inclusion were: peer-reviewed publications in English; stated research findings; any study design or method from any country; analyses of medical devices, tools, user interfaces, clinical information systems, or electronic health records in health care environments; and any user, including health providers or patients. Excluded articles were: studies about ergonomics (e.g., cumulative trauma disorders, occupational medicine); conference proceedings; studies about medical transcription devices; descriptions of human factors-related concepts without research findings; usability analyses in non-health care settings; designs solely for patients; and descriptions of work activities or error analyses.
Extensive literature searches were conducted using the research databases Cumulative Index to Nursing and Allied Health Literature (CINAHL), Ovid MEDLINE, PsycINFO, INSPEC, and the EBM Reviews: Health Technology Assessment Database (CLHTA) from 1980 to 2009. Key search terms were: (Human Computer Interaction or HCI) and (Human factors or Usability) and (health$ or health care or medical) and (nurs$). Reference lists of publications were checked for any additional references. Authors independently reviewed citations for relevancy and applied the relevancy criteria; any questionably relevant articles were discussed until consensus was reached. The authors focused on technology targeted to clinicians only.
The search criteria yielded a total of 11,916 articles; delimiting articles to those with health$ or health care or medical terms resulted in 2,234 articles; further delimiting this search to manuscripts with a nursing emphasis resulted in 215 articles. The abstracts from this set of 215 articles were reviewed; 34 articles met the relevance criteria. These articles are summarized in Table 1 with all usability findings. Authors of 18/34 articles examined 17 different application or screen design interfaces; authors of 6/34 studies evaluated 5 different graphical interfaces; 5/34 examined different remote/telemedicine systems; and 5/34 examined different medical device user interfaces.
Authors included multiple outcome variables; these details, divided into 50 separate studies, are found in Table 2. Studies were then classified into three categories based upon the goals of human factors research: effectiveness (24/50), efficiency (10/50), and satisfaction (16/50). The study design and aims, sample, setting, methods, and findings were extracted from each relevant article.
Authors of 24 studies evaluated effectiveness aspects of user interfaces. Effectiveness is the usefulness and safety of an interface in completing a task (see Figure 2). Authors of seven studies illustrate the variability of the types of software being tested, for example, the usefulness of software that automatically created a family pedigree diagram from family history data, a mobile emergency medical services record for paramedics, a laboratory procedures system, and a nurse practitioner outcomes database with graphics.10–13 Researchers found that users were more successful searching for information on homegrown interfaces versus proprietary ones, that users prefer systems that reduce cognitive effort, and that complex queries could be answered more successfully with graphical interfaces versus paper.10, 14–16 In device/system reviews using heuristics, researchers also found severe usability problems caused by limited information visibility and faulty data synchronization, possibly leading to medical errors. In addition, limited system flexibility and poor navigation systems caused users to get lost in the application, and confusion about label meanings led to potential for patient harm.11, 17 To avoid some of these circumstances early in the design process, researchers recommend including users in the development lifecycle to identify user needs and expectations for design requirements.18, 19
Authors of four studies examined the effectiveness of graphical interface designs on clinician decision making for stroke patients, ventilator-dependent patients, patients requiring hemodynamic monitoring and the safety of using a novel electronic medication administration record. Graphical designs improved initiating treatments, determining needed medications, and detecting patients’ deviations from normal physiological parameters; visual cognitive learning styles (versus verbal) resulted in better ability for clinicians to keep vital signs within a target range with advanced physiologic monitoring interfaces.20–22 However, nurses’ medication accuracy was low for medication tasks that required them to scroll beyond the current field of view in a new graphical medication record, despite substantial training with the interface.23
The authors of two studies evaluated the usability of IV pumps and judged their compliance with recognized design guidelines called heuristics. Authors found heuristic violations or non-compliance with recommended design guidelines for two different 1-channel volumetric IV pumps from two different vendors,7 and one 3-channel pump commonly used in the ICU setting.8 The vendors and model numbers were not provided. The heuristic for consistency was violated most frequently. Inconsistencies do not allow users to determine the clear meaning of interface elements such as labels. For example, one pump button labeled “off” for one infusion channel could be confused with the pump “stop” button. Authors found catastrophic usability errors in IV pumps. In one study, a pump adjustment was hidden on the rear of pump handle; this location may cause an inadvertent setting change when a user is just moving the pump. More important, the location makes the button hard to locate to readjust the pump back to normal.7
Two studies included evaluation of patient controlled analgesia (PCA) pumps. In these studies, complex programming sequences and multiple user modes increased the mental workload of nurses; a redesign of the PCA interfaces reduced cognitive load and potential errors in programming the devices.24, 25 Another set of authors caution that devices can be very confusing when they look like a familiar object (a pen) but behave differently (the cap on the pen was a power button).26 These kinds of designs can result in increased cognitive burden, training, and/or redesign.
Authors of remote/mobile device studies examined telemedicine in home health environments,27 electronic diabetes management programs,28–30 and a handheld electronic medical record for physicians.31 Poor sound and visual quality during patient assessments interfered with effective assessments. A mismatch between manual nursing assessment practices and an early telemedicine device design caused delays and difficulties in completing care assessments.
Two different clinical decision support systems were evaluated: a cancer detection system and clinical reminders for HIV patients.32, 33 Researchers assessed the ability of a system to accurately diagnose and inform clinicians. In the HIV reminder study, researchers uncovered barriers that reduced the effectiveness of the reminders: workload, the time required to document information about the reminder, and duplicative paper form systems, among others.
One set of authors evaluated a commercial electronic health record in a clinical setting.34 Researchers identified a total of 134 usability issues; 13 (10%) were potentially severe. For example, long, multi-level screens were confusing to use during admission documentation procedures while clinicians simultaneously obtained a medical history from patients; subsequently, clinical documents in the EHR had to be reconfigured by the vendor before use.
Efficiency aspects (Figure 2) examine productivity (time), costs, efficiency errors, and learnability (defined as the capability of a software product to enable a user to learn how to use it). Accuracy is also important here because inaccuracy in keystrokes takes more time, impacting user costs and productivity. Five of the 10 efficiency studies were evaluations of graphical interfaces. For example, researchers found that a 3-fold increase in information density on screens allowed users to be twice as fast without impacting accuracy, because users did not have to page between screens to find data.35 Graphical user interface designs, compared to text or paper systems, also allowed clinician users to be twice as fast and more accurate in keystrokes.15, 36–38
New user interfaces enhanced users' performance. Researchers demonstrated that improved designs for PCA pumps allowed users to avoid complex programming sequences, reducing time and errors.24, 25 Design can also impact search times for clinical information. One study compared search times for patient care guidelines among different displays and found that users spent nearly twice the search time with one display due to poor document format and organization in the interface.14
User satisfaction is measured by the perceived effectiveness or perceived efficiency of the user interface. Satisfaction was measured in 16/50 studies; new interfaces involving user input for graphical displays and redesigned interfaces of all kinds had higher satisfaction ratings. User satisfaction was measured in studies that evaluated new types of software for clinical processes such as medication administration, order entry, or documenting on transplant patients (see Table 1). Usability problems that negatively affected user satisfaction with interfaces included system inflexibility, poor navigation, poor information quality, lack of control of the system, and limited visibility of system status.39, 40 Researchers found that users want interfaces that are intuitive, formats that make expected data input visible (e.g., MM/DD/YYYY for birthdates), and consolidated information with high-level information presented first.
Clinicians want technology that is easier to operate and is easy to understand, such as alarms with fewer hierarchical levels.22, 41 To obtain favorable user satisfaction results, technologically savvy clinicians also want an option to customize the interface for their own use, for example, some clinicians want to dial in their target ranges on specific measurement levels for their patients.28
This systematic review outlines the existing research for the design of clinical technology across its outcomes of effectiveness, efficiency and satisfaction. The majority of current studies evaluated effectiveness aspects of clinical technology interfaces. Studies about interface efficiency were fewest in number. Of course, a blend of these goals would be optimal to assure efficient and effective clinical technology design.
Current research ranges across a myriad of technology interface types. The types of interfaces examined to date have no apparent pattern nor have they been assessed with any obvious rationale such as their frequency of use in clinical settings.
Although usability studies have not yet penetrated health care widely, researchers have discovered elements of design worth attention. For example, dense screens are faster for nurses' information detection than, and still as accurate as, less dense screens. Thus, designers will want to include dense screens in systems so that clinicians avoid unnecessary movement between screens to search for information. The caveat is that dense screens need to include pertinent information, which means that designers will need to understand how clinicians make decisions and with what information. More careful attention should also be paid to attention-grabbing methods for data located outside nurses' field of view, as such data can easily be missed even when nurses are trained on an application.
Graphical designs facilitate both efficiency and effectiveness measures. These designs improve time to treatment, detecting physiologic parameter deviations and time to complete a wide variety of tasks (e.g., orders, lab procedures, searching for clinical data). A graphical design is especially important for tasks requiring navigation across applications or screens in a system and can improve performance as much as two-fold.36
Researchers overall found improvements in redesigns of older interfaces and with iterative designs created in combination with user testing. Initially, readers might ascribe this finding to a publication bias; however, its prevalence across so many studies can also confirm the validity of the usability axioms of user-centered design and the value of usability testing.
Device evaluations and the sole assessment of an active EHR uncovered serious usability issues, such as problems with the safe programming of PCA and IV pumps and designs that interfered with critical processes such as documenting an admission history. Serious usability issues can be alarming; for instance, nurses were able to program a pump to give an inadvertent overdose without an alarm or warning. The Food and Drug Administration (FDA) currently requires usability testing for devices; however, the seriousness of the findings in the handful of studies here suggests that the FDA expand usability testing, that facilities assess the usability of devices as part of their purchasing processes, and that a department such as quality improvement evaluate devices, especially older ones, for their safety within their institutions.
Recommendations for future research are made in these areas: a) Expand the types, settings and participants for usability testing, b) Develop integrated displays, c) Expand outcome variables in usability studies.
The types of evaluated devices are limited to date. The interfaces for a handful of devices were formally evaluated, including two IV and two PCA pumps. A systematic method for evaluation is needed such as assessing devices based upon their prevalence and use in clinical settings. Obviously, many more devices exist in the clinical setting than were examined to date. Just in an ICU setting alone, numerous physiological monitors and devices (invasive and noninvasive) have an array of alarms with distinctive tones, blinking lights of different colors and shapes, all demanding attention.
Common tools such as IV pumps and the one evaluated EHR had serious usability violations. To ensure safe practice, usability evaluations of clinical technology tools need to be greatly expanded to alleviate potential hazards. Even more important, usability studies are critically needed to examine the cognitive burden, errors, and workflow issues that may exist across devices in clinical settings. How nurses learn, remember, and use the myriad of devices is worthy of more investigation, as is how to design technology to work symbiotically across tools. A national database of known device assessments is needed, particularly for older models with known safety issues.
The Institute of Medicine42 (IOM) encourages the adoption of health information technology as one solution to medical errors. Yet, only one set of authors evaluated an active EHR. HIMSS Analytics reported that over 1,300 US hospitals have at least computerized clinical documentation in place.43 With the impetus to increase EHR implementations, increased health information technology funding in 2009, and the increasing infiltration of EHRs into diverse sites, usability assessments of commercial EHRs are needed to better understand the impacts of these products. Although some vendors incorporate prototyping and usability testing into their development cycles, this practice is not yet widespread. EHR components should be rigorously and iteratively tested using human factors principles by vendors, representative end users and HF experts to assure adequate design before installation.
The majority of tested technologies are those in clinical practice. The findings from these studies are striking, illustrating sources of potential error. Technology used in educational and administrative functions is under-represented, and expanding usability testing into these arenas would be welcomed. HF evaluations of curricular software, especially commercially available products, are needed. Usability evaluations would provide important details about successes and failures for others as they plan to implement new models of learning. Optimal interfaces for nurse executives and administrators are another promising area for research.
The majority of current research settings are laboratories or simulated clinical settings. In the future, studies in naturalistic settings are highly encouraged. These kinds of settings would allow researchers to examine the role of interruptions, competing demands and other typical work issues within the context of their particular technology design. Naturalistic settings would provide researchers with new knowledge and understanding about how technologies are actually used in clinical practice versus artificial settings. Understanding work-arounds nurses create and competing demands would be illuminating.
Interdisciplinary teams participated in 2 device studies, and interface assessments included 11 interdisciplinary teams. The IV pump studies and two graphical interface studies used participants from psychology studies. Actual clinical users should be included in the future, across types of nurses including nurse anesthetists, who are seemingly absent from usability studies to date.
More studies are needed to emulate the kinds of teamwork that occurs with clinical technology in sites. For instance, nurses and pharmacists are under-represented in evaluations of the impact of computerized provider order entry despite the fact that they are both integral to the orders management process and safe execution of orders.44
Computerized support is needed to help nurses integrate information across devices and EHR applications. These integrated data summaries would display pertinent patient data, such as at change of shift. Currently, nurses must integrate data and information from devices and EHRs themselves, typically by remembering data.45 Nurses gather data from various sources, organize the information, and apply knowledge to recognize untoward trends or symptoms. Clinicians currently complain that the "big picture" of the patient is difficult to obtain amid the sea of data in contemporary EHRs. A recent report from the National Academies Press46 recognized the urgent need for better cognitive support from EHRs, including help integrating data.
The most commonly examined outcome variables were user satisfaction, heuristic violations, time, and errors. User satisfaction was an outcome variable in 16 studies. Yet user satisfaction provides only partial insight into technology design; a better assessment would allow investigators to understand why a design improves satisfaction. Moreover, nearly all researchers reported high user satisfaction, although this finding may be due to a publication bias. Other variables such as performance measures (time, accuracy) and aspects of decision-making (correct treatment, detecting adverse events, and patient safety errors) may be more telling aspects of usability evaluations. An expanded list of variables is available elsewhere.2 Thoughtfully chosen outcome variables should be mainstays of future usability research. EHRs in particular should be evaluated from a multi-modal perspective to assess both efficiency and effectiveness aspects.
Last, the gap between research and practice needs to be bridged. Interface evaluations and products from research proved useful and productive, yet research products often remain fixed in the research arena. In the future, bridging this gap should be part of the researcher's agenda.
This review included literature available in refereed journals. Other relevant studies may be available in dissertations, reports and unpublished venues. In the future, other authors may wish to examine studies from conference proceedings and in other languages besides English. Synthesizing results across this myriad of studies, variables, devices, methods and participants was particularly challenging. Additional insights are possible in this body of work.
Usability analyses are critically needed in clinical care settings to evaluate the myriad of equipment, monitors, and software used by health care providers to care for patients. These kinds of analyses provide necessary information about the cognitive workload, workflow changes, and errors occurring from poor technology design. More examinations that include unstudied nursing specialties and settings are needed to provide rich, detailed accounts of experiences with clinical technology. More interdisciplinary work is needed to ensure that clinical systems are designed for maximum benefit of all stakeholders, to increase understanding of information needs and requirements across settings, and to understand shared user performance with devices. Research needs to be conducted in actual practice settings, rural and community settings to outline excellent and less optimal technology designs. Expanding this area of research would enable a better fit between nurses and technology to reduce errors and increase nurses’ productivity.
The project was supported by grant number K08HS016862 from the Agency for Healthcare Research and Quality (Alexander, PI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.
Greg Alexander PhD, University of Missouri, Sinclair School of Nursing S415, Columbia MO 65211, Phone: 573-882-9346, Fax: 573-884-4544.
Nancy Staggers, Informatics Program, College of Nursing, 10 S. 2000 E, University of Utah, Salt Lake City, UT 84108, Phone: 801.699.0112, Fax: 801.581.4297.