ANS Adv Nurs Sci. Author manuscript; available in PMC 2011 January 11.
PMCID: PMC3018768; NIHMSID: NIHMS255635

A Systematic Review on the Designs of Clinical Technology: Findings and Recommendations for Future Research

Greg Alexander, PhD, RN, Assistant Professor, and Nancy Staggers, PhD, RN, FAAN, Professor, Informatics, and Director

Abstract

Human factors (HF) studies are increasingly important as technology infuses into clinical settings, yet no nursing research reviews exist in this area. The authors conducted a systematic review on the designs of clinical technology; 34 articles comprising 50 studies met inclusion criteria. Findings were classified into three categories based on HF research goals. The majority of studies evaluated the effectiveness of clinical designs; studies of efficiency were fewest. Current research ranges across many interface types, examined with no apparent pattern or obvious rationale. Future research should expand the types, settings, and participants studied; develop integrated displays; and expand outcome variables.

INTRODUCTION

Having usable technology is an imperative for contemporary nurses. Suboptimal technology designs contribute to errors, reduce productivity, create extreme frustration, and can even result in system de-installation. The design and development of usable technology can be better assured by applying human factors (HF) concepts. HF principles, research methods, and techniques are widely available outside health care to enhance the effectiveness, efficiency, and user satisfaction of nurse-technology interaction. Yet, these critical elements only trickled into health care in the early 1990s, despite having thoroughly penetrated other industries such as aviation.

The Institute of Medicine ushered HF concepts into the health care consciousness by linking HF to error prevention.1 Research in HF, usability, and human-computer interaction, all related concepts, has expanded greatly over the past 10–15 years. However, no review exists examining the available HF-related research or its diffusion into the nursing arena. Thus, the purposes of this paper are to: 1) systematically review the literature for HF-related research in health care, 2) evaluate its impact on nursing areas of interest, and 3) recommend future research directions.

BACKGROUND

Human factors is a broad term for a set of related concepts about human interactions with tools in associated environments. Figure 1 depicts these concepts and their relationships.2 All HF-related concepts consider human needs, abilities, and limitations, including cognitive aspects, and assert an axiom of user-centered design.3,4 Human factors encompasses the design, use, and evaluation of a wide variety of tools in a broad sense: for instance, the design and use of an airplane cockpit, the design of a hammer to fit the female human hand, or the incorporation of known concepts about human memory and attention to improve work systems for successful sponge counts in an operating room. Ergonomics emphasizes the physical attributes and designs of tools, such as the size of lettering on an IV pump so that labels are viewable from across the patient's bed, the design of a computer mouse, or the layout of equipment in an intensive care unit to promote optimal workflow. Human-computer interaction focuses on computers and applications for humans, while its closely related concept, usability, stresses the design, interaction, and evaluation of both devices and computer applications by examining specific tasks and interaction outcomes within particular contexts. Examples include the design of an electronic medication administration record for multidisciplinary use and its subsequent redesign for tasks unique to an emergency department setting. Human-computer interaction can also include the design of software to support a group of users working on a shared document, or social sanctions arising from inappropriate blog posts among a group of clinicians discussing cardiomyopathy research.

Figure 1
The Relationship of Human Factors Concepts

The unique methods available from the HF domain allow researchers to elicit critical thought processes (e.g., cognitive task analysis), work methods (e.g., naturalistic observation) and/or tasks that are crucially important for the design of tools, devices and information systems. Research methods such as ethnographic and qualitative techniques are also useful in defining key user requirements for tools and evaluating existing tools for effectiveness.

Most important, the commonly held goals of human factors are to improve the effectiveness, efficiency, and satisfaction of humans interacting with tools (see Figure 2).5 Effectiveness includes the usefulness of a tool to complete work (tasks) and the safety of the tool. Examples of efficiency include productivity, such as the time to complete specific tasks, the number of clicks to perform tasks, the costs of the tools, and/or the amount of training time needed for users to learn a software application. Satisfaction can include the perception of any aspect of the tool and typically includes perceptions about workload or the effectiveness of the specific design.

Figure 2
Human Factors Research Goals

In this review, we focus on the design and evaluation of user interfaces for clinical technology. Optimal technology design is vital to health care because the work and associated tools can be life-critical. For example, in a tragic event, faulty software design for controls in a radiation machine caused a patient to scream in pain during treatment and later die because of a radiation overdose.6 Zhang, et al.7 and Graham, et al.8 both outline serious usability problems with IV pumps, including issues that are likely to cause medical errors. Given the considerable impact of HF in health care, we examined available research about the design of clinical technology organized using the goals of HF: design effectiveness, efficiency and satisfaction.

METHODS

Formal methods were used to perform a systematic review and ensure a thorough search and retrieval process. Procedures included article relevance assessments, data extraction, and data analysis.9 Poor-quality studies were not eliminated, as is common in many systematic reviews, because our goal was to describe the available HF research in health care. The years 1980–2009 were included; substantial changes in devices and information systems since 1980 make earlier references not pertinent. Criteria for inclusion were: peer-reviewed publications in English; stated research findings; any study design or method from any country; analyses of medical devices, tools, user interfaces, clinical information systems, or electronic health records in health care environments; and any user, including health providers or patients. Excluded were: studies about ergonomics (e.g., cumulative trauma disorders, occupational medicine); conference proceedings; studies of medical transcription devices; descriptions of human factors-related concepts without research findings; usability analyses in non-health care settings; designs solely for patients; and descriptions of work activities or error analyses.

Extensive literature searches were conducted using the research databases Cumulative Index of Nursing and Allied Health Literature (CINAHL), Ovid MEDLINE, PsycINFO, INSPEC, and the EBM Reviews: Health Technology Assessment Database (CLHTA) from 1980 to 2009. Key search terms were: (Human Computer Interaction or HCI) and (Human factors or Usability) and (health$ or health care or medical) and (nurs$). Reference lists of retrieved publications were checked for additional references. The authors independently reviewed citations and applied the relevance criteria; any questionably relevant articles were discussed until consensus was reached. The authors focused on technology targeted to clinicians only.
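As an illustration of how the Boolean strategy above combines, the short sketch below assembles the query string programmatically. The combine() helper is hypothetical and for exposition only; each database (CINAHL, Ovid MEDLINE, PsycINFO, INSPEC, CLHTA) has its own syntax for truncation wildcards and field tags, so the exact string would be adapted per database.

```python
# Sketch: assembling the review's Boolean search strategy.
# "$" is the truncation wildcard used in Ovid-style databases.

def combine(terms, operator):
    """Join search terms with a Boolean operator, parenthesized."""
    return "(" + f" {operator} ".join(terms) + ")"

query = " AND ".join([
    combine(["Human Computer Interaction", "HCI"], "OR"),
    combine(["Human factors", "Usability"], "OR"),
    combine(["health$", "health care", "medical"], "OR"),
    combine(["nurs$"], "OR"),  # single truncated term
])

print(query)
# (Human Computer Interaction OR HCI) AND (Human factors OR Usability)
#   AND (health$ OR health care OR medical) AND (nurs$)
```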

RESULTS

The search criteria yielded a total of 11,916 articles; delimiting to articles with the health$, health care, or medical terms resulted in 2,234 articles; further delimiting to manuscripts with a nursing emphasis resulted in 215 articles. The abstracts of these 215 articles were reviewed; 34 articles met the relevance criteria. These articles are summarized in Table 1 with all usability findings. Authors of 18 of the 34 articles examined 17 different application or screen design interfaces, authors of 6 articles evaluated 5 different graphical interfaces, 5 examined different remote/telemedicine systems, and 5 examined different medical device user interfaces.
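To make the screening funnel above concrete, a minimal sketch (stage labels are paraphrased from the text) computes the retention rate at each delimiting step:

```python
# Screening funnel reported above: search yield -> relevant articles.
funnel = [
    ("initial search yield", 11_916),
    ("limited to health$/health care/medical", 2_234),
    ("limited to nursing emphasis", 215),
    ("met relevance criteria after abstract review", 34),
]

for (stage, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {n} ({n / prev:.1%} of prior stage)")
# limited to health$/health care/medical: 2234 (18.7% of prior stage)
# limited to nursing emphasis: 215 (9.6% of prior stage)
# met relevance criteria after abstract review: 34 (15.8% of prior stage)
```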

Table 1
Types of User Interfaces By Major Findings Across Combined Effectiveness, Efficiency and Satisfaction Categories

Authors included multiple outcome variables; these details are found in Table 2, in which the 34 articles are divided into 50 separate studies. Studies were then classified into three categories based upon the goals of human factors research: effectiveness (24/50), efficiency (10/50), and satisfaction (16/50). The study design and aims, sample, setting, methods, and findings were extracted from each relevant article.

Table 2
Evidence Table of Clinical Technology Design (User Interface) Studies

Evaluations in Effectiveness

Authors of 24 studies evaluated effectiveness aspects of user interfaces. Effectiveness is the usefulness and safety of an interface in completing a task (see Figure 2). Seven studies illustrate the variability of the types of software being tested: for example, the usefulness of software that automatically created a family pedigree diagram from family history data, a mobile emergency medical services record for paramedics, a laboratory procedures system, and a nurse practitioner outcomes database with graphics.10–13 Researchers found that users were more successful searching for information on homegrown interfaces versus proprietary ones, that users prefer systems that reduce cognitive effort, and that complex queries could be answered more successfully with graphical interfaces versus paper.10, 14–16 In device/system reviews using heuristics, researchers also found severe usability problems caused by limited information visibility and faulty data synchronization, possibly leading to medical errors. In addition, limited system flexibility and poor navigation caused users to get lost in applications, and confusion about the meaning of labels led to potential for patient harm.11, 17 To avoid such problems early in the design process, researchers recommend including users in the development lifecycle to identify users' needs and expectations of design requirements.18, 19

Authors of four studies examined the effectiveness of graphical interface designs on clinician decision making for stroke patients, ventilator-dependent patients, and patients requiring hemodynamic monitoring, as well as the safety of a novel electronic medication administration record. Graphical designs improved initiating treatments, determining needed medications, and detecting patients' deviations from normal physiological parameters; visual (versus verbal) cognitive learning styles resulted in a better ability for clinicians to keep vital signs within a target range with advanced physiologic monitoring interfaces.20–22 However, nurses' medication accuracy was low for medication tasks that required them to scroll beyond the current field of view in a new graphical medication record, despite substantial training with the interface.23

The authors of two studies evaluated the usability of IV pumps and judged their compliance with recognized design guidelines called heuristics. Authors found heuristic violations, or non-compliance with recommended design guidelines, for two different 1-channel volumetric IV pumps from two different vendors7 and one 3-channel pump commonly used in the ICU setting.8 The vendors and model numbers were not provided. The heuristic for consistency was violated most frequently. Inconsistencies do not allow users to determine the clear meaning of interface elements such as labels; for example, one pump button labeled "off" for one infusion channel could be confused with the pump "stop" button. Authors also found catastrophic usability errors in IV pumps. In one study, a pump adjustment button was hidden on the rear of the pump handle; this location may cause an inadvertent setting change when a user is simply moving the pump. More important, the location makes the button hard to locate when readjusting the pump back to normal.7

Two studies included evaluations of patient-controlled analgesia (PCA) pumps. In these studies, complex programming sequences and multiple user modes increased the mental workload of nurses; a redesign of the PCA interfaces reduced cognitive load and potential errors in programming the devices.24, 25 Another set of authors cautions that devices can be very confusing when they look like a familiar object (a pen) but behave differently (the cap of the pen was a power button).26 Such designs can result in increased cognitive burden, training, and/or redesign.

Authors of remote/mobile device studies examined telemedicine in home health environments,27 electronic diabetes management programs,28–30 and a handheld electronic medical record for physicians.31 Poor sound and visual quality interfered with effective patient assessments. A mismatch between manual nursing assessment practices and an early telemedicine device design caused delays and difficulties in completing care assessments.

Two different clinical decision support systems were evaluated: a cancer detection system and clinical reminders for HIV patients.32, 33 Researchers assessed the ability of each system to accurately diagnose and inform clinicians. In the HIV reminder study, researchers uncovered barriers that reduced the effectiveness of the reminders: workload, the time required to document information about the reminder, and duplicative paper form systems, among others.

One set of authors evaluated a commercial electronic health record (EHR) in a clinical setting.34 Researchers identified a total of 134 usability issues; 13 (10%) were potentially severe. For example, long, multi-level screens were confusing to use during admission documentation while clinicians simultaneously obtained a medical history from patients; subsequently, clinical documents in the EHR had to be reconfigured by the vendor before use.

Evaluations of Efficiency

Efficiency aspects (Figure 2) examine productivity (time), costs, efficiency errors, and learnability (the capability of a software product to enable a user to learn how to use it). Accuracy is also important here because inaccurate keystrokes take more time, affecting user costs and productivity. Five of the 10 efficiency studies were evaluations of graphical interfaces. For example, researchers found that a 3-fold increase in information density on screens allowed users to be twice as fast without impacting accuracy, because users did not have to page between screens to find data.35 Graphical user interface designs, compared to text or paper systems, also allowed clinician users to be twice as fast and more accurate in keystrokes.15, 36–38

New user interfaces enhanced users' performance. Researchers demonstrated that improved designs for PCA pumps allowed users to avoid complex programming sequences, reducing time and errors.24, 25 Design can also impact search times for clinical information: one study compared search times for patient care guidelines among different displays and found that users spent nearly twice the search time with one display because of poor document formatting and organization in the interface.14

Evaluations of Satisfaction

User satisfaction is measured by the perceived effectiveness or perceived efficiency of the user interface. Satisfaction was measured in 16 of the 50 studies; new interfaces involving user input for graphical displays and redesigned interfaces of all kinds had higher satisfaction ratings. User satisfaction was measured in studies that evaluated new types of software for clinical processes such as medication administration, order entry, and documentation for transplant patients (see Table 1). Usability problems that negatively affected user satisfaction included system inflexibility, poor navigation, poor information quality, lack of control of the system, and limited visibility of system status.39, 40 Researchers found that users want interfaces that are intuitive, formats that make the expected data entry visible (e.g., MM/DD/YYYY for birthdates), and consolidated information with high-level information presented first.
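The "visible format" recommendation can be as simple as displaying the expected pattern beside the field and validating entries against it. A minimal sketch, assuming a MM/DD/YYYY birthdate field (the field name and prompt are hypothetical, not drawn from any reviewed system):

```python
from datetime import datetime, date
from typing import Optional

BIRTHDATE_FORMAT = "%m/%d/%Y"            # displayed to users as MM/DD/YYYY
BIRTHDATE_HINT = "Birthdate (MM/DD/YYYY): "

def parse_birthdate(entry: str) -> Optional[date]:
    """Return a date if the entry matches the visible format, else None."""
    try:
        return datetime.strptime(entry.strip(), BIRTHDATE_FORMAT).date()
    except ValueError:
        return None  # caller re-prompts, keeping the format hint visible

print(parse_birthdate("07/04/1962"))   # 1962-07-04
print(parse_birthdate("4 July 1962"))  # None -> re-prompt with the hint shown
```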

Clinicians want technology that is easy to operate and easy to understand, such as alarms with fewer hierarchical levels.22, 41 Technologically savvy clinicians also want an option to customize the interface for their own use; for example, some clinicians want to dial in target ranges for specific measurements for their patients.28

DISCUSSION

This systematic review outlines the existing research on the design of clinical technology across the outcomes of effectiveness, efficiency, and satisfaction. The majority of current studies evaluated effectiveness aspects of clinical technology interfaces; studies about interface efficiency were fewest in number. Of course, a blend of these goals would be optimal to assure efficient and effective clinical technology design.

Current Research on Technology Design

Current research ranges across a myriad of technology interface types. The types of interfaces examined to date show no apparent pattern, nor have they been selected with any obvious rationale, such as frequency of use in clinical settings.

Although usability studies have not yet penetrated health care widely, researchers have discovered elements of design worth attention. For example, denser screens support faster information detection by nurses than less dense screens while remaining just as accurate. Thus, designers will want to include dense screens in systems so that clinicians avoid unnecessary movement between screens to search for information. The caveat is that dense screens need to include pertinent information, which means that designers will need to understand how clinicians make decisions and with what information. More careful attention should also be paid to attention-grabbing methods for data located outside nurses' field of view, as such data can easily be missed even when nurses are trained on an application.

Graphical designs facilitate both efficiency and effectiveness. These designs improve time to treatment, detection of physiologic parameter deviations, and time to complete a wide variety of tasks (e.g., orders, lab procedures, searching for clinical data). A graphical design is especially important for tasks requiring navigation across applications or screens in a system and can improve performance as much as two-fold.36

Researchers overall found improvements in redesigns of older interfaces and with iterative designs created in combination with user testing. Initially, readers might ascribe this finding to publication bias; however, its prevalence across so many studies can also confirm the validity of the usability axiom of user-centered design and the value of usability testing.

Device evaluations and the sole assessment of an active EHR uncovered serious usability issues, such as problems with the safe programming of PCA and IV pumps and designs that interfered with critical processes such as documenting an admission history. Serious usability issues can be alarming; for instance, nurses were able to program a pump to give an inadvertent overdose without an alarm or warning. The Food and Drug Administration (FDA) currently requires usability testing for devices; however, the seriousness of the findings in the handful of studies here suggests that the FDA expand usability testing, that facilities assess the usability of devices as part of their purchasing processes, and that a department such as quality improvement evaluate devices, especially older ones, for their safety in their institutions.

Future Research Directions

Recommendations for future research are made in three areas: a) expand the types, settings, and participants for usability testing; b) develop integrated displays; and c) expand outcome variables in usability studies.

Expand the Types of Evaluations, Settings and Participants

Types of Evaluations

The types of evaluated devices are limited to date; the interfaces of only a handful of devices have been formally evaluated, including two IV and two PCA pumps. A systematic method for evaluation is needed, such as assessing devices based upon their prevalence and use in clinical settings. Obviously, many more devices exist in clinical settings than have been examined to date. In an ICU setting alone, numerous physiological monitors and devices (invasive and noninvasive) have an array of alarms with distinctive tones and blinking lights of different colors and shapes, all demanding attention.

Common tools such as IV pumps and the one evaluated EHR had serious usability violations. To ensure safe practice, usability evaluations of clinical technology need to be greatly expanded to alleviate potential hazards. Even more important, usability studies are critically needed to examine the cognitive burden, errors, and workflow issues that may exist across devices in clinical settings. How nurses learn, remember, and use the myriad of devices is worthy of more investigation, as is how to design technology to work symbiotically across tools. A national database of known device assessments is needed, particularly for older models with known safety issues.

The Institute of Medicine42 (IOM) encourages the adoption of health information technology as one solution to medical errors. Yet, only one set of authors evaluated an active EHR. HIMSS Analytics reported that over 1,300 US hospitals have at least computerized clinical documentation in place.43 With the impetus to increase EHR implementations, increased health information technology funding in 2009, and the increasing infiltration of EHRs into diverse sites, usability assessments of commercial EHRs are needed to better understand the impacts of these products. Although some vendors incorporate prototyping and usability testing into their development cycles, this practice is not yet widespread. EHR components should be rigorously and iteratively tested using human factors principles by vendors, representative end users and HF experts to assure adequate design before installation.

The majority of tested technologies are those used in clinical practice, and the findings from these studies are striking, illustrating sources of potential error. Technology used for educational and administrative functions is under-represented, and expanding usability testing into these arenas would be welcome. HF evaluations of curricular software, especially commercially available products, are needed. Usability evaluations would provide important details about successes and failures for others as they plan to implement new models of learning. Optimal interfaces for nurse executives and administrators are another promising area for research.

Evaluation Settings

The majority of current research settings are laboratories or simulated clinical settings. In the future, studies in naturalistic settings are highly encouraged. These settings would allow researchers to examine the role of interruptions, competing demands, and other typical work issues within the context of a particular technology design. Naturalistic settings would provide researchers with new knowledge and understanding about how technologies are actually used in clinical practice versus artificial settings. Understanding the work-arounds nurses create and the competing demands they face would be illuminating.

Participants

Interdisciplinary teams participated in 2 device studies, and interface assessments included 11 interdisciplinary teams. The IV pump studies and two graphical interface studies used participants from psychology studies. Actual clinical users should be included in the future, across types of nurses, including nurse anesthetists, who are seemingly absent from usability studies to date.

More studies are needed that emulate the kinds of teamwork that occur with clinical technology in clinical sites. For instance, nurses and pharmacists are under-represented in evaluations of the impact of computerized provider order entry despite the fact that both are integral to the orders management process and the safe execution of orders.44

Develop Integrated Displays

Computerized support is needed to help nurses integrate information across devices and EHR applications. Integrated data summaries would display pertinent patient data, such as at change of shift. Currently, nurses must integrate data and information from devices and EHRs themselves, typically by remembering data.45 Nurses gather data from various sources, organize the information, and apply knowledge to recognize untoward trends or symptoms. Clinicians currently complain that the "big picture" of the patient is difficult to obtain from the sea of data in contemporary EHRs. A recent report from the National Academies Press46 recognized the urgent need for better cognitive support from EHRs, including help integrating data.

Expand Outcome Variables in Usability Studies

The most commonly examined outcome variables were user satisfaction, heuristic violations, time, and errors. User satisfaction was an outcome variable in 16 studies, yet user satisfaction provides only partial insight into technology design; a better assessment would allow investigators to understand why a design improves satisfaction. Moreover, researchers nearly all claim high user satisfaction, although this finding may be due to publication bias. Other variables, such as performance measures (time, accuracy) and aspects of decision making (correct treatment, detection of adverse events, and patient safety errors), may be more telling in usability evaluations. An expanded list of variables is available elsewhere.2 Thoughtfully chosen outcome variables should be mainstays of future usability research. EHRs in particular should be evaluated from a multi-modal perspective to assess both efficiency and effectiveness.

Last, the gap between research and practice needs to be bridged. Interface evaluations and products from research have proved useful and productive, yet research products often remain fixed in the research arena. In the future, bridging this gap should be part of the researcher's agenda.

Limitations

This review included literature available in refereed journals; other relevant studies may be available in dissertations, reports, and unpublished venues. In the future, other authors may wish to examine studies from conference proceedings and in languages other than English. Synthesizing results across this myriad of studies, variables, devices, methods, and participants was particularly challenging; additional insights are possible from this body of work.

CONCLUSION

Usability analyses are critically needed in clinical care settings to evaluate the myriad of equipment, monitors, and software used by health care providers to care for patients. These analyses provide necessary information about the cognitive workload, workflow changes, and errors arising from poor technology design. More examinations that include unstudied nursing specialties and settings are needed to provide rich, detailed accounts of experiences with clinical technology. More interdisciplinary work is needed to ensure that clinical systems are designed for the maximum benefit of all stakeholders, to increase understanding of information needs and requirements across settings, and to understand shared user performance with devices. Research needs to be conducted in actual practice settings, including rural and community settings, to identify excellent and less optimal technology designs. Expanding this area of research would enable a better fit between nurses and technology, reducing errors and increasing nurses' productivity.

Acknowledgments

The project was supported by grant number K08HS016862 from the Agency for Healthcare Research and Quality (Alexander, PI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Contributor Information

Greg Alexander PhD, University of Missouri, Sinclair School of Nursing S415, Columbia MO 65211, Phone: 573-882-9346, Fax: 573-884-4544.

Nancy Staggers, Informatics Program, College of Nursing, 10 S. 2000 E, University of Utah, Salt Lake City, UT 84108, Phone: 801.699.0112, Fax: 801.581.4297.

Reference List

1. Kohn L, Corrigan J, Donaldson M. To Err is Human. Washington DC: National Academies Press; 1999.
2. Staggers N. Human-computer interaction. In: Englebardt S, Nelson R, editors. Information Technology in Health Care: An Interdisciplinary Approach. Harcourt Health Science Company; 2001. pp. 321–45.
3. Dix A, Finlay JE, Abowd GD, Beale R. Human-Computer Interaction. 3. Essex England: Prentice Hall; 2004.
4. Carayon P. Handbook of Human Factors and Ergonomics in Health Care and Patient Safety. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 3–5.
5. TIGER (Technology Informatics Guiding Education Reform). The TIGER Initiative: Collaborating to Integrate Evidence and Informatics into Nursing Practice and Education: An Executive Summary. 2009. [Accessed May 12, 2009]. http://www.tigersummit.com/uploads/TIGER_Collaborative_Exec_Summary_040509.pdf.
6. Sears A, Jacko J. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. 2nd ed. New York: Taylor and Francis Group; 2008.
7. Zhang J, Johnson TR, Patel VL, Paige DL, Kubose T. Using usability heuristics to evaluate patient safety of medical devices. Journal of Biomedical Informatics. 2003;36:23–30. [PubMed]
8. Graham MJ, Kubose TK, Jordan D, Zhang J, Johnson TR, Patel VL. Heuristic evaluation of infusion pumps: Implications for patient safety in Intensive Care Units. International Journal of Medical Informatics. 2004;73:771–9. [PubMed]
9. Cochrane. The Cochrane Manual. Cochrane Collaboration; 2006.
10. Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. Journal of Biomedical Informatics. 2005;38:75–87. [PubMed]
11. Tang Z, Johnson TR, Tindall RD, Zhang J. Applying heuristic evaluation to improve the usability of a telemedicine system. Telemedicine and e-Health. 2006;12(1):24–35. [PubMed]
12. Terazzi A, Giordano A, Minuco G. How can usability measurement affect the re-engineering process of clinical software procedures? International Journal of Medical Informatics. 1998;52:229–34. [PubMed]
13. Hortman PA, Thompson CB. Evaluation of user interface satisfaction of a clinical outcomes database. CIN: Computers, Informatics, Nursing. 2005;23(6):301–7. [PubMed]
14. Wallace CJ, Bigelow S, Xu X, Elstein L. Usability of text-based, electronic patient care guidelines. CIN: Computers, Informatics, Nursing. 2007;25(1):39–44. [PubMed]
15. Martins SB, Shahar Y, Goren-Bar D, et al. Evaluation of an architecture for intelligent query and exploration of time oriented clinical data. Artificial Intelligence in Medicine. 2008;43:17–34. [PMC free article] [PubMed]
16. Horsky J, Kaufman DR, Oppenheim MI, Patel VL. A framework for analyzing the cognitive complexity of computer-assisted clinical ordering. Journal of Biomedical Informatics. 2003;36:4–22. [PubMed]
17. Peute LWP, Jaspers MWM. The significance of a usability evaluation of an emerging lab order entry system. International Journal of Medical Informatics. 2007;76:157–68. [PubMed]
18. Alberdi E, Gilhooly K, Hunter J, et al. Computerisation and decision making in neonatal intensive care: A cognitive engineering investigation. Journal of Clinical Monitoring. 2000;16:85–94. [PubMed]
19. Allen M, Currie LM, Bakken S, Patel V, Cimino JJ, Patel VL. Heuristic evaluation of paper-based Web pages: A simplified inspection usability methodology. Journal of Biomedical Informatics. 2006;39:412–23. [PubMed]
20. Effken JA, Kim NG, Shaw RE. Making the constraints visible: Testing the ecological approach to interface design. Ergonomics. 1997;40(1):1–27. [PubMed]
21. Effken JA, Doyle M. Interface design and cognitive style in learning an instructional computer simulation. Computers in Nursing. 2001;19(4):164–71. [PubMed]
22. Liu Y, Osvalder AL. Usability evaluation of a GUI prototype for a ventilator machine. Journal of Clinical Monitoring and Computing. 2004;18:365–72. [PubMed]
23. Staggers N, Kobus D, Brown C. Nurses' evaluations of a novel design for an electronic medication administration record. CIN: Computers, Informatics, Nursing. 2007;25(2):67–75. [PubMed]
24. Lin L, Vicente KJ, Doyle DJ. Patient safety, potential adverse drug events, and medical device design: a human factors engineering approach. Journal of Biomedical Informatics. 2001 August;34(4):274–84. [PubMed]
25. Lin L, Isla R, Doniz K, Harkness H, Vicente KJ, Doyle DJ. Applying human factors to the design of medical equipment: patient-controlled analgesia. Journal of Clinical Monitoring & Computing. 1998 May;14(4):253–63. [PubMed]
26. Despont-Gros C, Rutschmann O, Geissbuhler A, Lovis C. Acceptance and cognitive load in a clinical setting of a novel device allowing natural real-time data acquisition. International Journal of Medical Informatics. 2007;76:850–5. [PubMed]
27. Lindberg C. Implementation of in-home telemedicine in rural Kansas: Answering an elderly patient's needs. Journal of the American Medical Informatics Association. 1997;4:14–7. [PMC free article] [PubMed]
28. Fonda SJ, Paulsen CA, Perkins J, Kedziora RJ, Rodbard D, Bursell SE. Usability test of an internet-based informatics tool for diabetes care providers: The Comprehensive Diabetes Management Program. Diabetes Technology & Therapeutics. 2008;10(1):16–24. [PubMed]
29. Chaikoolvatana A, Haddawy P. The development of a computer based learning (CBL) program in diabetes management. Journal of the Medical Association of Thailand. 2006;89(10):1742–8. [PubMed]
30. Hun Yoo S, Chul Yoon W. Modeling users' task performance on the mobile device: PC convergence system. Interacting with Computers. 2006;18:1084–100.
31. Wu RC, Orr MS, Chignell M, Straus SE. Usability of a mobile electronic medical record prototype: a verbal protocol analysis. Informatics for Health & Social Care. 2008;33(2):139–49. [PubMed]
32. Fuchs J, Heller I, Topilsky M, Inbar M. CaDet, a computer-based clinical decision support system for early cancer detection. Cancer Detection and Prevention. 1999;23(1):78–87. [PubMed]
33. Patterson ES, Nguyen AD, Halloran JP, Asch SM. Human factors barriers to the effective use of ten HIV clinical reminders. Journal of the American Medical Informatics Association. 2004 January;11(1):50–9. [PMC free article] [PubMed]
34. Edwards PJ, Moloney KP, Jacko JA, Sainfort F. Evaluating usability of a commercial electronic health record: A case study. International Journal of Human-Computer Studies. 2008;66:718–28.
35. Staggers N, Mills ME. Nurse-Computer interaction: Staff performance outcomes. Nursing Research. 1994;43(3):144–50. [PubMed]
36. Staggers N, Kobus D. Comparing response time, errors, and satisfaction between text-based and graphical user interfaces during nursing order tasks. Journal of the American Medical Informatics Association. 2000 March;7(2):164–76. [PMC free article] [PubMed]
37. Mills EM, Staggers N. Nurse computer performance: Considerations for the nurse administrator. Journal of Nursing Administration. 1994;24(11):30–5. [PubMed]
38. Lamy JB, Venot A, Bar-Hen A, Ouvrard P, Duclos C. Design of a graphical and interactive interface for facilitating access to drug contraindications, cautions for use, interactions and adverse effects. BMC Medical Informatics and Decision Making. 2008;8(21) [PMC free article] [PubMed]
39. Narasimhadevara A, Radhadrishnan T, Leung B, Jayakumar R. On designing a usable interactive system to support transplant nursing. Journal of Biomedical Informatics. 2008;41:137–51. [PubMed]
40. van der Meijden MJ, Solen I, Hasman A, Troost J, Tange HJ. Two patient care information systems in the same hospital: Beyond technical aspects. Methods of Information in Medicine. 2003;42:423–7. [PubMed]
41. Lin YH, Jan IC, Ko P, Chen YY, Wong JM, Jan GJ. A wireless PDA-based physiological monitoring system for patient transport. IEEE Transactions of Information Technology in Biomedicine. 2004;8(4):439–47. [PubMed]
42. Institute of Medicine. Key capabilities of an electronic health record system. 2003. http://www.nap.edu/books/NI000427/html/.
43. HIMSS. HIMSS Analytics. 2009. [Accessed January 19, 2009]. http://www.himssanalytics.org/.
44. Weir C, Staggers N, Phansalkar S. The state of the evidence for computerized provider order entry: A systematic review and analysis of the quality of the literature. International Journal of Medical Informatics. 2009;78(6):365–74. [PubMed]
45. Staggers N, Jennings BM. The content and context of change of shift report on medical and surgical units. Journal of Nursing Administration. 2009 In press. [PubMed]
46. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington DC: National Academies Press; 2009. [PubMed]