J Oncol Pract. 2010 May; 6(3): 125–126.
PMCID: PMC2868636

Commentary: Electronic Health Records and Human Performance

The usability of electronic health records (EHRs) has received increased attention, because there is mounting evidence that usability is one of the key barriers to adoption of EHRs.1,2 Every day, health care workers face many usability issues with EHRs:

  • Inefficient workflows that fail to match clinical processes
  • Confusing popup messages that can sometimes be ignored and sometimes not
  • Poorly designed screens, overloaded with data that obscures potentially critical information
  • Alert fatigue (both visual and audio)
  • Frustration with too many clicks during common tasks.

All this contributes to a weariness and irritation that ultimately affects patient care. But what does it mean to have a usable application? And why are EHRs difficult to use? In this issue of Journal of Oncology Practice, Corrao et al3 examine the usability of an oncology EHR using three methods: heuristic evaluation, a modified form of usability testing, and a survey. The authors successfully identified items that hindered usability and should have been resolved before implementation. The article also illustrates a subtle but profound terminology issue. Usability is often reported as a property of an application: “That application is not very usable.” However, usability can also be viewed as a process, as in conducting usability evaluations. Exploring both senses of the term will help us answer the questions above.

Usability As Property

The National Institute of Standards and Technology defines usability as “the effectiveness, efficiency, and satisfaction with which the intended users can achieve their tasks in the intended context of product use.”4 This is essentially a property definition. The usability of EHRs, like that of other software, can be measured and compared in terms of these properties. For instance, we can measure the quality, frequency, and speed with which a prescription is written. By comparing such measures, we can say that EHR A is more usable than EHR B. But, as in medicine, usability is more nuanced than this. There are no purely objective measures of usability, although a number of metrics aligned with usability exist. Ultimately, these measurements reflect the interactions among efficiency, user satisfaction, and design factors of the user interface.
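To make the property view concrete, the three NIST dimensions can be summarized from ordinary session logs. The sketch below is illustrative only: the session data, field names, and metric names are hypothetical, and real usability studies would use richer instruments than a single rating.

```python
from statistics import mean

# Hypothetical session logs for one task (eg, writing a prescription):
# (task completed?, seconds on task, satisfaction rating 1-5)
sessions = [
    (True, 42.0, 4), (True, 55.5, 3), (False, 90.0, 2),
    (True, 38.2, 5), (True, 61.0, 4),
]

def usability_metrics(sessions):
    """Summarize the three NIST usability properties for one task."""
    completed = [s for s in sessions if s[0]]
    return {
        # effectiveness: fraction of users who finished the task
        "effectiveness": len(completed) / len(sessions),
        # efficiency: mean time on task among successful attempts
        "efficiency_s": mean(t for _, t, _ in completed),
        # satisfaction: mean subjective rating across all users
        "satisfaction": mean(r for *_, r in sessions),
    }

m = usability_metrics(sessions)
print(m)
```

Comparing these numbers between EHR A and EHR B for the same task is one way to operationalize the claim that one system is "more usable" than another.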

Measuring usability properties is a diagnostic process; it tells us where things are right and wrong (eg, workflow efficiency). These diagnoses lead to interventions (eg, removal of unnecessary steps) intended to increase the perceived usability of the application. However, with an existing EHR, simply measuring usability properties and making incremental changes fails to address a more fundamental question: How can we make EHRs more usable in the first place? This is where usability as process comes into play.

Usability As Process

Incorporating improved human performance into an application is known as user-centered design (UCD).5 UCD is a philosophy that has been around for several decades and has the following principles:

  • Understand user needs, workflows, and work environments
  • Engage users early and often
  • Set user performance objectives
  • Design the user interface from known human behavioral principles and familiar user interface models
  • Conduct usability tests to measure how well the interface meets the needs
  • Adapt the design and iteratively test with users until performance objectives are met.

Effective user interface design starts with understanding and involving users early on. The UCD process ends only when the user performance objectives are met through reproducible usability testing. Much of the UCD process is conducted before a single line of application code is written. Software development processes that adhere to UCD principles produce more usable applications than those that do not. Most enterprise software companies use such UCD practices; they also employ dozens of user experience researchers. Anecdotally, only a small handful of EHR vendors have any dedicated usability staff; most have none.

There is a danger in implementing a “lite version” of UCD (or as some have called it, “voodoo usability”). Distilled versions of human performance engineering are widely practiced under the label of discount usability. Discount usability is to human performance engineering as first aid is to professional medical care. Expecting first aid to substitute for a thorough exam, diagnosis, and treatment is simply not realistic. The practice of usability is not always formal; it does not need to be. Many usability professionals practice discount usability techniques frequently but only when appropriate.6 Discount usability serves a valuable purpose, but it is not a substitute for a proper user-centered design program.

Usable designs do not come about in a flash of brilliance. Instead, they are based on systematic analysis of end-user needs, workflow development, application of design guidelines and standards, and user testing, driven by dedication to create the most usable interface possible.

Why Are EHRs Difficult to Use?

Many EHRs remain painful for users. Many factors contribute, but two design errors are especially common:

Failure to use human-factors design principles.

In cognitive psychology, as in medicine, basic research is often translated into clinical practice. Human-factors psychologists have distilled what we know about human behavior vis-à-vis technology into engineering documents that describe how a user interface should be designed. How the eye scans a page, how the brain processes a message in a dialog box, and how a keyboard layout shapes input speed and errors are all well studied, and designs can be improved on the basis of this research on human performance.7,8 These principles of human performance are accepted and practiced in some disciplines (eg, nuclear power, aviation, medical devices) but remain virtually unknown in medical informatics. Thus, EHRs could be made easier to use if known human-factors principles were applied.

Lack of consistency within and across applications.

The EHR is but one application in an ecosystem. Many physicians deal with two or more EHRs throughout the day, as well as myriad other applications and devices. These applications often have conflicting workflows, icons, labels, commands, and controls. Such inconsistencies magnify the likelihood of error. EHRs could be made easier to use if they conformed to user interface standards. Standards enable positive transfer of training, reduce learning time, and engender user confidence and improved user performance. User interface standards for medical applications derived from principle and practice exist but are not widely employed (eg, the Microsoft Health Common User Interface [http://www.mscui.net]).

No one sets out to create an EHR that is difficult to use. Yet most EHR developers are unaware of, or do not understand the importance of, human-factors design principles and the application of user interface standards. Patient safety is compromised when a programmer innocently selects a drop-down field length that will not display full drug names. Developers who violate principles of human performance run the risk of delivering systems that result in poor user performance, increased error, and low user satisfaction.
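The drop-down example can be made concrete with a minimal sketch. The field width and drug names below are illustrative assumptions, not taken from any particular EHR, but they show how naive truncation can collapse two distinct products into one visible label:

```python
def dropdown_label(name, width=18):
    """Naively fit a drug name into a fixed-width drop-down field."""
    return name[:width]

# Two distinct products whose names share a long prefix:
drugs = ["methylprednisolone acetate",
         "methylprednisolone sodium succinate"]

labels = [dropdown_label(d) for d in drugs]
print(labels)  # both truncate to the same visible label

# One safer display choice: elide the middle so the distinguishing
# tail of the name stays visible to the prescriber.
def safe_label(name, width=18):
    if len(name) <= width:
        return name
    keep = (width - 1) // 2
    return name[:keep] + "…" + name[-(width - 1 - keep):]
```

With `dropdown_label`, both products render identically in the list; `safe_label` keeps them visually distinct within the same field width. The point is not this particular fix but that the choice is a human-performance decision, not merely a layout detail.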

Conclusion

EHR vendors are often caught up in a kitchen-sink mentality: the more features an EHR application has, the better. Certification and rating systems that focus only on functionality encourage this. Lately, the spotlight has shifted to usability. The Agency for Healthcare Research and Quality has sponsored research by recognized usability experts.9 The National Institute of Standards and Technology and the Office of the National Coordinator for Health Information Technology are also moving to develop a usability framework. This trend should continue.

The article by Corrao et al3 demonstrates that measuring usability can provide insights that will facilitate incremental improvements; few would dispute this. However, an increasing emphasis should be placed on making applications more usable and therefore safer from the outset. This is particularly critical for oncology EHRs, because a majority of these products are used in chemotherapy prescribing and delivery, a process that is associated with high potential for patient harm if safe practices are not followed. Human performance with systems is not arbitrary; high usability can be engineered into the product from the outset. Rather than diagnosing what is wrong with EHRs, we could improve the fitness of EHRs by embracing user-centered design processes, human-factors knowledge, and user interface standards.

Author's Disclosures of Potential Conflicts of Interest

Although all authors completed the disclosure declaration, the following author(s) indicated a financial or other interest that is relevant to the subject matter under consideration in this article. Certain relationships marked with a “U” are those for which no compensation was received; those relationships marked with a “C” were compensated. For a detailed description of the disclosure categories, or for more information about ASCO's conflict of interest policy, please refer to the Author Disclosure Declaration and the Disclosures of Potential Conflicts of Interest section in Information for Contributors.

Employment or Leadership Position: Robert M. Schumacher, User Centric (C) Consultant or Advisory Role: Robert M. Schumacher, User Centric (C) Stock Ownership: Robert M. Schumacher, User Centric Honoraria: None Research Funding: None Expert Testimony: None Other Remuneration: None

References

1. HIMSS EHR Usability Task Force. Defining and Testing EMR Usability: Principles and Proposed Methods of EMR Usability Evaluation and Rating, 2009. http://www.himss.org/content/files/HIMSS_DefiningandTestingEMRUsability.pdf.
2. Gans D, Kralewski J, Hammons T, et al. Medical groups' adoption of electronic health records and information systems. Health Aff (Millwood) 2005;24:1323–1333.
3. Corrao NJ, Robinson AG, Swiernik MA, et al. Importance of testing for usability when selecting and implementing an electronic health or medical record system. J Oncol Pract. 2010;6:120–124.
4. National Institute of Standards and Technology. NISTIR 7432: Common Industry Specification for Usability—Requirements, 2007. http://zing.ncsl.nist.gov/iusr/documents/CISU-R-IR7432.pdf.
5. Norman DA, Draper SW, editors. User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates; 1986.
6. Rohrer C. When to use which user experience research method, 2008. http://www.useit.com/alertbox/user-research-methods.html.
7. Wickens CD, Hollands JG. Engineering Psychology and Human Performance. 3rd ed. Upper Saddle River, NJ: Prentice Hall; 2000.
8. Card S, Moran T, Newell A. The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
9. Armijo D, McDonnell C, Werner K. Electronic Health Record Usability: Interface Design Considerations. AHRQ Publication No. 09(10)-0091-2-EF. Rockville, MD: Agency for Healthcare Research and Quality; 2009.

Articles from Journal of Oncology Practice are provided here courtesy of American Society of Clinical Oncology