The usability of electronic health records (EHRs) has received increased attention because there is mounting evidence that usability is one of the key barriers to EHR adoption.1,2 Every day, health care workers face many usability issues with EHRs.
All of this contributes to a weariness and irritation that ultimately affects patient care. But what does it mean for an application to be usable? And why are EHRs difficult to use? In this issue of Journal of Oncology Practice, Corrao et al3 examine the usability of an oncology EHR using three methods: heuristic evaluation, a modified form of usability testing, and a survey. The authors successfully identified items that hindered usability and should have been resolved before implementation. The article also illustrates a subtle but profound terminology issue. Usability is often treated as a property of an application: "That application is not very usable." However, usability can also be viewed as a process, as in conducting usability evaluations. Exploring both senses of the term will help us answer the questions above.
The National Institute of Standards and Technology defines usability as "the effectiveness, efficiency, and satisfaction with which the intended users can achieve their tasks in the intended context of product use."4 This is essentially a property definition. The usability of EHRs, like that of other software, can be measured and compared in terms of these properties. For instance, we can measure the quality, frequency, and speed with which a prescription is written. By comparing such measures, we might conclude that EHR A is more usable than EHR B. As in medicine, however, things are more nuanced than this. There are no purely objective measures of usability, although a number of metrics aligned with usability exist. Ultimately, these measurements reflect the interactions among efficiency, user satisfaction, and the design factors of the user interface.
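As a sketch of what such a comparison might look like in practice, the following code computes three common usability metrics (effectiveness as success rate, efficiency as mean task time, and error rate) for two hypothetical EHRs. All session data, names, and field structure here are invented for illustration; they are not from the study under discussion.

```python
# Hypothetical sketch: comparing basic usability metrics for two EHRs.
# All task data below is invented for illustration only.
from statistics import mean

# Each record: (completed_successfully, seconds_to_write_prescription, errors_made)
sessions = {
    "EHR_A": [(True, 42.0, 0), (True, 55.5, 1), (False, 90.0, 2), (True, 47.0, 0)],
    "EHR_B": [(True, 68.0, 1), (False, 120.0, 3), (True, 75.0, 2), (False, 99.0, 2)],
}

def usability_metrics(records):
    """Effectiveness (success rate), efficiency (mean task time), and error rate."""
    return {
        "success_rate": mean(1.0 if ok else 0.0 for ok, _, _ in records),
        "mean_task_seconds": mean(t for _, t, _ in records),
        "mean_errors": mean(e for _, _, e in records),
    }

for ehr, records in sessions.items():
    print(ehr, usability_metrics(records))
```

Even a toy comparison like this makes the caveat in the text concrete: the numbers are easy to compute, but interpreting them still requires judgment about which tasks, users, and contexts the measurements represent.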
Measuring usability properties is a diagnostic process; it tells us where things are right and wrong (eg, workflow efficiency). These diagnoses lead to interventions (eg, removal of unnecessary steps) intended to increase the perceived usability of the application. However, with an existing EHR, simply measuring usability properties and making incremental changes fails to address a more fundamental question: How can we make EHRs more usable in the first place? This is where usability as process comes into play.
The process of engineering improved human performance into an application is known as user-centered design (UCD).5 UCD is a design philosophy that has been practiced for several decades and rests on a few core principles.
Effective user interface design starts with understanding and involving users early. The UCD process ends only when user performance objectives are met through reproducible usability testing. Much of the UCD work is done before a single line of application code is written. Software development processes that adhere to UCD principles produce more usable applications than those that do not. Most enterprise software companies follow such UCD practices and employ dozens of user-experience researchers. Anecdotally, only a handful of EHR vendors have any dedicated usability staff; most have none.
There is a danger in implementing a “lite version” of UCD (or as some have called it, “voodoo usability”). Distilled versions of human performance engineering are widely practiced under the label of discount usability. Discount usability is to human performance engineering as first aid is to professional medical care. Expecting first aid to substitute for a thorough exam, diagnosis, and treatment is simply not realistic. The practice of usability is not always formal; it does not need to be. Many usability professionals practice discount usability techniques frequently but only when appropriate.6 Discount usability serves a valuable purpose, but it is not a substitute for a proper user-centered design program.
Usable designs do not come about in a flash of brilliance. Instead, they are based on systematic analysis of end-user needs, workflow development, application of design guidelines and standards, and user testing, driven by a dedication to creating the most usable interface possible.
Many EHRs remain painful to use. Many factors contribute, but two design errors are especially common:
In cognitive psychology, as in medicine, basic research is often translated into practice. Human-factors psychologists have distilled what we know about human behavior vis-à-vis technology into engineering documents that describe how a user interface should be designed. Designs can be improved by drawing on established research into how the eye scans a page, how the brain processes a message in a dialog box, and how the keyboard is laid out.7,8 These principles of human performance are accepted and practiced in some disciplines (eg, nuclear power, aviation, medical devices) but remain virtually unknown in medical informatics. Thus, EHRs could be made easier to use if known human-factors principles were applied.
The EHR is but one application in a larger ecosystem. Many physicians deal with two or more EHRs throughout the day, as well as a myriad of other applications and devices. These applications often have conflicting workflows, icons, labels, commands, and controls, and such inconsistencies magnify the likelihood of error. EHRs could be made easier to use through conformance to user interface standards. Standards enable positive transfer of training, reduce learning time, and engender user confidence and improved user performance. User interface standards for medical applications, derived from principle and practice, exist but are not widely employed (eg, the Microsoft Health Common User Interface [http://www.mscui.net]).
No one sets out to create an EHR that is difficult to use. Yet most EHR developers are unaware of or do not understand the importance of human-factors design principles and applications of user interface standards. Patient safety is compromised when a programmer innocently selects a drop-down field length that will not display full drug names. Developers who violate principles of human performance run the risk of delivering systems that result in poor user performance, increased error, and low user satisfaction.
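The drop-down example can be made concrete. The sketch below, with an invented field width and an invented drug list, shows a simple automated check a developer could run: flag any display width at which distinct drug names truncate to identical strings and become indistinguishable to the prescriber.

```python
# Hypothetical sketch: detecting a drop-down display width that truncates
# distinct drug names into identical entries. The field width and the drug
# list are invented for illustration.

def truncation_collisions(names, field_width):
    """Return groups of distinct names that become identical when truncated."""
    seen = {}
    for name in names:
        seen.setdefault(name[:field_width], []).append(name)
    return [group for group in seen.values() if len(group) > 1]

drugs = ["hydroxyzine", "hydralazine", "hydrocodone", "methotrexate"]

# A 5-character field collapses look-alike names beginning with "hydro".
print(truncation_collisions(drugs, 5))
# A field wide enough for the full names produces no collisions.
print(truncation_collisions(drugs, 12))
```

A check like this is trivial to write, which underscores the editorial's point: the hazard arises not from technical difficulty but from developers not knowing that the human-factors question needs to be asked.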
EHR vendors are often caught up in the kitchen-sink mentality: the more features an EHR application has, the better. Certification and rating systems that focus only on functionality encourage this. Lately, the spotlight has shifted to usability. The Agency for Healthcare Research and Quality has sponsored research by recognized usability experts.9 The National Institute of Standards and Technology and the Office of the National Coordinator for Health Information Technology are also moving to develop a usability framework. This trend should continue.
The article by Corrao et al3 demonstrates that measuring usability can provide insights that will facilitate incremental improvements; few would dispute this. However, an increasing emphasis should be placed on making applications more usable and therefore safer from the outset. This is particularly critical for oncology EHRs, because a majority of these products are used in chemotherapy prescribing and delivery, a process that is associated with high potential for patient harm if safe practices are not followed. Human performance with systems is not arbitrary; high usability can be engineered into the product from the outset. Rather than diagnosing what is wrong with EHRs, we could improve the fitness of EHRs by embracing user-centered design processes, human-factors knowledge, and user interface standards.
Although all authors completed the disclosure declaration, the following author(s) indicated a financial or other interest that is relevant to the subject matter under consideration in this article. Certain relationships marked with a “U” are those for which no compensation was received; those relationships marked with a “C” were compensated. For a detailed description of the disclosure categories, or for more information about ASCO's conflict of interest policy, please refer to the Author Disclosure Declaration and the Disclosures of Potential Conflicts of Interest section in Information for Contributors.
Employment or Leadership Position: Robert M. Schumacher, User Centric (C)
Consultant or Advisory Role: Robert M. Schumacher, User Centric (C)
Stock Ownership: Robert M. Schumacher, User Centric
Honoraria: None
Research Funding: None
Expert Testimony: None
Other Remuneration: None